
Solutions Architect, Inference Deployments

NVIDIA

Software and Technology Jobs

Full-time · Posted: Aug 10, 2025

Job Description

We’re forming a team of innovators to roll out and enhance AI inference solutions at scale, showcasing NVIDIA’s GPU technology and Kubernetes. As a Solutions Architect (Inference Focus), you’ll work closely with our engineering, DevOps, and customer success teams to drive enterprise AI adoption. Together, we'll bring generative AI to production!

What you'll be doing:

  • Help customers design, deploy, and maintain scalable, GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) and generative AI workloads.
  • Drive performance tuning with TensorRT/TensorRT-LLM, NVIDIA NIM, and Triton Inference Server to improve GPU utilization and model efficiency.
  • Collaborate with cross-functional teams (engineering, product) and provide technical mentorship to customers implementing AI at scale.
  • Architect zero-downtime deployments, autoscaling (e.g., HPA with custom metrics, or equivalent experience), and integration with cloud-native observability tools (e.g., OpenTelemetry, Prometheus, Grafana).

What we need to see:

  • 5+ years in solutions architecture with a proven track record of moving AI inference from POC to production on Kubernetes.
  • Experience architecting GPU allocation using the NVIDIA GPU Operator and NVIDIA NIM Operator.
  • Ability to troubleshoot sophisticated GPU orchestration, optimize with Multi-Instance GPU (MIG), and ensure efficient utilization in Kubernetes environments.
  • Proficiency with TensorRT-LLM, Triton, and TensorRT for model optimization and serving.
  • Success stories optimizing LLMs for low-latency inference in enterprise environments.
  • BS in CS/Engineering, or equivalent experience.

Ways to stand out from the crowd:

  • Prior experience deploying NVIDIA NIM microservices for multi-model inference.
  • Serverless inference: knowledge of FaaS patterns (e.g., Google Cloud Run, AWS Lambda, NVCF) with NVIDIA GPUs.
  • NVIDIA Certified AI Engineer or similar certification.
  • Active contributions to Kubernetes SIGs or AI inference projects (e.g., KServe, Dynamo, SGLang, or similar).
  • Familiarity with networking concepts that support multi-node inference, such as MPI, LWS, or similar.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until August 14, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
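The Kubernetes patterns this role centers on (GPU/MIG resource requests via the NVIDIA GPU Operator, and HPA autoscaling on custom metrics) can be sketched in a minimal manifest. This is an illustrative example only, not part of the posting: the deployment name, metric name, MIG profile, and target values are all assumptions.

```yaml
# Hypothetical Triton deployment requesting one MIG slice of an A100,
# autoscaled on an assumed custom metric surfaced through a metrics adapter.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton-inference            # assumed name, for illustration
spec:
  replicas: 2
  selector:
    matchLabels: { app: triton-inference }
  template:
    metadata:
      labels: { app: triton-inference }
    spec:
      containers:
        - name: triton
          image: nvcr.io/nvidia/tritonserver:24.05-py3
          resources:
            limits:
              # MIG device resource name exposed by the NVIDIA GPU Operator
              nvidia.com/mig-1g.10gb: 1
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: triton-inference-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: triton-inference
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: avg_queue_latency_ms   # assumed custom metric (e.g., via Prometheus adapter)
        target:
          type: AverageValue
          averageValue: "50"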

Locations

  • Santa Clara, CA, US

Salary

148,000 USD - 235,750 USD base salary / year (as stated in the posting; final salary depends on location, experience, and comparable internal pay), plus eligibility for equity and benefits.

Skills Required

  • NVIDIA GPU technology (intermediate)
  • Kubernetes (intermediate)
  • AI inference solutions (intermediate)
  • GPU-accelerated inference pipelines (intermediate)
  • TensorRT (intermediate)
  • TensorRT-LLM (intermediate)
  • NVIDIA NIM (intermediate)
  • Triton Inference Server (intermediate)
  • performance tuning (intermediate)
  • GPU utilization (intermediate)
  • model efficiency (intermediate)
  • technical mentorship (intermediate)
  • zero-downtime deployments (intermediate)
  • autoscaling (intermediate)
  • HPA (intermediate)
  • custom metrics (intermediate)
  • OpenTelemetry (intermediate)
  • Prometheus (intermediate)
  • Grafana (intermediate)
  • solutions architecture (intermediate)
  • GPU allocation (intermediate)
  • NVIDIA GPU Operator (intermediate)
  • NVIDIA NIM Operator (intermediate)
  • GPU orchestration (intermediate)
  • Multi-Instance GPU (MIG) (intermediate)
  • model optimization (intermediate)
  • model serving (intermediate)
  • low-latency inference (intermediate)
  • LLMs (intermediate)
  • generative AI workloads (intermediate)
  • NVIDIA NIM microservices (intermediate)
  • multi-model inference (intermediate)
  • serverless inference (intermediate)
  • FaaS patterns (intermediate)
  • Google Cloud Run (intermediate)
  • AWS Lambda (intermediate)
  • NVCF (intermediate)
  • NVIDIA Certified AI Engineer (intermediate)
  • Kubernetes SIGs (intermediate)
  • KServe (intermediate)
  • Dynamo (intermediate)
  • SGLang (intermediate)
  • networking (intermediate)

Tags & Categories

United States of America
