
Principal Software Engineer - Inference as a Service

NVIDIA

Software and Technology Jobs

Full-time | Posted: Aug 20, 2025

Job Description

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are seeking a Principal Software Engineer to join our Software Infrastructure Team in Santa Clara, CA. This team is at the heart of the NVIDIA AI Factory initiative, building and maintaining the core infrastructure that powers our closed- and open-source AI models. In this role, you will be a key leader in designing and developing our Inference as a Service platform, creating the systems that manage GPU resources, ensure service stability, and deliver high-performance, low-latency inference at massive scale.

What you'll be doing:

  • Lead the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service.
  • Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads.
  • Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services.
  • Define and implement APIs for model deployment, monitoring, and management for a seamless user experience.
  • Optimize system performance and latency for various model types, from large language models (LLMs) to computer vision models, ensuring high throughput and responsiveness.
  • Collaborate with engineering teams to integrate deployment, monitoring, and performance telemetry into our CI/CD pipelines.
  • Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services.
  • Drive architectural decisions and best practices for long-term platform evolution and scalability.
  • Contribute to NVIDIA's AI Factory initiative by building a foundational platform that supports model serving needs.

What we need to see:

  • 15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure.
  • BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, another engineering discipline, or a related field (or equivalent experience).
  • Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems.
  • Proven experience with container orchestration technologies such as Kubernetes.
  • A deep understanding of system architecture for high-performance, low-latency API services.
  • Experience in designing, implementing, and optimizing systems for GPU resource management.
  • Familiarity with modern observability tools (e.g., Datadog, Prometheus, Grafana, OpenTelemetry).
  • Demonstrated experience with deployment strategies and CI/CD pipelines.
  • Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.

Ways to stand out from the crowd:

  • Experience with specialized inference serving frameworks.
  • Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space.
  • Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression.
  • Expertise in building platforms that support a wide variety of AI model architectures.
  • Strong understanding of the full lifecycle of an AI model, from training through deployment and serving.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and passionate people on the planet working for us. If you're creative and autonomous, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 248,000 USD - 391,000 USD. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until August 24, 2025.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. Because we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

Locations

  • Santa Clara, CA, US

Salary

248,000 - 391,000 USD / yearly (base salary range, per the job description; also eligible for equity and benefits)

Skills Required

  • Software Engineering (intermediate)
  • Platform Design (intermediate)
  • System Architecture (intermediate)
  • GPU Resource Management (intermediate)
  • Autoscaling (intermediate)
  • Workload Scheduling (intermediate)
  • Load Balancing (intermediate)
  • Rate Limiting (intermediate)
  • API Design (intermediate)
  • Model Deployment (intermediate)
  • System Monitoring (intermediate)
  • Performance Optimization (intermediate)
  • Latency Optimization (intermediate)
  • High-Throughput Systems (intermediate)
  • Large Language Models (LLMs) (intermediate)
  • Computer Vision Models (intermediate)
  • Inference as a Service (intermediate)
  • Leadership (intermediate)
  • Collaboration (intermediate)

Tags & Categories

United States of America


