
Senior Software Engineer, AI Systems - vLLM and MLPerf

NVIDIA

Software and Technology Jobs

Full-time · Posted: Oct 8, 2025

Job Description

We are seeking highly skilled and motivated software engineers to join our vLLM & MLPerf team. You will define and build benchmarks for MLPerf Inference, the industry-leading benchmark suite for system-level inference performance, and you will contribute to vLLM and optimize its performance to the extreme for those benchmarks on NVIDIA's latest GPUs.

What you'll be doing:

  • Design and implement highly efficient inference systems for large-scale deployments of generative AI models.
  • Define inference benchmarking methodologies and build tools that will be embraced across the industry.
  • Develop, profile, debug, and optimize low-level system components and algorithms to improve throughput and latency for the MLPerf Inference benchmarks on the newest NVIDIA GPUs.
  • Productionize inference systems with uncompromised software quality.
  • Collaborate with researchers and engineers to productionize trending model architectures, inference techniques, and quantization methods.
  • Contribute to the design of APIs, abstractions, and UX that make it easier to scale model deployment while maintaining usability and flexibility.
  • Participate in design discussions, code reviews, and technical planning to ensure the product aligns with business goals.
  • Stay up to date with the latest advancements, come up with novel research ideas in inference system-level optimization, and translate those ideas into practical, robust systems. Exploration and academic publication are encouraged.

What we need to see:

  • Bachelor's, Master's, or PhD degree in Computer Science/Engineering, Software Engineering, a related field, or equivalent experience.
  • 5+ years of experience in software development, preferably with Python and C++.
  • Deep understanding of deep learning algorithms, distributed systems, parallel computing, and high-performance computing principles.
  • Hands-on experience with ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).
  • Experience optimizing compute, memory, and communication performance for deployments of large models.
  • Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools.
  • Ability to work closely with both research and engineering teams, translating pioneering research ideas into concrete designs and robust code, as well as coming up with novel research ideas.
  • Excellent problem-solving skills, with the ability to debug sophisticated systems.
  • A passion for building high-impact software that pushes the boundaries of what's possible with large-scale AI.

Ways to stand out from the crowd:

  • Background in building and optimizing LLM inference engines such as vLLM and SGLang.
  • Experience building ML compilers such as Triton or Torch Dynamo/Inductor.
  • Experience working with cloud platforms (e.g., AWS, GCP, or Azure), containerization tools (e.g., Docker), and orchestration infrastructure (e.g., Kubernetes, Slurm).
  • Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
  • Contributions to open-source projects (please provide a list of the GitHub PRs you have submitted).

At NVIDIA, we believe artificial intelligence (AI) will fundamentally transform how people live and work. Our mission is to advance AI research and development to create groundbreaking technologies that enable anyone to harness the power of AI and benefit from its potential. Our team consists of experts in AI, systems, and performance optimization, and our leadership includes world-renowned experts in AI systems who have received multiple academic and industry research awards.

If you've hacked the inner workings of PyTorch, written many CUDA/HIP kernels, developed and optimized inference services or training workloads, built and maintained large-scale Kubernetes clusters, or simply enjoy solving hard problems, feel free to apply!

#LI-Hybrid

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 - 287,500 USD for Level 4 and 224,000 - 356,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until October 12, 2025.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
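To give a flavor of the throughput and latency work described above, here is a minimal, hypothetical sketch that times a batched generation run with vLLM's offline LLM API. The model name, prompt batch, and sampling settings are placeholders chosen only for illustration, and MLPerf Inference itself drives requests through its LoadGen harness rather than a simple timing loop like this.

    # Minimal sketch (assumptions: vLLM installed, a GPU available; the model
    # and prompts below are placeholders, not anything specific to MLPerf).
    import time

    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")                       # small placeholder model
    prompts = ["Summarize why GPUs speed up inference."] * 64  # toy prompt batch
    params = SamplingParams(temperature=0.0, max_tokens=128)

    start = time.perf_counter()
    outputs = llm.generate(prompts, params)                    # batched offline generation
    elapsed = time.perf_counter() - start

    # Rough end-to-end metrics: wall-clock latency for the whole batch and
    # aggregate generated-token throughput.
    gen_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
    print(f"batch latency: {elapsed:.2f} s")
    print(f"throughput:    {gen_tokens / elapsed:.1f} generated tokens/s")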

Locations

  • Santa Clara, CA, US

Salary

Base salary range: 184,000 - 287,500 USD for Level 4 and 224,000 - 356,500 USD for Level 5, plus eligibility for equity and benefits.

Skills Required

  • Python (intermediate)
  • C++ (intermediate)
  • MLPerf Inference (intermediate)
  • vLLM (intermediate)
  • NVIDIA GPUs (intermediate)
  • generative AI models (intermediate)
  • inference benchmarking methodologies (intermediate)
  • profiling (intermediate)
  • debugging (intermediate)
  • low-level system components optimization (intermediate)
  • throughput optimization (intermediate)
  • latency optimization (intermediate)
  • productionizing inference systems (intermediate)
  • quantization methods (intermediate)
  • API design (intermediate)
  • abstractions design (intermediate)
  • UX design (intermediate)
  • code reviews (intermediate)
  • technical planning (intermediate)
  • inference system-level optimization (intermediate)
  • deep learning algorithms (intermediate)
  • distributed systems (intermediate)
  • parallel computing (intermediate)
  • high-performance computing (intermediate)
  • PyTorch (intermediate)
  • SGLang (intermediate)
  • compute optimization (intermediate)
  • memory optimization (intermediate)
  • communication optimization (intermediate)

Tags & Categories

United States of America

