
Senior GenAI Algorithms Engineer — Model Optimizations for Inference

NVIDIA

Software and Technology Jobs


Full-time · Posted: Sep 22, 2025

Job Description

NVIDIA is at the forefront of the generative AI revolution! The Algorithmic Model Optimization Team focuses on optimizing generative AI models, such as large language models (LLMs) and diffusion models, for maximal inference efficiency, using techniques ranging from quantization, speculative decoding, sparsity, distillation, and pruning to neural architecture search, along with streamlined deployment strategies built on open-source inference frameworks. We are seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models such as LLMs, VLMs, multimodal models, and diffusion models. In this role, you will design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA's latest hardware platforms. The focus is on ease of use, compute and memory efficiency, and achieving the best accuracy–performance tradeoffs through software–hardware co-design.

Your work will span multiple layers of the AI software stack, from algorithm design to integration, within NVIDIA's ecosystem (TensorRT Model Optimizer, NeMo/Megatron, TensorRT-LLM) and open-source frameworks (PyTorch, Hugging Face, vLLM, SGLang). You may also dive deeper into GPU-level optimization, including custom kernel development with CUDA and Triton. This role offers a unique opportunity to work at the intersection of research and engineering, pushing the boundaries of large-scale AI optimization. We are looking for passionate engineers with strong foundations in both machine learning and software systems/architecture who are eager to make a broad impact across the AI stack.

What you'll be doing:

  • Design and build modular, scalable model optimization software platforms that deliver exceptional user experiences while supporting diverse AI models and optimization techniques to drive widespread adoption.
  • Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack, e.g., TensorRT Model Optimizer, NeMo/Megatron, and TensorRT-LLM.
  • Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
  • Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end-to-end workflows and balanced accuracy–performance trade-offs.
  • Conduct deep GPU kernel-level profiling to identify and capitalize on hardware and software optimization opportunities (e.g., efficient attention kernels, KV cache optimization, parallelism strategies).
  • Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.

What we need to see:

  • Master's, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
  • 5+ years of relevant work or research experience in deep learning.
  • Strong software design skills, including debugging, performance analysis, and test development.
  • Proficiency in Python, PyTorch, and modern ML frameworks/tools.
  • Proven foundation in algorithms and programming fundamentals.
  • Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.

Ways to stand out from the crowd:

  • Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks.
  • Hands-on experience training or fine-tuning generative AI models on large-scale GPU clusters.
  • Proficiency in GPU architectures and compilation stacks, with the ability to analyze and debug end-to-end performance.
  • Familiarity with NVIDIA's deep learning SDKs (e.g., TensorRT).
  • Experience developing high-performance GPU kernels for machine learning workloads using CUDA, CUTLASS, or Triton.

Increasingly known as "the AI computing company" and widely considered one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. Are you creative, motivated, and up for a challenge? If so, we want to hear from you! Come join our model optimization group, where you can help build real-time, cost-effective computing platforms that drive our success in this exciting and rapidly growing field.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3 and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until September 26, 2025.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
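For context only (not part of the posting): a minimal sketch of one technique named above, post-training dynamic quantization for inference, using plain PyTorch. The toy MLP, layer sizes, and benchmark settings are illustrative assumptions, not the team's actual workflow or tooling.

```python
import time

import torch
import torch.nn as nn

# A small stand-in MLP; in practice the target would be an LLM or diffusion model.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(),
    nn.Linear(4096, 4096), nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

# Swap each nn.Linear for a dynamically quantized version: int8 weights,
# with activations quantized on the fly at inference time (CPU execution).
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(32, 1024)

def bench(m, iters=50):
    """Average forward-pass latency in milliseconds."""
    with torch.inference_mode():
        m(x)  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - start) / iters * 1e3

print(f"fp32 model: {bench(model):.2f} ms/forward")
print(f"int8 model: {bench(quantized):.2f} ms/forward")
```

Production pipelines such as TensorRT Model Optimizer or TensorRT-LLM go well beyond this (calibration, FP8/INT4 formats, fused kernels), but the sketch shows the basic accuracy-versus-latency trade-off the role centers on.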

Locations

  • Santa Clara, CA, US


Skills Required

  • Generative AI – intermediate
  • Large Language Models (LLM) – intermediate
  • Diffusion Models – intermediate
  • Quantization – intermediate
  • Speculative Decoding – intermediate
  • Sparsity – intermediate
  • Distillation – intermediate
  • Pruning – intermediate
  • Neural Architecture Search – intermediate
  • Model Optimization – intermediate
  • Inference Efficiency – intermediate
  • Deployment Strategies – intermediate
  • Vision-Language Models (VLM) – intermediate
  • Multimodal Models – intermediate
  • Deep Learning Algorithms – intermediate
  • Software-Hardware Co-Design – intermediate
  • TensorRT Model Optimizer – intermediate
  • NeMo – intermediate
  • Megatron – intermediate
  • TensorRT-LLM – intermediate
  • PyTorch – intermediate
  • Hugging Face – intermediate
  • vLLM – intermediate
  • SGLang – intermediate
  • CUDA – intermediate
  • Triton – intermediate
  • Custom Kernel Development – intermediate
  • GPU Optimization – intermediate
  • Machine Learning – intermediate
  • Software Systems – intermediate
  • Software Architecture – intermediate
  • Algorithm Design – intermediate


Tags & Categories

United States of America


