Software Engineer - AI/ML, AWS Neuron Inference

Amazon

full-time

Posted: September 8, 2025

Number of Vacancies: 1

Job Description

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. This role is for a senior software engineer on the Machine Learning Inference Applications team, responsible for the development and performance optimization of the core building blocks of LLM inference: attention, MLP, quantization, speculative decoding, mixture of experts, and more. The team works side by side with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on Neuron devices across a range of models such as Llama 3.3 70B, Llama 3.1 405B, DBRX, and Mixtral.

Key job responsibilities

Responsibilities of this role include adapting the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models. Working across teams and organizations is key.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise, so you feel empowered to take on more complex tasks in the future.
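For context on the LLM inference building blocks named above, the snippet below is a minimal, illustrative PyTorch sketch of scaled dot-product attention. It is not part of the posting itself; the function name, tensor shapes, and example values are assumptions made purely for illustration.

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # Minimal attention: softmax(Q K^T / sqrt(d)) V.
        # q, k, v are assumed to have shape [batch, heads, seq_len, head_dim].
        scale = q.shape[-1] ** -0.5
        scores = torch.matmul(q, k.transpose(-2, -1)) * scale  # [batch, heads, seq, seq]
        weights = F.softmax(scores, dim=-1)                     # attention probabilities
        return torch.matmul(weights, v)                         # weighted sum of values

    # Illustrative shapes only: batch=1, heads=2, seq_len=4, head_dim=8
    q = torch.randn(1, 2, 4, 8)
    k = torch.randn(1, 2, 4, 8)
    v = torch.randn(1, 2, 4, 8)
    out = scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 2, 4, 8])

Production inference work of the kind described in this role typically builds on optimized variants of this primitive (fused kernels, paged KV caches, quantized weights) rather than this naive formulation.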

Locations

  • Seattle, WA, United States

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

180,000 - 300,000 USD per year

Source: AI estimate

* This is an estimated range based on market data and may vary with experience and qualifications.

Skills Required

  • 3+ years of non-internship professional software development experience (intermediate)
  • 2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) for new and existing systems (intermediate)
  • Programming proficiency in Python or C++ (at least one required) (intermediate)
  • Experience with PyTorch (intermediate)
  • Working knowledge of machine learning and LLM fundamentals, including transformer architecture, training/inference lifecycles, and optimization techniques (intermediate)

Required Qualifications

  • 3+ years of non-internship professional software development experience
  • 2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) for new and existing systems
  • Programming proficiency in Python or C++ (at least one required)
  • Experience with PyTorch
  • Working knowledge of machine learning and LLM fundamentals, including transformer architecture, training/inference lifecycles, and optimization techniques
  • Strong understanding of system performance, memory management, and parallel computing principles

Preferred Qualifications

  • Experience with JAX
  • Experience with debugging, profiling, and implementing software engineering best practices in large-scale systems
  • Expertise with PyTorch, JIT compilation, and AOT tracing
  • Experience with CUDA kernels or equivalent low-level ML kernels
  • Experience with performant kernel development (e.g., CUTLASS, FlashInfer)
  • Experience with inference serving platforms (vLLM, SGLang, TensorRT) in production environments
  • Deep understanding of computer architecture, operating systems, and parallel computing

Compensation

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.

Responsibilities

  • Adapt the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models.
  • Work across teams and organizations to deliver performance and accuracy on Neuron devices.

Tags & Categories

aws.team-annapurna-labs, aws.team-utility-computing, Software Development
