
AI Infrastructure Engineer, Model Serving Platform

Scale AI

Software and Technology Jobs

Full-time · Posted: Nov 18, 2025

Job Description

As a Software Engineer on the ML Infrastructure team, you will design and build platforms for scalable, reliable, and efficient serving of LLMs. Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.

The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.

You will:

  • Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.
  • Build an internal platform to empower LLM capability discovery.
  • Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.
  • Conduct architecture and design reviews to uphold best practices in system design and scalability.
  • Develop monitoring and observability solutions to ensure system health and performance.
  • Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment. 
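Several of the responsibilities above center on streaming model output to clients. As a rough illustration only (the frame schema below follows the common OpenAI-style Server-Sent Events convention, not any specific detail of Scale's platform), token streaming typically means wrapping each generated token in an SSE frame as it arrives, so clients can render partial output immediately:

```python
import json
from typing import Iterable, Iterator

def sse_events(tokens: Iterable[str]) -> Iterator[str]:
    # Wrap each generated token in a Server-Sent Events frame,
    # the wire format commonly used for token streaming.
    for token in tokens:
        yield f"data: {json.dumps({'token': token})}\n\n"
    # Conventional end-of-stream sentinel.
    yield "data: [DONE]\n\n"

# A client consuming this stream sees one frame per token:
frames = list(sse_events(["Hel", "lo"]))
# frames[0] -> 'data: {"token": "Hel"}\n\n'
# frames[-1] -> 'data: [DONE]\n\n'
```

The key design property is that the server flushes each frame as soon as the token is generated, rather than buffering the full completion.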

Ideally you'd have:

  • 4+ years of experience building large-scale, high-performance backend systems.
  • Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).
  • Experience with LLM serving and routing fundamentals (e.g., rate limiting, token streaming, load balancing, budgets).
  • Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.
  • Experience with containers and orchestration tools (e.g., Docker, Kubernetes).
  • Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).
  • Proven ability to solve complex problems and work independently in fast-moving environments.
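The "rate limiting … budgets" fundamentals listed above can be illustrated with a minimal token-bucket limiter. This is a generic sketch, not a description of Scale's platform; the injectable clock is a testing convenience:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: a budget of `capacity` tokens
    refills at `rate` tokens per second, and each request spends `cost`.
    The clock is injectable so behavior can be tested deterministically."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Deterministic demo with a fake clock:
t = [0.0]
bucket = TokenBucket(rate=5.0, capacity=10.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(12)]  # 12 requests at t=0
t[0] = 1.0                                   # one second later: 5 tokens refilled
late = bucket.allow()
# burst -> first 10 allowed, last 2 rejected; late -> True
```

Production serving layers layer per-tenant budgets and load-aware routing on top of this same primitive.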

Nice to haves:

  • Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is:
$179,400–$224,250 USD

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us:

At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. 

We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.

We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Locations

  • San Francisco, CA
  • New York, NY

Salary

Salary details available upon request

Estimated Salary Range (medium confidence)

220,000 - 450,000 USD / yearly

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • ML fundamentals (intermediate)
  • backend system design (intermediate)
  • Python, Go, Rust, C++ (intermediate)
  • LLM serving and routing fundamentals (rate limiting, token streaming, load balancing, budgets) (intermediate)
  • LLM capabilities (reasoning, tool calling, prompt templates) (intermediate)
  • Docker, Kubernetes (intermediate)
  • AWS, GCP (intermediate)
  • Terraform (intermediate)
  • vLLM, SGLang, TensorRT-LLM, text-generation-inference (intermediate)

Required Qualifications

  • 4+ years of experience building large-scale, high-performance backend systems (experience)
  • Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++) (experience)
  • Experience with LLM serving and routing fundamentals (e.g., rate limiting, token streaming, load balancing, budgets) (experience)
  • Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc. (experience)
  • Experience with containers and orchestration tools (e.g., Docker, Kubernetes) (experience)
  • Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform) (experience)
  • Proven ability to solve complex problems and work independently in fast-moving environments (experience)

Preferred Qualifications

  • Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference (experience)

Responsibilities

  • Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale
  • Build an internal platform to empower LLM capability discovery
  • Collaborate with researchers and engineers to integrate and optimize models for production and research use cases
  • Conduct architecture and design reviews to uphold best practices in system design and scalability
  • Develop monitoring and observability solutions to ensure system health and performance
  • Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment

Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • Learning and development stipend
  • Generous PTO
  • Commuter stipend (role-dependent)


Tags & Categories

Research

