
AI Infrastructure Engineer, Core Infrastructure

Scale AI

Software and Technology Jobs


Full-time · Posted: Dec 2, 2025

Job Description

As a Software Engineer on the ML Infrastructure team, you will design and build the next generation of foundational systems that power all ML compute at Scale, from model training and evaluation to large-scale inference and experimentation.

Our platform is responsible for orchestrating workloads across heterogeneous compute environments (GPU, CPU, on-prem, and cloud), optimizing for reliability, cost efficiency, and developer velocity.

The ideal candidate has a strong background in distributed systems, scheduling, and platform architecture, and is excited by the challenge of building internal infrastructure used across all ML teams.

You will:

  • Design and maintain fault-tolerant, cost-efficient systems that manage compute allocation, scheduling, and autoscaling across clusters and clouds.
  • Build common abstractions and APIs that unify job submission, telemetry, and observability across serving and training workloads.
  • Develop systems for usage metering, cost attribution, and quota management, enabling transparency and control over compute budgets.
  • Improve reliability and efficiency of large-scale GPU workloads through better scheduling, bin-packing, preemption, and resource sharing.
  • Partner with ML engineers and API teams to identify bottlenecks and define long-term architectural standards.
  • Lead projects end-to-end — from requirements gathering and design to rollout and monitoring — in a cross-functional environment.

Ideally you'd have:

  • 4+ years of experience building large-scale backend or distributed systems.
  • Strong programming skills in Python, Go, or Rust, and familiarity with modern cloud-native architecture.
  • Experience with containers and orchestration tools (Kubernetes, Docker) and Infrastructure as Code (Terraform).
  • Familiarity with schedulers or workload management systems (e.g., Kubernetes controllers, Slurm, Ray, internal job queues).
  • Understanding of observability and reliability practices (metrics, tracing, alerting, SLOs).
  • A track record of improving system efficiency, reliability, or developer velocity in production environments.

Nice-to-haves:

  • Experience with multi-tenant compute platforms or internal PaaS.
  • Knowledge of GPU scheduling, cost modeling, or hybrid cloud orchestration.
  • Familiarity with LLM or ML training workloads, though deep ML expertise is not required.

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You'll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage; retirement benefits; a learning and development stipend; and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.

Please reference the job posting's subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is:
$179,400 - $310,500 USD

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us:

At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. 

We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.

We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Locations

  • San Francisco, CA
  • Seattle, WA
  • New York, NY

Salary

Salary details available upon request

Estimated Salary Range (medium confidence)

$220,000 - $450,000 USD per year

Source: AI estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • distributed systems (intermediate)
  • scheduling (intermediate)
  • platform architecture (intermediate)
  • Python (intermediate)
  • Go (intermediate)
  • Rust (intermediate)
  • cloud-native architecture (intermediate)
  • Kubernetes (intermediate)
  • Docker (intermediate)
  • Terraform (intermediate)
  • schedulers (Kubernetes controllers, Slurm, Ray) (intermediate)
  • observability (metrics, tracing, alerting, SLOs) (intermediate)
  • fault-tolerant systems (intermediate)
  • cost-efficient systems (intermediate)
  • compute allocation (intermediate)
  • autoscaling (intermediate)
  • APIs (intermediate)
  • telemetry (intermediate)
  • usage metering (intermediate)
  • cost attribution (intermediate)
  • quota management (intermediate)
  • GPU workloads (intermediate)
  • bin-packing (intermediate)
  • preemption (intermediate)
  • resource sharing (intermediate)


Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • A learning and development stipend
  • Generous PTO
  • A commuter stipend


Tags & Categories

Research

