AIML - Staff ML Infrastructure Engineer, ML Platform & Technology - Pre-training Compute

Apple

Software and Technology Jobs

Full-time · Posted: Oct 13, 2025

Job Description

Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or Apple Store experience we deliver is the result of us making each other’s ideas stronger. That happens because every one of us shares a belief that we can make something wonderful and share it with the world, changing lives for the better. It’s the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you’ll do more than join something — you’ll add something! As an engineer on the ML Compute team, your work will include the duties detailed in the Responsibilities section below.

Locations

  • San Francisco Bay Area, California, United States

Salary

Estimated Salary Range (medium confidence)

50,000,000 – 120,000,000 INR per year

Source: AI-estimated

* This is an estimated range based on market data and may vary with experience and qualifications.

Skills Required

  • large-scale pre-training (intermediate)
  • resiliency (intermediate)
  • efficiency (intermediate)
  • scalability (intermediate)
  • resource optimization (intermediate)
  • distributed training techniques (intermediate)
  • research and implement new patterns and technologies (intermediate)
  • system performance optimization (intermediate)
  • maintainability (intermediate)
  • design (intermediate)
  • optimize execution and performance (intermediate)
  • JAX (intermediate)
  • PyTorch (intermediate)
  • XLA (intermediate)
  • CUDA (intermediate)
  • large distributed systems (intermediate)
  • high-performance networking technologies (intermediate)
  • NCCL (intermediate)
  • GPU collectives (intermediate)
  • TPU interconnect (intermediate)
  • ICI/Fabric (intermediate)
  • MLOps platform (intermediate)
  • pre-training operations (intermediate)
  • Kubernetes (intermediate)
  • fault-tolerant systems (intermediate)
  • lead complex technical projects (intermediate)
  • defining requirements (intermediate)
  • tracking progress (intermediate)
  • collaborate with cross-functional engineers (intermediate)
  • solve large-scale ML training challenges (intermediate)
  • mentor engineers (intermediate)
  • fostering skill growth (intermediate)
  • knowledge sharing (intermediate)
  • collaboration (intermediate)
  • technical excellence (intermediate)
  • innovation (intermediate)

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field
  • 6+ years of hands-on experience building scalable backend systems for training and evaluation of machine learning models
  • Proficiency in relevant programming languages such as Python or Go
  • Strong expertise in distributed systems, reliability and scalability, containerization, and cloud platforms
  • Proficiency with cloud computing infrastructure and tools such as Kubernetes, Ray, and PySpark (a brief illustrative sketch follows this list)
  • Ability to clearly and concisely communicate technical and architectural problems while working with partners to iteratively find solutions
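
For illustration only, not from the posting: a minimal sketch of the kind of Ray usage the qualification above references, fanning a trivial task out across workers. The function name and values are hypothetical.

```python
# Hypothetical sketch: distributing a trivial workload with Ray.
# Assumes Ray is installed (pip install ray); on a real cluster the
# entry point would typically be ray.init(address="auto").
import ray

ray.init()

@ray.remote
def square(x: int) -> int:
    # Stand-in for a real shard of training or evaluation work.
    return x * x

# Launch tasks in parallel and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```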

Preferred Qualifications

  • Advanced degree in Computer Science, Engineering, or a related field
  • Proficiency in working with and debugging accelerators such as GPUs, TPUs, and AWS Trainium
  • Proficiency in ML training and deployment frameworks such as JAX, TensorFlow, PyTorch, TensorRT, and vLLM (see the sketch after this list)
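
Also purely illustrative: a small sketch of the kind of JAX code the preferred qualifications point at, JIT-compiling a tiny computation through XLA and listing the accelerators it would run on. Shapes and names here are assumptions, not anything from the posting.

```python
# Hypothetical sketch: JIT-compiling a small computation with JAX/XLA.
import jax
import jax.numpy as jnp

@jax.jit
def scaled_matmul(a, b):
    # XLA compiles and fuses this for whatever backend is present (CPU/GPU/TPU).
    return jnp.dot(a, b) * 0.5

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(k1, (128, 256))
b = jax.random.normal(k2, (256, 64))

print(jax.devices())               # the accelerators JAX can see
print(scaled_matmul(a, b).shape)   # (128, 64)
```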

Responsibilities

As an engineer on the ML Compute team, your work will include:

  • Drive large-scale pre-training initiatives to support cutting-edge foundation models, focusing on resiliency, efficiency, scalability, and resource optimization.
  • Enhance distributed training techniques for foundation models.
  • Research and implement new patterns and technologies to improve system performance, maintainability, and design.
  • Optimize execution and performance of workloads built with JAX, PyTorch, XLA, and CUDA on large distributed systems.
  • Leverage high-performance networking technologies such as NCCL for GPU collectives and TPU interconnect (ICI/Fabric) for large-scale distributed training (a short illustrative sketch follows this list).
  • Architect a robust MLOps platform to streamline and automate pre-training operations.
  • Operationalize large-scale ML workloads on Kubernetes, ensuring distributed training runs are robust, efficient, and fault-tolerant.
  • Lead complex technical projects, defining requirements and tracking progress with team members.
  • Collaborate with cross-functional engineers to solve large-scale ML training challenges.
  • Mentor engineers in areas of your expertise, fostering skill growth and knowledge sharing.
  • Cultivate a team centered on collaboration, technical excellence, and innovation.
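
As a purely illustrative sketch rather than anything from Apple: one common way an NCCL-backed process group is set up for multi-GPU collectives, assuming launch via torchrun so that RANK, WORLD_SIZE, and LOCAL_RANK are present in the environment.

```python
# Hypothetical sketch: an NCCL all-reduce, the collective primitive that
# data-parallel gradient synchronization in large-scale pre-training builds on.
# Assumed launch: torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist

def main():
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")      # NCCL drives the GPU collectives

    # Toy all-reduce: every rank contributes a tensor, all ranks receive the sum.
    x = torch.ones(4, device="cuda") * dist.get_rank()
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    if dist.get_rank() == 0:
        print("summed across ranks:", x)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, resiliency work of the kind described above layers elastic restarts, checkpoint resume, and topology-aware scheduling on top of this primitive.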

Tags & Categories

Hardware
