Member of Technical Staff, Inference

xAI

Full-time · Posted: Dec 29, 2025

Job Description

About xAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. The organization is built for people who enjoy challenging themselves and are driven by curiosity. We operate with a flat structure: all employees are expected to be hands-on and to contribute directly to the company’s mission, and leadership is earned by those who show initiative and consistently deliver excellence. A strong work ethic and sharp prioritization are important, as are strong communication skills: every engineer should be able to share knowledge with teammates concisely and accurately.

Tech Stack

  • Python / Rust
  • PyTorch / JAX
  • CUDA / CUTLASS / Triton / NCCL
  • Kubernetes
  • SGLang: the team actively develops SGLang, one of the most popular open-source inference engines, so you will have the opportunity to contribute to open-source projects.

Location

The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to live near the Bay Area or be open to relocation.

Focus

  • Optimizing the latency and throughput of model inference.
  • Building reliable and performant production serving systems to serve billions of users.
  • Accelerating research on scaling test-time compute and rollout in reinforcement learning training.
  • Model-hardware co-design for next-generation architectures.
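As a loose illustration of the first bullet (a toy sketch, not xAI's actual serving stack): dynamic batching groups queued requests so that one model forward pass serves many users at once, trading a small queueing delay for much higher throughput.

```python
from collections import deque

def drain_batches(queue, max_batch_size):
    """Group queued requests into batches of at most max_batch_size.
    Each batch would be served by a single model forward pass,
    raising throughput at the cost of a small per-request delay."""
    batches = []
    while queue:
        take = min(max_batch_size, len(queue))
        batches.append([queue.popleft() for _ in range(take)])
    return batches

# Toy example: five pending requests, batches of up to two.
requests = deque(["r1", "r2", "r3", "r4", "r5"])
print(drain_batches(requests, max_batch_size=2))
# [['r1', 'r2'], ['r3', 'r4'], ['r5']]
```

Production engines refine this with continuous batching, admitting new requests into an in-flight batch between decode steps rather than waiting for the whole batch to finish.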

Ideal Experiences

  • Worked on system optimizations for model serving, such as batching, caching, load balancing, and parallelism.
  • Worked on low-level optimizations for inference, such as GPU kernels and code generation.
  • Worked on algorithmic optimizations for inference, such as quantization, distillation, speculative decoding, and low-precision numerics.
  • Worked on large-scale inference engines or reinforcement learning frameworks.
  • Worked on large-scale, high-concurrency production serving.
  • Worked on testing, benchmarking, and reliability of inference services.
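To make the speculative decoding item concrete, here is a minimal sketch of the greedy acceptance rule (illustrative only, not any particular engine's implementation): a cheap draft model proposes several tokens, the target model scores them in one pass, and the longest prefix the target agrees with is accepted, with the target's own token substituted at the first mismatch.

```python
def verify_draft(draft_tokens, target_argmax):
    """Greedy speculative decoding acceptance: keep the longest prefix
    of draft tokens that the target model would itself have produced;
    at the first disagreement, emit the target's token and stop."""
    accepted = []
    for d, t in zip(draft_tokens, target_argmax):
        if d == t:
            accepted.append(d)   # target agrees: accept the draft token
        else:
            accepted.append(t)   # mismatch: take the target's token, stop
            break
    return accepted

# Toy example: draft guesses 4 tokens, target agrees on the first 2.
print(verify_draft([5, 9, 3, 7], [5, 9, 4, 7]))  # [5, 9, 4]
```

The payoff is that several output tokens can be produced per expensive target-model forward pass whenever the draft model guesses well; sampling-based variants use an accept/reject probability test instead of exact argmax matching.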

Interview Process

After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:

  1. Coding assessment in a language of your choice.
  2. Systems hands-on: Demonstrate practical skills in a live problem-solving session.
  3. Project deep-dive: Present your past exceptional work to a small audience.
  4. Meet and greet with the wider team.

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Annual Salary Range

$180,000 - $440,000 USD

Benefits

Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short & long-term disability insurance, life insurance, and various other discounts and perks.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.


