
Member of Technical Staff, Applied Inference

xAI

Full-time · Posted: Dec 29, 2025

Job Description

About xAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who enjoy challenging themselves and thrive on curiosity. We operate with a flat organizational structure: all employees are expected to be hands-on and to contribute directly to the company’s mission, and leadership is given to those who show initiative and consistently deliver excellence. A strong work ethic and sharp prioritization skills are important. All engineers are expected to have strong communication skills and to share knowledge concisely and accurately with their teammates.

Tech Stack

  • Kubernetes
  • Buildkite / ArgoCD
  • Prometheus / Grafana / PagerDuty
  • Pulumi / Terraform
  • SGLang: the team actively contributes to SGLang, one of the most popular open-source inference engines, so you will have the opportunity to work on open-source projects.
  • Custom debugging and tracing tools

Focus

  • Architect and implement scalable distributed infrastructure for model serving, such as load balancing, autoscaling, batch scheduling, and global KV-cache systems.
  • Ensure the reliability of inference services, targeting 100% uptime, a 0% error rate, and low tail latency, through proactive monitoring, fault-tolerant designs, and rigorous testing.
  • Create custom tools to trace, replay, and fix issues or crashes across the entire stack, from cluster orchestration to GPU kernels.
  • Benchmark and fine-tune inference engines to deliver optimal performance under diverse production workloads.
  • Develop robust CI/CD infrastructure to enable seamless endpoint deployment, image publishing, feature rollouts, and inference engine updates.

Ideal Experiences

  • Worked on large-scale, high-concurrency production serving.
  • Worked on GPU inference engines.
  • Worked on testing, benchmarking, and the reliability of inference services.
  • Worked on designing and implementing CI/CD infrastructure.

Location

The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to be located in or near the Bay Area, or to be open to relocation.

Interview Process

After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:

  1. Coding assessment in a language of your choice.
  2. Systems hands-on: Demonstrate practical skills in a live problem-solving session.
  3. Project deep-dive: Present your past exceptional work to a small audience.
  4. Meet and greet with the wider team.

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Annual Salary Range

$180,000 - $440,000 USD

Benefits

Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short- and long-term disability insurance, life insurance, and various other discounts and perks.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.



Tags & Categories

Foundation Model
