Software Engineer - Data Acquisition / Web Crawling

xAI

Full-time · Posted: Dec 29, 2025

Job Description

About xAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important, and all engineers are expected to have strong communication skills, sharing knowledge with their teammates concisely and accurately.

About the Role

Join a cutting-edge Data Acquisition team at xAI, where you'll power the future of AI by building world-class systems to collect and process hundreds of petabytes of data across diverse modalities — web, code, images, audio, video, and beyond. As a Software Engineer specializing in Data Acquisition and Web Crawling, you'll architect and operate large-scale distributed systems that fuel groundbreaking models like Grok 3 and its successors, delivering the high-quality data that drives xAI's mission to understand the universe.

You'll work closely with pre-training, reasoning, multimodal, and other teams to meet their unique data needs, collaborating with engineers to define precise requirements and deploy large-scale classifiers for filtering and categorizing vast datasets. This is your opportunity to tackle complex, petabyte-scale challenges, pushing the boundaries of data engineering to create the foundation for the world's most advanced AI systems.

This role is for hands-on engineers who thrive on solving tough problems, working in a flat, fast-paced environment where initiative and excellence shape leadership. If you're passionate about building robust, high-throughput data pipelines and want to directly impact the evolution of transformative AI, this is your chance to shine.

What You'll Do

  • Build high-throughput data processing systems that manage hundreds of petabytes to exabytes of data.
  • Design and operate large-scale distributed systems and pipelines that process hundreds of thousands to millions of operations per second.
  • Manage workloads across large cloud compute clusters.
  • Pre-process datasets for AI training (a rough sketch follows this list).
  • Build and operate large-scale crawlers, gathering and communicating requirements clearly and concisely.
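
A rough, hedged illustration of the pre-processing work referenced above: the sketch below uses the Python and Spark stack listed later in this posting, but the paths, the "text" column, and the thresholds are illustrative assumptions, not xAI specifics.

    # Minimal PySpark sketch: filter and deduplicate a text dataset before training.
    # Paths, the "text" column name, and the length threshold are assumptions for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dataset-preprocessing-sketch").getOrCreate()

    # Read raw crawled documents, assumed to be newline-delimited JSON with a "text" field.
    docs = spark.read.json("s3://example-bucket/raw-crawl/")

    cleaned = (
        docs
        .filter(F.col("text").isNotNull())
        .withColumn("text", F.trim(F.col("text")))
        .filter(F.length("text") > 200)                 # drop very short documents
        .withColumn("digest", F.sha2(F.col("text"), 256))
        .dropDuplicates(["digest"])                     # exact-duplicate removal by content hash
        .drop("digest")
    )

    cleaned.write.mode("overwrite").parquet("s3://example-bucket/cleaned-crawl/")

Exact hash-based deduplication of this kind is only a starting point; at web scale, near-duplicate detection (for example, MinHash-based) is commonly layered on top.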

Who You Are

  • Strong engineering skills and a passion for improving different aspects of data and model performance.
  • Strong proficiency in at least one compiled language: Rust, Go, C++, or Java.
  • Experience working on one or more modalities beyond text, with demonstrated exceptional work.
  • Experience building bespoke data processing libraries from scratch.
  • Experience designing and implementing distributed systems in Rust.
  • Up to date with state-of-the-art techniques for preparing AI training data.
  • Experience with performance optimization of large-scale systems is preferred.
  • Experience organizing and meticulously tracking data across multiple clouds, multiple modalities, and many sources.
  • Experience with SQL/NoSQL databases, especially columnar databases, is a plus.
  • Great debugging skills are a must.
  • Deep knowledge of how the internet works, including DNS, the OSI model, crawler architectures, the challenges of operating crawlers, and headless browsers (a minimal illustration follows this list).
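
As a minimal, self-contained sketch of the crawler fundamentals the last bullet alludes to (this is not part of the job description; the user agent, example URL, and timeout are placeholders), a polite single-page fetch in Python might check robots.txt before requesting a page:

    # Minimal polite-fetch sketch: honor robots.txt and identify the crawler.
    # The user agent, example URL, and timeout below are illustrative assumptions.
    import urllib.robotparser
    import urllib.request
    from urllib.parse import urljoin, urlparse

    USER_AGENT = "example-crawler/0.1"

    def fetch_if_allowed(url: str, timeout: float = 10.0) -> bytes | None:
        """Fetch url only if the site's robots.txt permits our user agent."""
        parts = urlparse(url)
        root = f"{parts.scheme}://{parts.netloc}/"

        robots = urllib.robotparser.RobotFileParser()
        robots.set_url(urljoin(root, "robots.txt"))
        robots.read()  # download and parse robots.txt

        if not robots.can_fetch(USER_AGENT, url):
            return None  # disallowed: skip politely

        request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.read()

    if __name__ == "__main__":
        body = fetch_if_allowed("https://example.com/")
        if body is None:
            print("blocked by robots.txt")
        else:
            print(f"fetched {len(body)} bytes")

A production crawler adds much more on top of this: a URL frontier, per-domain rate limiting, retries with backoff, and headless-browser rendering for JavaScript-heavy pages, which is roughly the operating experience the requirements above describe.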

Tech Stack

  • Python
  • Rust
  • Spark
  • Kubernetes

Interview Process

After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:

  1. Coding assessment in a language of your choice.
  2. Systems hands-on: Demonstrate practical skills in a live problem-solving session.
  3. Project deep-dive: Present your past exceptional work to a small audience.
  4. Meet and greet with the wider team.

Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.

Location

The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to be located in or near the Bay Area or to be open to relocation.

Annual Salary Range

$180,000 - $440,000 USD

Benefits

Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short & long-term disability insurance, life insurance, and various other discounts and perks.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.

Skills Required

  • Rust (intermediate)
  • Go (intermediate)
  • C++ (intermediate)
  • Java (intermediate)
  • Python (intermediate)
  • Spark (intermediate)
  • Kubernetes (intermediate)
  • distributed systems (intermediate)
  • SQL/NoSQL databases (intermediate)
  • columnar databases (intermediate)
  • performance optimization (intermediate)
  • debugging (intermediate)
  • DNS (intermediate)
  • OSI model (intermediate)
  • crawler architectures (intermediate)
  • headless browsers (intermediate)

Tags & Categories

Foundation Model