
Software Engineer, AI Safety at OpenAI - San Francisco, California

OpenAI

Full-time | Posted: Feb 10, 2026

Job Description

Software Engineer, AI Safety at OpenAI - San Francisco

Join OpenAI's Safety Systems team as a Software Engineer, AI Safety in San Francisco, California. This senior-level role focuses on building robust systems to ensure AI models are safe, reliable, and aligned with human values. With the rapid advancement of AGI, OpenAI is at the forefront of responsible AI deployment, and your expertise will directly contribute to making AI beneficial for all humanity.

Role Overview

The Safety Systems team at OpenAI is committed to tackling emerging safety challenges in AI. Drawing from years of practical alignment work, we develop fundamental solutions for safe deployment of advanced models and future AGI. As a Software Engineer in AI Safety, you'll design anti-abuse infrastructure, content moderation tools, and monitoring systems. You'll collaborate across engineering and research to harness AI's potential responsibly.

This role demands strong production engineering skills in high-scale environments. You'll work on our tech stack including Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. Ideal candidates thrive in fast-paced settings, debugging live issues and scaling services amid hyper-growth. If you're passionate about 'now-term' AI safety, fraud prevention, and model alignment, this is your opportunity to shape the future of trustworthy AI.

OpenAI's mission is to ensure general-purpose AI benefits humanity. We prioritize safety at the core, valuing diverse perspectives to build inclusive systems. Located in San Francisco, this on-site role offers unparalleled impact in one of the world's leading AI organizations.

Key Responsibilities

In this pivotal role, you'll:

  • Architect, build, and maintain anti-abuse and content moderation infrastructure to safeguard users from unwanted behavior.
  • Partner with engineers and researchers to deploy AI techniques measuring and enhancing model alignment to human values.
  • Diagnose and remediate live platform incidents, creating tools that address root causes.
  • Design scalable systems for fraud detection, content safety, and risk reduction across the platform.
  • Deploy ML classifiers and integrate novel safety models into production workflows.
  • Assess risks for new features/products, devising innovative mitigations that preserve user experience.
  • Scale production services using Kubernetes and Kafka in rapidly growing environments.
  • Drive self-directed projects from concept to deployment with minimal oversight.
  • Optimize infrastructure with Terraform and Azure for reliability and efficiency.
  • Collaborate on safety research, translating insights into deployable engineering solutions.
  • Monitor AI model performance, identifying alignment drifts and implementing fixes.
  • Respond to production emergencies, restoring systems swiftly to minimize downtime.
  • Contribute to OpenAI's safety tooling ecosystem, enabling safer AGI development.

These responsibilities position you at the intersection of software engineering, AI safety, and systems reliability, ensuring OpenAI's platforms remain trustworthy.

Qualifications

To excel, you should have:

  • Proven experience in production services at high-growth companies.
  • Expertise debugging and resolving live system issues rapidly.
  • Background in content safety, fraud, or abuse systems, or a keen interest in AI safety.
  • Strong programming in Python, C++, Rust, or Go.
  • Nuanced understanding of AI capability-risk trade-offs.
  • Ability to devise innovative risk mitigations without compromising the user experience.
  • Pragmatic approach to engineering decisions.
  • Excellent project management and self-direction.
  • Experience with ML model deployment or eagerness to learn.
  • Familiarity with our stack or quick learning ability.

Senior-level experience (5+ years) in scalable systems is preferred. We're seeking pragmatic builders excited by AI's real-world impact.

Salary & Benefits

Compensation for Software Engineer, AI Safety roles at OpenAI in San Francisco ranges from $220,000 to $380,000 USD yearly, including base salary, equity, and bonuses. Exact offers depend on experience and skills.

Benefits include comprehensive health coverage, 401(k) matching, unlimited PTO, professional development stipends, mental health support, parental leave, fitness reimbursements, commuter benefits, and more. Enjoy cutting-edge hardware, meal credits, and opportunities for global impact.

Why Join OpenAI?

OpenAI leads AI innovation with products like ChatGPT, pushing boundaries while prioritizing safety. Work with top talent on mission-critical safety systems that protect billions. Our San Francisco HQ fosters collaboration in a vibrant tech ecosystem. You'll gain exposure to state-of-the-art AI research, scale massive systems, and contribute to humanity's future. With a culture valuing diverse voices, pragmatic engineering, and bold impact, OpenAI offers unmatched growth for AI safety engineers.

Read more about our safety approach.

How to Apply

Ready to advance AI safety? Submit your resume, GitHub/portfolio, and a note on why you're passionate about OpenAI's mission. Highlight relevant projects in safety, scaling, or ML. We review applications on a rolling basis—apply now to join our Safety Systems team in San Francisco!

Locations

  • San Francisco, California, United States

Salary

Estimated Salary Range (high confidence)

$231,000 - $418,000 USD / year

Source: AI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Python Programming (intermediate)
  • Kubernetes Orchestration (intermediate)
  • Terraform Infrastructure (intermediate)
  • Azure Cloud Computing (intermediate)
  • Postgres Database Management (intermediate)
  • Kafka Streaming (intermediate)
  • Machine Learning Models (intermediate)
  • Content Moderation Systems (intermediate)
  • Anti-Abuse Infrastructure (intermediate)
  • AI Safety Engineering (intermediate)
  • Rust Development (intermediate)
  • Go Programming (intermediate)
  • C++ Optimization (intermediate)
  • Production Systems Scaling (intermediate)
  • Incident Response Debugging (intermediate)
  • ML Infrastructure Deployment (intermediate)
  • Risk Assessment Analysis (intermediate)
  • Project Management Tools (intermediate)
  • Classifier Deployment (intermediate)
  • System Reliability Engineering (intermediate)

Required Qualifications

  • Experience building and running production services in high-growth, rapidly scaling environments
  • Proven ability to debug live issues and restore systems quickly under pressure
  • Hands-on work with content safety, fraud detection, abuse prevention, or strong motivation for AI safety
  • Proficiency in Python or modern languages like C++, Rust, or Go, with quick adaptability to Python
  • Deep understanding of trade-offs between AI capabilities and risks for safe deployments
  • Skill in critically assessing product/feature risks and devising innovative mitigation solutions
  • Pragmatic engineering mindset: knowing when to implement quick fixes vs. robust long-term solutions
  • Strong project management skills with self-directed execution and minimal guidance needed
  • Experience deploying classifiers or machine learning models, or eagerness to master modern ML infra
  • Familiarity with infrastructure tools like Terraform, Kubernetes, Azure, Postgres, and Kafka
  • Ability to collaborate with cross-functional teams including researchers and engineers
  • Track record of addressing root causes of system failures through new tooling and infrastructure

Responsibilities

  • Architect scalable anti-abuse and content moderation infrastructure to protect users and platform
  • Build and maintain systems detecting unwanted behavior, fraud, and safety violations in real-time
  • Collaborate with engineers and researchers to apply AI techniques for measuring model alignment
  • Monitor and improve AI models' alignment to human values using industry-standard and novel methods
  • Diagnose active platform incidents rapidly and implement remediation strategies
  • Develop new tooling and infrastructure to eliminate root causes of system failures
  • Design systems promoting user safety and reducing risks across OpenAI's AI deployment platform
  • Deploy and optimize machine learning classifiers for safety and moderation tasks
  • Scale production services to handle high-growth traffic and reliability demands
  • Assess risks of new AI products/features and create mitigation plans without impacting UX
  • Drive end-to-end projects from ideation to deployment with strong project management
  • Integrate safety systems with core engineering stack including Kubernetes and Kafka
  • Contribute to OpenAI's safety research by engineering practical alignment solutions
  • Respond to live production issues, restoring services and preventing future occurrences

Benefits

  • Competitive salary with equity in a high-growth AI company
  • Comprehensive health, dental, and vision insurance coverage
  • 401(k) retirement plan with generous company matching
  • Unlimited PTO policy to support work-life balance
  • Remote-friendly work options with flexible hours
  • Professional development stipend for courses and conferences
  • Mental health support through dedicated programs and counseling
  • Parental leave policies including fertility assistance
  • Fitness and wellness reimbursements for gym memberships
  • Commuter benefits and subsidized public transit
  • Team offsites, social events, and company retreats
  • Cutting-edge hardware including high-end laptops and monitors
  • Generous meal credits and catered lunches in SF office
  • Visa sponsorship for international talent relocation
  • Impactful work on AGI safety benefiting humanity

Tags & Categories

software engineer ai safety openai, ai safety jobs san francisco, openai careers software engineer, ai alignment engineer openai, content moderation engineer jobs, anti-abuse systems engineer, production engineer ai safety, openai safety systems team, kubernetes python jobs openai, senior ai safety engineer, agi safety careers san francisco, ml infra safety openai, fraud detection ai engineer, risk mitigation ai jobs, openai software engineer salary, trust and safety engineer openai, deploy ml models safety, incident response ai platform, scalable ai infrastructure jobs, human ai alignment roles, openai san francisco jobs, ai robustness engineer, Safety Systems
