
Researcher, Alignment - San Francisco, California

OpenAI

Full-time | Posted: Feb 10, 2026

Job Description

Role Overview

Join OpenAI's Alignment team as a Researcher, Alignment in San Francisco, California, and become a pivotal force in ensuring AI systems are safe, trustworthy, and aligned with human values. This senior-level role is perfect for PhD-level experts in AI safety, machine learning, and cognitive science who thrive in fast-paced, collaborative environments. At OpenAI, you'll tackle the most pressing challenges in AI alignment, developing methodologies that allow AI to robustly follow human intent—even in adversarial or high-stakes scenarios.

The Alignment team focuses on two core pillars: (1) scaling alignment techniques alongside growing AI capabilities, and (2) centering humans through intuitive interfaces for intent expression and oversight. As capabilities advance, your work will ensure our models remain reliable in complex real-world deployments. This hybrid position (3 days/week in-office) offers relocation assistance and positions you at the forefront of AI research that shapes humanity's future.

Ideal candidates are team players with strong engineering skills in PyTorch, experience in scalable oversight, and a passion for trustworthy AI. If you're ready to design experiments measuring subjective alignment risks, build robustness tools, and pioneer human-AI paradigms, this Researcher, Alignment job at OpenAI is your chance to make history.

Key Responsibilities

As a Researcher, Alignment at OpenAI, your contributions will directly impact AI safety. Here's what you'll do daily:

  • Develop and evaluate alignment capabilities for subjective, context-dependent challenges that are hard to quantify.
  • Design rigorous evaluations to measure AI risks and alignment with human values.
  • Build specialized tools to test model robustness across diverse scenarios.
  • Engineer experiments exploring how alignment scales with compute, data volume, context/action lengths, and adversary resources.
  • Design new human-AI interaction paradigms and scalable oversight methods for complex supervision.
  • Train models to be well calibrated on correctness and risk predictions (a measurement sketch follows below).
  • Pioneer novel AI-assisted approaches in alignment research.
  • Implement scalable solutions integrating human oversight into advanced AI decisions.
  • Analyze how alignment holds up as AI capabilities advance.
  • Prototype interfaces enabling humans to express intent effectively.
  • Quantify deployment risks in high-stakes environments.
  • Collaborate cross-functionally to iterate on cutting-edge methodologies.
  • Conduct adversarial testing to ensure model reliability.

These responsibilities demand versatility, from hands-on coding to theoretical research, in a dynamic San Francisco-based team.
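
To ground the calibration item above (the sketch promised in that bullet), here is a minimal, illustrative computation of expected calibration error (ECE), a standard calibration metric. This is not OpenAI's tooling; the confidence and correctness tensors are synthetic stand-ins.

```python
# Illustrative only: a minimal expected-calibration-error (ECE) check.
# `confidences` holds a model's self-reported probabilities of being correct;
# `correct` holds the actual 0/1 outcomes. Both are hypothetical inputs.
import torch

def expected_calibration_error(confidences: torch.Tensor,
                               correct: torch.Tensor,
                               n_bins: int = 10) -> float:
    """Mean |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples landing in each bin."""
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        weight = in_bin.float().mean()
        if weight > 0:
            gap = correct[in_bin].float().mean() - confidences[in_bin].mean()
            ece += weight * gap.abs()
    return ece.item()

# Synthetic demo: outcomes are drawn to match the stated confidences,
# so a near-zero score is expected.
conf = torch.rand(10_000)
outcome = (torch.rand(10_000) < conf).long()
print(expected_calibration_error(conf, outcome))  # ~0.0
```

A well-calibrated model's stated confidence tracks its empirical accuracy, which is why the synthetic demo above scores near zero.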

Qualifications

To excel as a Researcher, Alignment at OpenAI, bring these qualifications:

  • PhD or equivalent in computer science, computational science, data science, cognitive science, or allied fields.
  • Expert engineering in large-scale ML systems, especially PyTorch optimization.
  • Deep knowledge of alignment algorithms, techniques, and the underlying science.
  • Proficiency with data visualization and data collection tools (e.g., TypeScript, Python).
  • Ability to thrive in fast-paced, collaborative research settings.
  • Passion for safe, reliable AI in high-stakes contexts.
  • Experience designing scalable oversight and human-AI interaction paradigms.
  • Track record in risk evaluation and model calibration.
  • Strong experiment design for AI scaling laws (see the fitting sketch after this list).
  • Team-oriented mindset, willing to take on diverse tasks.
  • Familiarity with adversarial robustness testing.

Senior-level experience preferred; OpenAI seeks innovators ready to redefine AI safety.
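
As a flavor of the scaling-law experiment design listed above (the fitting sketch referenced in that bullet), the code below fits a power law loss(C) = a * C^(-b) to synthetic (compute, loss) points; taking logs turns the fit into ordinary least squares. All numbers are invented for illustration and are not OpenAI data.

```python
# Illustrative only: fit a hypothetical power law loss(C) = a * C**(-b)
# to synthetic (compute, loss) measurements via least squares in log-log space.
import math
import torch

compute = torch.tensor([1e18, 1e19, 1e20, 1e21, 1e22])  # hypothetical FLOP budgets
loss = 3.0 * compute ** (-0.05)                          # synthetic "measurements"

# log(loss) = log(a) - b * log(C) is linear in log(C).
X = torch.stack([torch.log(compute), torch.ones_like(compute)], dim=1)
y = torch.log(loss).unsqueeze(1)
slope, intercept = torch.linalg.lstsq(X, y).solution.squeeze().tolist()
print(f"a = {math.exp(intercept):.2f}, b = {-slope:.3f}")  # recovers a = 3.00, b = 0.050
```

The same log-log recipe extends to the other axes the posting names (data volume, context length, adversary resources), with one fitted exponent per axis.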

Salary & Benefits

OpenAI offers competitive compensation for Researcher, Alignment roles, estimated at $250,000–$450,000 USD per year (including base, bonus, and equity), reflecting San Francisco market rates for top AI talent. Total compensation varies by experience.

Benefits include:

  • Equity in a leading AI firm.
  • Full health/dental/vision coverage.
  • Hybrid SF model + relocation aid.
  • Unlimited PTO and flexible policy.
  • Generous parental leave.
  • 401(k) matching.
  • Professional growth stipend.
  • Wellness programs, gym access.
  • Catered meals and events.
  • Commuter subsidies.
  • Stock purchase options.
  • Impact-driven culture.

Why Join OpenAI?

OpenAI is the vanguard of AGI development, committed to benefiting humanity. Our Alignment team leads in solving AI's toughest safety puzzles. Work with world-class researchers in San Francisco, influencing models deployed globally. Enjoy hybrid flexibility, top-tier perks, and the mission to make AI trustworthy. This Researcher, Alignment role offers unparalleled impact—join us to align the future.

Candidates praise OpenAI's innovative culture, deep resources, and real-world impact. As the company grows, the reach of your work grows with it.

How to Apply

Ready to apply for Researcher, Alignment at OpenAI? Submit your resume, PhD details, and alignment research portfolio via our careers page. Highlight PyTorch projects, scaling experiments, and safety contributions. Interviews include technical deep-dives and team-fit conversations. OpenAI is an equal opportunity employer; diversity drives innovation. Apply now for this hybrid San Francisco role!

Locations

  • San Francisco, California, United States

Salary

Estimated Salary Range (high confidence)

$262,500 – $495,000 USD per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • AI Alignment (intermediate)
  • Machine Learning (intermediate)
  • PyTorch (intermediate)
  • Scalable Oversight (intermediate)
  • Human-AI Interaction (intermediate)
  • Risk Evaluation (intermediate)
  • Model Robustness (intermediate)
  • Data Visualization (intermediate)
  • TypeScript (intermediate)
  • Python (intermediate)
  • Large-Scale ML Systems (intermediate)
  • Computational Science (intermediate)
  • Cognitive Science (intermediate)
  • Experiment Design (intermediate)
  • Adversarial Testing (intermediate)
  • Model Calibration (intermediate)
  • Deep Learning (intermediate)
  • Research Engineering (intermediate)

Required Qualifications

  • PhD or equivalent experience in computer science, computational science, data science, cognitive science, or related fields
  • Strong engineering skills in designing and optimizing large-scale machine learning systems (e.g., PyTorch)
  • Deep understanding of the science behind alignment algorithms and techniques
  • Proficiency in developing data visualization or data collection interfaces (e.g., TypeScript, Python)
  • Experience in fast-paced, collaborative, cutting-edge research environments
  • Proven track record in AI safety and trustworthiness research
  • Ability to design experiments for scaling alignment with compute, data, and context length
  • Skills in building tools for model robustness testing
  • Expertise in human-AI interaction paradigms
  • Demonstrated ability to train models for calibration on correctness and risk
  • Team player willing to handle diverse tasks to advance team goals
  • Passion for developing safe, reliable AI in high-stakes scenarios

Responsibilities

  • Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure
  • Design evaluations to reliably measure risks and alignment with human intent and values
  • Build tools and evaluations to study and test model robustness in different situations
  • Design experiments to understand how alignment scales as a function of compute, data, context length, action length, and adversary resources
  • Design and evaluate new human-AI interaction paradigms and scalable oversight methods
  • Train models to be calibrated on correctness and risk assessment
  • Design novel approaches for using AI in alignment research
  • Implement scalable solutions for AI alignment as capabilities grow
  • Integrate human oversight into AI decision-making processes
  • Conduct research on harnessing improved AI capabilities into alignment techniques
  • Create interfaces for humans to express intent and supervise AIs in complex situations
  • Analyze and quantify AI risks in real-world deployment scenarios
  • Collaborate with cross-functional teams to advance alignment methodologies
  • Prototype and iterate on alignment experiments in adversarial environments (a minimal robustness probe is sketched below)
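
For the adversarial-environment item above (the probe referenced in that bullet), one common baseline is a single gradient-sign perturbation in the style of FGSM. The sketch below applies it to a toy linear classifier; the model, input, label, and the 0.05 step size are all hypothetical stand-ins, not OpenAI's evaluation stack.

```python
# Illustrative only: a one-step gradient-sign (FGSM-style) robustness probe
# against a toy linear classifier with random input.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)             # hypothetical stand-in classifier
x = torch.rand(1, 784, requires_grad=True)   # hypothetical input in [0, 1]
label = torch.tensor([3])                    # hypothetical ground-truth class

loss = F.cross_entropy(model(x), label)
loss.backward()                              # gradient of the loss w.r.t. the input
x_adv = (x + 0.05 * x.grad.sign()).clamp(0.0, 1.0).detach()

# A robust model keeps its prediction under the small perturbation.
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

Real robustness testing for language models would perturb prompts and contexts rather than pixel-style inputs; the toy only shows the probe-and-compare pattern.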

Benefits

  • Competitive salary with equity in a high-growth AI company
  • Comprehensive health, dental, and vision insurance
  • Hybrid work model: 3 days in office per week in San Francisco
  • Relocation assistance for new employees
  • Generous paid time off and flexible vacation policy
  • Parental leave and family planning benefits
  • 401(k) matching and retirement savings plans
  • Professional development stipend for conferences and courses
  • On-site gym, wellness programs, and mental health support
  • Catered meals, snacks, and team-building events
  • Cutting-edge research environment with top talent
  • Impactful work on AI safety benefiting humanity
  • Stock options and employee stock purchase program
  • Commuter benefits and subsidized public transit

Tags & Categories

OpenAI alignment researcher jobs, AI alignment careers San Francisco, Research engineer alignment OpenAI, AI safety research scientist, Scalable oversight jobs OpenAI, Human AI interaction researcher, PyTorch alignment engineer, AI risk evaluation specialist, Model robustness researcher SF, PhD AI alignment jobs, OpenAI researcher alignment salary, AI scaling laws research, Adversarial AI testing careers, Cognitive science AI jobs, San Francisco AI safety roles, Hybrid AI research positions, OpenAI alignment team apply, Trustworthy AI researcher, High stakes AI alignment, OpenAI PhD researcher jobs, Machine learning alignment SF, AI calibration training jobs, Research
