Resume and JobRESUME AND JOB
OpenAI logo

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!

OpenAI

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!

full-timePosted: Feb 10, 2026

Job Description

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, CA

Join OpenAI's Safety Systems team as a Researcher, Trustworthy AI and shape the future of safe AGI deployment. This senior-level role in San Francisco offers unparalleled impact on AI safety research.

Role Overview

The Researcher, Trustworthy AI position at OpenAI represents a rare opportunity to work at the intersection of cutting-edge AI research and societal impact. Based in our San Francisco headquarters, you'll join the Safety Systems team, which leads OpenAI's commitment to deploying safe AGI that benefits all of humanity.

The Trustworthy AI team focuses on action-relevant research that bridges nebulous policy challenges with technically tractable solutions. Your work will directly influence model design, public input mechanisms, external assurances, and deployment risk mitigation for OpenAI's most advanced systems.

This role demands exceptional research scientists/engineers who can translate complex societal concerns into measurable technical interventions. With relocation assistance provided, this is your chance to contribute to AGI safety at the world's leading AI organization.

Key Responsibilities

  1. Develop research strategies studying societal impacts of OpenAI models in action-relevant ways that inform model design
  2. Create innovative methods enabling public input into AI model values and alignment processes
  3. Design and execute experiments incorporating diverse societal perspectives into technical AI development
  4. Enhance external AI safety assurances by converting findings into robust, reproducible evaluations
  5. Facilitate rapid de-risking processes for flagship model deployments
  6. Bridge policy problems with technical solutions through interdisciplinary research approaches
  7. Analyze anthropomorphism effects and develop mitigation strategies for AI systems
  8. Build technical frameworks measuring societal readiness for increasingly capable AI
  9. Collaborate across Safety Systems, policy, and engineering teams on full-stack AI challenges
  10. Contribute to external validation layers strengthening independent AI safety checks
  11. Leverage large-scale multimodal datasets for trustworthy AI research
  12. Drive rigor in AI safety evaluations including RLHF, adversarial training, and robustness testing
  13. Publish and present findings advancing the field of trustworthy AI research

Qualifications

Technical Expertise: 3+ years research experience with proficiency in Python/ML frameworks. Deep knowledge of AI safety techniques (RLHF, adversarial training, LLM evaluations, robustness).

Research Excellence: Proven ability working with large-scale AI systems and multimodal datasets. Experience tackling ambiguous, high-impact problems in well-resourced environments.

Interdisciplinary Skills: Background in socio-technical research combining AI capabilities with policy/societal considerations. Passion for translating nebulous challenges into measurable outcomes.

Mission Alignment: Genuine excitement for OpenAI's charter building safe, universally beneficial AGI. Thrives in fast-paced, collaborative research culture focused on real-world AI deployment safety.

Salary & Benefits

Competitive Compensation: Total compensation range of $275,000 - $425,000 USD annually, including base salary, equity, and performance bonuses. Exact compensation determined by experience and qualifications.

Comprehensive Benefits Package:

  • Health, dental, vision insurance with premium coverage
  • Relocation assistance package for SF move
  • Unlimited PTO policy
  • Generous parental leave (16+ weeks)
  • Professional development budget
  • Onsite fitness center and wellness programs
  • Catered meals daily
  • 401(k) with company match
  • Commuter benefits

OpenAI offers one of the most competitive compensation packages in AI research, reflecting our commitment to attracting top global talent.

Why Join OpenAI?

OpenAI leads the world in developing safe AGI that benefits humanity. Our Safety Systems team operates at the forefront of AI safety research, directly influencing model deployments serving millions worldwide.

Unparalleled Impact: Your research directly shapes flagship model releases and industry safety standards.

World-Class Resources: Access to frontier AI models, massive compute clusters, and top researchers across AI domains.

Mission-Driven Culture: Work alongside brilliant minds united by our charter for safe AGI benefiting all humanity.

Career Growth: Rapid professional development through high-impact projects and expert mentorship.

Recent team achievements include pioneering RLHF techniques powering ChatGPT safety, developing novel external assurance frameworks, and establishing public input mechanisms for model alignment.

How to Apply

Ready to advance trustworthy AI at OpenAI? Submit your application including:

  • Resume/CV highlighting relevant research experience
  • Research statement (2-3 pages) on AI safety/societal impact work
  • 2-3 references from recent collaborators
  • Links to relevant publications/projects

Application Process:

  1. Online application review (1-2 weeks)
  2. Technical research interview
  3. Safety research deep-dive
  4. Team fit conversations
  5. Offer & relocation support

OpenAI is committed to diversity and equal opportunity. We encourage applications from all backgrounds passionate about safe AI development.

Apply now to join the mission shaping humanity's AI future!

Keywords: OpenAI AI safety jobs, trustworthy AI researcher, AGI safety careers San Francisco, RLHF research positions, AI policy research OpenAI, AI safety systems engineer.

Locations

  • San Francisco, California, United States

Salary

Estimated Salary Rangehigh confidence

288,750 - 467,500 USD / yearly

Source: ai estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • AI Safety Researchintermediate
  • Reinforcement Learning from Human Feedback (RLHF)intermediate
  • Adversarial Trainingintermediate
  • Model Robustnessintermediate
  • LLM Evaluationsintermediate
  • Python Programmingintermediate
  • Machine Learningintermediate
  • Interdisciplinary Researchintermediate
  • Socio-Technical Analysisintermediate
  • Experimental Designintermediate
  • Multimodal Datasetsintermediate
  • Policy Analysisintermediate
  • Public Input Mechanismsintermediate
  • External AI Assurancesintermediate
  • AGI Safety Strategiesintermediate
  • Large-Scale AI Systemsintermediate
  • Data Analysisintermediate
  • Statistical Modelingintermediate
  • Ethical AI Developmentintermediate
  • Risk Assessmentintermediate

Required Qualifications

  • 3+ years of research experience in industry or academia (experience)
  • Proficiency in Python or similar programming languages (experience)
  • Strong background in AI safety topics including RLHF, adversarial training, robustness, and LLM evaluations (experience)
  • Experience working with large-scale AI systems and multimodal datasets (experience)
  • Proven track record in interdisciplinary research combining technical and policy domains (experience)
  • Passion for socio-technical topics and societal impacts of AI (experience)
  • Alignment with OpenAI's mission to build safe, universally beneficial AGI (experience)
  • Ability to tackle large-scale, difficult, and nebulous problems (experience)
  • Experience in experimental design and running AI safety experiments (experience)
  • Familiarity with translating policy problems into technical metrics (experience)
  • Strong communication skills for collaborating across technical and policy teams (experience)
  • Thrives in well-resourced, fast-paced research environments (experience)

Responsibilities

  • Set research strategies to study societal impacts of AI models in action-relevant ways
  • Tie societal impact research back into model design and development processes
  • Build creative methods for enabling public input into AI model values
  • Design and run experiments to incorporate diverse public perspectives into AI alignment
  • Increase rigor of external assurances by developing robust evaluation frameworks
  • Transform external findings into actionable, measurable AI safety evaluations
  • Facilitate timely de-risking of flagship model deployments
  • Collaborate with policy and safety teams to address full-stack AI challenges
  • Develop technical methods for measuring anthropomorphism impacts on AI systems
  • Create interventions to increase societal readiness for advanced AI systems
  • Work on external validation layers for AI safety assurances
  • Conduct interdisciplinary research bridging AI technical capabilities and societal needs
  • Analyze and mitigate risks associated with deploying powerful AI models
  • Contribute to OpenAI's culture of trust, transparency, and AI safety

Benefits

  • general: Competitive salary with equity compensation package
  • general: Comprehensive health, dental, and vision insurance
  • general: Relocation assistance to San Francisco HQ
  • general: Generous parental leave policy
  • general: Unlimited PTO with encouragement to disconnect
  • general: Mental health and wellness benefits
  • general: Professional development stipend
  • general: Onsite fitness facilities and gym membership
  • general: Catered meals and fully stocked kitchens
  • general: Commuter benefits and parking subsidies
  • general: 401(k) matching program
  • general: Employee assistance programs
  • general: Learning and development opportunities
  • general: Impactful work on frontier AI safety research
  • general: Collaborative, mission-driven culture

Target Your Resume for "Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!" , OpenAI

Get personalized recommendations to optimize your resume specifically for Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!. Takes only 15 seconds!

AI-powered keyword optimization
Skills matching & gap analysis
Experience alignment suggestions

Check Your ATS Score for "Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!" , OpenAI

Find out how well your resume matches this job's requirements. Get comprehensive analysis including ATS compatibility, keyword matching, skill gaps, and personalized recommendations.

ATS compatibility check
Keyword optimization analysis
Skill matching & gap identification
Format & readability score

Tags & Categories

OpenAI trustworthy AI researcherAI safety research jobs San FranciscoAGI safety careers OpenAIRLHF researcher positionsAI policy research OpenAITrustworthy AI jobs CaliforniaOpenAI safety systems teamAI societal impact researcherExternal AI assurances researchPublic input AI model valuesAI safety Python researcherInterdisciplinary AI safety jobsLLM evaluation researcher OpenAIAdversarial training AI safetyModel robustness researcherSan Francisco AI safety careersOpenAI researcher trustworthy AIAI deployment risk researcherSocio-technical AI researchAdvanced AI safety positionsOpenAI AGI safety researcherMultimodal AI safety researchSafety Systems

Answer 10 quick questions to check your fit for Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now! @ OpenAI.

Quiz Challenge
10 Questions
~2 Minutes
Instant Score

Related Books and Jobs

No related jobs found at the moment.

OpenAI logo

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!

OpenAI

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!

full-timePosted: Feb 10, 2026

Job Description

Researcher, Trustworthy AI Careers at OpenAI - San Francisco, CA

Join OpenAI's Safety Systems team as a Researcher, Trustworthy AI and shape the future of safe AGI deployment. This senior-level role in San Francisco offers unparalleled impact on AI safety research.

Role Overview

The Researcher, Trustworthy AI position at OpenAI represents a rare opportunity to work at the intersection of cutting-edge AI research and societal impact. Based in our San Francisco headquarters, you'll join the Safety Systems team, which leads OpenAI's commitment to deploying safe AGI that benefits all of humanity.

The Trustworthy AI team focuses on action-relevant research that bridges nebulous policy challenges with technically tractable solutions. Your work will directly influence model design, public input mechanisms, external assurances, and deployment risk mitigation for OpenAI's most advanced systems.

This role demands exceptional research scientists/engineers who can translate complex societal concerns into measurable technical interventions. With relocation assistance provided, this is your chance to contribute to AGI safety at the world's leading AI organization.

Key Responsibilities

  1. Develop research strategies studying societal impacts of OpenAI models in action-relevant ways that inform model design
  2. Create innovative methods enabling public input into AI model values and alignment processes
  3. Design and execute experiments incorporating diverse societal perspectives into technical AI development
  4. Enhance external AI safety assurances by converting findings into robust, reproducible evaluations
  5. Facilitate rapid de-risking processes for flagship model deployments
  6. Bridge policy problems with technical solutions through interdisciplinary research approaches
  7. Analyze anthropomorphism effects and develop mitigation strategies for AI systems
  8. Build technical frameworks measuring societal readiness for increasingly capable AI
  9. Collaborate across Safety Systems, policy, and engineering teams on full-stack AI challenges
  10. Contribute to external validation layers strengthening independent AI safety checks
  11. Leverage large-scale multimodal datasets for trustworthy AI research
  12. Drive rigor in AI safety evaluations including RLHF, adversarial training, and robustness testing
  13. Publish and present findings advancing the field of trustworthy AI research

Qualifications

Technical Expertise: 3+ years research experience with proficiency in Python/ML frameworks. Deep knowledge of AI safety techniques (RLHF, adversarial training, LLM evaluations, robustness).

Research Excellence: Proven ability working with large-scale AI systems and multimodal datasets. Experience tackling ambiguous, high-impact problems in well-resourced environments.

Interdisciplinary Skills: Background in socio-technical research combining AI capabilities with policy/societal considerations. Passion for translating nebulous challenges into measurable outcomes.

Mission Alignment: Genuine excitement for OpenAI's charter building safe, universally beneficial AGI. Thrives in fast-paced, collaborative research culture focused on real-world AI deployment safety.

Salary & Benefits

Competitive Compensation: Total compensation range of $275,000 - $425,000 USD annually, including base salary, equity, and performance bonuses. Exact compensation determined by experience and qualifications.

Comprehensive Benefits Package:

  • Health, dental, vision insurance with premium coverage
  • Relocation assistance package for SF move
  • Unlimited PTO policy
  • Generous parental leave (16+ weeks)
  • Professional development budget
  • Onsite fitness center and wellness programs
  • Catered meals daily
  • 401(k) with company match
  • Commuter benefits

OpenAI offers one of the most competitive compensation packages in AI research, reflecting our commitment to attracting top global talent.

Why Join OpenAI?

OpenAI leads the world in developing safe AGI that benefits humanity. Our Safety Systems team operates at the forefront of AI safety research, directly influencing model deployments serving millions worldwide.

Unparalleled Impact: Your research directly shapes flagship model releases and industry safety standards.

World-Class Resources: Access to frontier AI models, massive compute clusters, and top researchers across AI domains.

Mission-Driven Culture: Work alongside brilliant minds united by our charter for safe AGI benefiting all humanity.

Career Growth: Rapid professional development through high-impact projects and expert mentorship.

Recent team achievements include pioneering RLHF techniques powering ChatGPT safety, developing novel external assurance frameworks, and establishing public input mechanisms for model alignment.

How to Apply

Ready to advance trustworthy AI at OpenAI? Submit your application including:

  • Resume/CV highlighting relevant research experience
  • Research statement (2-3 pages) on AI safety/societal impact work
  • 2-3 references from recent collaborators
  • Links to relevant publications/projects

Application Process:

  1. Online application review (1-2 weeks)
  2. Technical research interview
  3. Safety research deep-dive
  4. Team fit conversations
  5. Offer & relocation support

OpenAI is committed to diversity and equal opportunity. We encourage applications from all backgrounds passionate about safe AI development.

Apply now to join the mission shaping humanity's AI future!

Keywords: OpenAI AI safety jobs, trustworthy AI researcher, AGI safety careers San Francisco, RLHF research positions, AI policy research OpenAI, AI safety systems engineer.

Locations

  • San Francisco, California, United States

Salary

Estimated Salary Rangehigh confidence

288,750 - 467,500 USD / yearly

Source: ai estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • AI Safety Researchintermediate
  • Reinforcement Learning from Human Feedback (RLHF)intermediate
  • Adversarial Trainingintermediate
  • Model Robustnessintermediate
  • LLM Evaluationsintermediate
  • Python Programmingintermediate
  • Machine Learningintermediate
  • Interdisciplinary Researchintermediate
  • Socio-Technical Analysisintermediate
  • Experimental Designintermediate
  • Multimodal Datasetsintermediate
  • Policy Analysisintermediate
  • Public Input Mechanismsintermediate
  • External AI Assurancesintermediate
  • AGI Safety Strategiesintermediate
  • Large-Scale AI Systemsintermediate
  • Data Analysisintermediate
  • Statistical Modelingintermediate
  • Ethical AI Developmentintermediate
  • Risk Assessmentintermediate

Required Qualifications

  • 3+ years of research experience in industry or academia (experience)
  • Proficiency in Python or similar programming languages (experience)
  • Strong background in AI safety topics including RLHF, adversarial training, robustness, and LLM evaluations (experience)
  • Experience working with large-scale AI systems and multimodal datasets (experience)
  • Proven track record in interdisciplinary research combining technical and policy domains (experience)
  • Passion for socio-technical topics and societal impacts of AI (experience)
  • Alignment with OpenAI's mission to build safe, universally beneficial AGI (experience)
  • Ability to tackle large-scale, difficult, and nebulous problems (experience)
  • Experience in experimental design and running AI safety experiments (experience)
  • Familiarity with translating policy problems into technical metrics (experience)
  • Strong communication skills for collaborating across technical and policy teams (experience)
  • Thrives in well-resourced, fast-paced research environments (experience)

Responsibilities

  • Set research strategies to study societal impacts of AI models in action-relevant ways
  • Tie societal impact research back into model design and development processes
  • Build creative methods for enabling public input into AI model values
  • Design and run experiments to incorporate diverse public perspectives into AI alignment
  • Increase rigor of external assurances by developing robust evaluation frameworks
  • Transform external findings into actionable, measurable AI safety evaluations
  • Facilitate timely de-risking of flagship model deployments
  • Collaborate with policy and safety teams to address full-stack AI challenges
  • Develop technical methods for measuring anthropomorphism impacts on AI systems
  • Create interventions to increase societal readiness for advanced AI systems
  • Work on external validation layers for AI safety assurances
  • Conduct interdisciplinary research bridging AI technical capabilities and societal needs
  • Analyze and mitigate risks associated with deploying powerful AI models
  • Contribute to OpenAI's culture of trust, transparency, and AI safety

Benefits

  • general: Competitive salary with equity compensation package
  • general: Comprehensive health, dental, and vision insurance
  • general: Relocation assistance to San Francisco HQ
  • general: Generous parental leave policy
  • general: Unlimited PTO with encouragement to disconnect
  • general: Mental health and wellness benefits
  • general: Professional development stipend
  • general: Onsite fitness facilities and gym membership
  • general: Catered meals and fully stocked kitchens
  • general: Commuter benefits and parking subsidies
  • general: 401(k) matching program
  • general: Employee assistance programs
  • general: Learning and development opportunities
  • general: Impactful work on frontier AI safety research
  • general: Collaborative, mission-driven culture

Target Your Resume for "Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!" , OpenAI

Get personalized recommendations to optimize your resume specifically for Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!. Takes only 15 seconds!

AI-powered keyword optimization
Skills matching & gap analysis
Experience alignment suggestions

Check Your ATS Score for "Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now!" , OpenAI

Find out how well your resume matches this job's requirements. Get comprehensive analysis including ATS compatibility, keyword matching, skill gaps, and personalized recommendations.

ATS compatibility check
Keyword optimization analysis
Skill matching & gap identification
Format & readability score

Tags & Categories

OpenAI trustworthy AI researcherAI safety research jobs San FranciscoAGI safety careers OpenAIRLHF researcher positionsAI policy research OpenAITrustworthy AI jobs CaliforniaOpenAI safety systems teamAI societal impact researcherExternal AI assurances researchPublic input AI model valuesAI safety Python researcherInterdisciplinary AI safety jobsLLM evaluation researcher OpenAIAdversarial training AI safetyModel robustness researcherSan Francisco AI safety careersOpenAI researcher trustworthy AIAI deployment risk researcherSocio-technical AI researchAdvanced AI safety positionsOpenAI AGI safety researcherMultimodal AI safety researchSafety Systems

Answer 10 quick questions to check your fit for Researcher, Trustworthy AI Careers at OpenAI - San Francisco, California | Apply Now! @ OpenAI.

Quiz Challenge
10 Questions
~2 Minutes
Instant Score

Related Books and Jobs

No related jobs found at the moment.