OpenAI
Join OpenAI's Safety Systems team as a Researcher, Robustness & Safety Training in San Francisco, California. This senior-level role is at the forefront of AI safety research, focusing on RLHF, adversarial training, and model robustness to ensure safe AGI deployment. Apply now for this high-impact position driving OpenAI's mission.
The Researcher, Robustness & Safety Training position at OpenAI in San Francisco is a pivotal role within the Model Safety Research team. This team is dedicated to advancing AI safety capabilities, ensuring that OpenAI's most powerful models can be deployed safely to benefit society. As AI systems grow more capable, new challenges emerge in enforcing safety policies, robustness against adversaries, privacy protection, and trustworthiness in critical domains.
In this role, you'll conduct cutting-edge research on topics like Reinforcement Learning from Human Feedback (RLHF), adversarial training, and robustness. You'll implement these innovations directly into OpenAI's core training pipelines and launch safety features in products used by millions. This isn't just research—it's about shaping the future of safe AGI, collaborating across teams to meet the highest safety standards, and making a tangible impact on humanity.
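For readers unfamiliar with the techniques named above, here is a minimal sketch of the adversarial-training idea: perturb each input in the direction that most increases the model's loss, then take the gradient step on those perturbed inputs. This toy NumPy logistic-regression example is purely illustrative and has no connection to OpenAI's actual training pipelines; all names and constants in it are invented for the sketch.

```python
import numpy as np

# Toy adversarial training (FGSM-style): train a logistic classifier on
# inputs perturbed within a small budget in the loss-increasing direction.
# Illustrative only; real pipelines use deep models and far richer attacks.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linearly separable 2-D data: label is 1 when x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr, eps = 0.5, 0.1  # learning rate and perturbation budget

for _ in range(200):
    # Gradient of the logistic loss with respect to each input
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)           # dL/dx, one row per example
    X_adv = X + eps * np.sign(grad_x)     # worst-case step within the budget
    # Standard gradient step, taken on the adversarial inputs
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * float(np.mean(p_adv - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5)))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The key design choice is that the attack (the `eps`-bounded perturbation) runs inside the training loop, so the model learns weights that remain accurate even on worst-case nearby inputs.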
OpenAI's commitment to safety is unwavering. The Safety Systems team leads efforts to deploy models responsibly, learning from real-world use while mitigating risks. If you have 4+ years in AI safety, a PhD in ML or related fields, and a passion for aligned AGI, this San Francisco-based role offers unparalleled opportunities to influence global AI safety standards.
Key focus areas include balancing safety with helpfulness, defending against malicious actors, securing user privacy, and building trust in high-stakes applications like healthcare or autonomous systems. Your work will directly contribute to OpenAI's charter for universally beneficial AI.
As a Researcher in Robustness & Safety Training at OpenAI, your responsibilities will span research, implementation, and strategy.
These tasks position you at the intersection of research and deployment, ensuring OpenAI's AI benefits society safely.
To excel as a Researcher, Robustness & Safety Training in OpenAI's San Francisco office, you should bring deep research experience in AI safety, robustness, and machine learning.
Ideal candidates thrive in collaborative settings and are driven by the challenge of safe AI deployment.
OpenAI offers competitive compensation for this senior Researcher role in San Francisco, estimated at $320,000 to $500,000 per year, including base salary, equity, and bonuses. Total compensation reflects experience and impact.
The compensation package includes comprehensive benefits and is designed to support top talent focused on AI safety innovation.
OpenAI is the pioneer in safe AGI development, with products like ChatGPT transforming the world. Joining the Safety Systems team in San Francisco means working with world-class researchers on problems that matter: making powerful AI safe for everyone.
Our culture emphasizes transparency, collaboration, and impact. You'll shape safety standards for future AI, publish influential work, and deploy changes affecting billions. San Francisco's vibrant tech ecosystem complements OpenAI's innovative environment.
With a mission to benefit humanity, OpenAI invests heavily in safety. This role offers intellectual challenge, massive scale, and the chance to define AGI safety. Past team members have advanced RLHF globally—your contributions could do the same.
OpenAI provides resources unmatched elsewhere: unlimited compute, expert peers, and direct mission alignment. If you're passionate about robust AI, this is your opportunity to lead.
Ready to advance AI safety at OpenAI? Submit your resume, cover letter highlighting AI safety experience, and links to publications or GitHub. Include why you're excited about OpenAI's mission.
Applications are reviewed on a rolling basis. Top candidates advance to research interviews, safety deep-dives, and team fits. We prioritize diverse perspectives aligned with our charter.
Apply now via OpenAI's careers page for the Researcher, Robustness & Safety Training role in San Francisco. Shape the future of safe AI today!
© 2026 Pointers. All rights reserved.