OpenAI
Join OpenAI's Alignment team as a Researcher, Alignment in San Francisco, California, and become a pivotal force in ensuring AI systems are safe, trustworthy, and aligned with human values. This senior-level role is perfect for PhD-level experts in AI safety, machine learning, and cognitive science who thrive in fast-paced, collaborative environments. At OpenAI, you'll tackle the most pressing challenges in AI alignment, developing methodologies that allow AI to robustly follow human intent—even in adversarial or high-stakes scenarios.
The Alignment team focuses on two core pillars: (1) scaling alignment techniques alongside growing AI capabilities, and (2) centering humans through intuitive interfaces for intent expression and oversight. As capabilities advance, your work will ensure our models remain reliable in complex real-world deployments. This hybrid position (3 days/week in-office) offers relocation assistance and positions you at the forefront of AI research that shapes humanity's future.
Ideal candidates are team players with strong engineering skills in PyTorch, experience in scalable oversight, and a passion for trustworthy AI. If you're ready to design experiments measuring subjective alignment risks, build robustness tools, and pioneer new human-AI interaction paradigms, this Researcher, Alignment job at OpenAI is your chance to make history.
As a Researcher, Alignment at OpenAI, your contributions will directly impact AI safety. The role's day-to-day responsibilities demand versatility, from hands-on coding to theoretical research, in a dynamic San Francisco-based team.
To excel as a Researcher, Alignment at OpenAI, you should bring deep qualifications in AI safety, machine learning, and engineering. Senior-level experience is preferred; OpenAI seeks innovators ready to redefine AI safety.
OpenAI offers competitive compensation for Researcher, Alignment roles, estimated at $250,000–$450,000 USD per year (including base, bonus, and equity), reflecting San Francisco market rates for top AI talent. Total compensation varies by experience and is complemented by a full benefits package.
OpenAI is the vanguard of AGI development, committed to benefiting humanity. Our Alignment team leads in solving AI's toughest safety puzzles. Work with world-class researchers in San Francisco, influencing models deployed globally. Enjoy hybrid flexibility, top-tier perks, and the mission to make AI trustworthy. This Researcher, Alignment role offers unparalleled impact—join us to align the future.
Candidates praise OpenAI's innovative culture, resources, and real-world impact. With the company's rapid growth, your work reaches globally deployed models almost immediately.
Ready to apply for Researcher, Alignment at OpenAI? Submit your resume, PhD details, and alignment research portfolio via our careers page. Highlight PyTorch projects, scaling experiments, and safety contributions. Interviews include technical deep-dives and team-fit conversations. OpenAI is an equal opportunity employer; diversity drives innovation. Apply now for this San Francisco hybrid role!
© 2026 Pointers. All rights reserved.
