Netflix is one of the world's leading entertainment services, with over 300 million paid memberships in more than 190 countries enjoying TV series, films, and games across a wide variety of genres and languages. Members can play, pause, and resume watching as much as they want, anytime, anywhere, and can change their plans at any time.

Machine learning and artificial intelligence power innovation in all areas of the business, from helping members choose the right title through personalization, to better understanding our audience and our content slate, to optimizing payment processing and other revenue-focused initiatives. Building highly scalable and differentiated ML infrastructure is key to accelerating this innovation.

The Opportunity

We are looking for a driven Software Engineer (L4/L5) to join our Machine Learning Platform (MLP) org. MLP's charter is to maximize the business impact of all ML use cases at Netflix through highly reliable and flexible ML tooling and infrastructure that support personalization, studio algorithms, virtual production, growth intelligence, and content understanding. In this role, you will design and operate the systems that measure LLM quality, safety, and performance at scale, closing the loop from model development to production through rigorous, reproducible evaluation.

In this role you will get to:

- Build the evaluation platform that runs large-scale LLM eval suites across modalities and tasks (e.g., content understanding, personalization prompts, assistant use cases), integrating with batch/online inference (including vLLM-based backends) and experiment tracking to deliver reliable, reproducible metrics.
- Operationalize benchmark coverage alongside Netflix-specific task suites and user-journey-grounded prompts; automate result collection, statistical analysis, and drift detection.
- Develop high-quality synthetic data and labeling pipelines to expand coverage, reduce bias, and continuously refresh eval corpora; codify data provenance and sampling policies.
- Partner deeply with model developers and platform teams to co-design APIs for submitting eval jobs, adding new tasks/metrics, and defining SLO-like quality thresholds that unblock launches while preventing regressions.
- Contribute beyond evaluation across the GenAI/FM stack when needed:
  - Research workflows (orchestration, queueing/caching/failure isolation, artifact lineage, experiment management) that keep scientists productive at scale.
  - Inference foundations (vLLM/TGI-class serving, routing, safety filters, latency/throughput tuning, cost controls) and batch evaluation/inference at production scale.
  - Observability for the whole loop: dataset/version provenance, model/build metadata, metric lineage, and run reproducibility.

Minimum Job Qualifications

- Experience in ML engineering on production systems dealing with training or inference of deep learning models
- Proven track record of building and operating large-scale infrastructure for machine learning use cases
- Experience with cloud computing providers, preferably AWS
- Comfortable with ambiguity and working across multiple layers of the tech stack to execute on both 0-to-1 and 1-to-100 projects
- Adopt and promote best practices in operations, including observability, logging, reporting, and on-call processes to ensure engineering excellence
- Excellent written and verbal communication skills
- Comfortable working in a team with peers and partners distributed across (US) geographies and time zones

Preferred Qualifications

- End-to-end foundation-model lifecycle exposure: pre-train checks, post-train regression, and pre-launch gates, with an understanding of where and how evaluation fits.
- Built or contributed to an evaluation platform at scale (batch/online evals, multi-modal tasks, queueing, caching, failure isolation) with strong SLIs/SLOs.
- Experience building evaluation data pipelines (synthetic generation, labeling, sampling) with provenance and governance.
- Platform mindset: craft usable APIs/UX so modeling teams can submit tasks, compare runs, and gate launches with SLO-like thresholds.
- Bonus signals across the broader stack: experience with reinforcement learning, agent modeling, AI alignment, distributed training, vector search/feature stores, routing/safety middleware for serving, and cost/perf tuning.

What do we offer?

Our compensation structure consists solely of an annual salary; we do not have bonuses. You choose each year how much of your compensation you want in salary versus stock options. To determine your personal top of market compensation, we rely on market indicators and consider your specific job family, background, skills, and experience. The range for this role is $100,000 - $720,000.

Netflix provides comprehensive benefits including Health Plans, Mental Health support, a 401(k) Retirement Plan with employer match, Stock Option Program, Disability Programs, Health Savings and Flexible Spending Accounts, Family-forming benefits, and Life and Serious Injury Benefits. We also offer paid leave of absence programs. Full-time hourly employees accrue 35 days of paid time off annually to be used for vacation, holidays, and sick paid time off. Full-time salaried employees are immediately entitled to flexible time off. See more detail about our Benefits here.

Netflix is a unique culture and environment. Learn more here. Inclusion is a Netflix value, and we strive to host a meaningful interview experience for all candidates. If you want an accommodation/adjustment for a disability or any other reason during the hiring process, please send a request to your recruiting partner.

We are an equal-opportunity employer and celebrate diversity, recognizing that diversity builds stronger teams. We approach diversity and inclusion seriously and thoughtfully.
We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.

This job posting will remain open for no less than 7 days and will be removed when the position is filled.
Locations
USA (Remote)
Salary
100,000 - 720,000 USD / yearly
Skills Required
ML engineering on production systems (intermediate)
Training or inference of deep learning models (intermediate)
Building and operating large-scale infrastructure for machine learning use cases (intermediate)
Evaluation data pipelines (synthetic generation, labeling, sampling) (intermediate)
Crafting usable APIs/UX (intermediate)
Reinforcement learning (intermediate)
Agent modeling (intermediate)
AI alignment (intermediate)
Distributed training (intermediate)
Vector search/feature stores (intermediate)
Routing/safety middleware for serving (intermediate)
Cost/perf tuning (intermediate)
Required Qualifications
Experience in ML engineering on production systems dealing with training or inference of deep learning models
Proven track record of building and operating large-scale infrastructure for machine learning use cases
Experience with cloud computing providers, preferably AWS
Comfortable with ambiguity and working across multiple layers of the tech stack to execute on both 0-to-1 and 1-to-100 projects
Adopt and promote best practices in operations, including observability, logging, reporting, and on-call processes to ensure engineering excellence
Excellent written and verbal communication skills
Comfortable working in a team with peers and partners distributed across (US) geographies and time zones
Preferred Qualifications
End-to-end foundation-model lifecycle exposure: pre-train checks, post-train regression, and pre-launch gates, with an understanding of where and how evaluation fits
Built or contributed to an evaluation platform at scale (batch/online evals, multi-modal tasks, queueing, caching, failure isolation) with strong SLIs/SLOs
Experience building evaluation data pipelines (synthetic generation, labeling, sampling) with provenance and governance
Platform mindset: craft usable APIs/UX so modeling teams can submit tasks, compare runs, and gate launches with SLO-like thresholds
Bonus signals across the broader stack: experience with reinforcement learning, agent modeling, AI alignment, distributed training, vector search/feature stores, routing/safety middleware for serving, and cost/perf tuning
Responsibilities
Build the evaluation platform that runs large-scale LLM eval suites across modalities and tasks (e.g., content understanding, personalization prompts, assistant use cases), integrating with batch/online inference (including vLLM-based backends) and experiment tracking to deliver reliable, reproducible metrics.
Operationalize benchmark coverage alongside Netflix-specific task suites and user-journey-grounded prompts; automate result collection, statistical analysis, and drift detection.
Develop high-quality synthetic data and labeling pipelines to expand coverage, reduce bias, and continuously refresh eval corpora; codify data provenance and sampling policies.
Partner deeply with model developers and platform teams to co-design APIs for submitting eval jobs, adding new tasks/metrics, and defining SLO-like quality thresholds that unblock launches while preventing regressions.
Contribute beyond evaluation across the GenAI/FM stack when needed:
Research workflows (orchestration, queueing/caching/failure isolation, artifact lineage, experiment management) that keep scientists productive at scale.
Inference foundations (vLLM/TGI-class serving, routing, safety filters, latency/throughput tuning, cost controls) and batch evaluation/inference at production scale.
Observability for the whole loop: dataset/version provenance, model/build metadata, metric lineage, and run reproducibility.
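The "statistical analysis and drift detection" responsibility can be illustrated, purely as a sketch, with a two-proportion z-test that flags when a candidate run's pass rate on an eval suite has drifted from a baseline run. The function name and threshold are hypothetical, not Netflix tooling.

```python
import math

def pass_rate_drift(baseline_passes: int, baseline_total: int,
                    candidate_passes: int, candidate_total: int,
                    z_threshold: float = 2.58) -> bool:
    """Flag drift when the candidate pass rate differs from the baseline
    by more than z_threshold standard errors (two-proportion z-test with
    a pooled-variance estimate)."""
    p1 = baseline_passes / baseline_total
    p2 = candidate_passes / candidate_total
    pooled = (baseline_passes + candidate_passes) / (baseline_total + candidate_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / baseline_total + 1 / candidate_total))
    if se == 0:
        return False
    return abs((p2 - p1) / se) > z_threshold

# A 5-point pass-rate drop on a 2,000-prompt suite is flagged;
# identical pass rates are not.
print(pass_rate_drift(1700, 2000, 1600, 2000))  # True
print(pass_rate_drift(1700, 2000, 1700, 2000))  # False
```

In practice a platform would layer sample-size checks and multiple-comparison corrections on top of a per-metric test like this; the point is that drift becomes an automated, statistical decision rather than a manual dashboard read.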
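The synthetic-data bullet's "provenance and sampling policies" can be sketched as follows, with all field and generator names assumed for illustration: each sampled example is stamped with its generator ID, seed, and source index, so a refreshed eval corpus remains auditable and reproducible.

```python
import random

def sample_with_provenance(source_prompts, k, seed, generator_id="synthetic-gen-v1"):
    """Deterministically sample k prompts and attach provenance metadata
    (generator, seed, source index) to each selected example."""
    rng = random.Random(seed)  # seeded RNG => repeatable sampling policy
    indices = rng.sample(range(len(source_prompts)), k)
    return [
        {
            "text": source_prompts[i],
            "provenance": {"generator": generator_id, "seed": seed, "source_index": i},
        }
        for i in indices
    ]

batch = sample_with_provenance(["p0", "p1", "p2", "p3"], k=2, seed=7)
rebatch = sample_with_provenance(["p0", "p1", "p2", "p3"], k=2, seed=7)
print(batch == rebatch)  # True: same seed, same sample
```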
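The "SLO-like quality thresholds that unblock launches" idea reduces to a launch gate: compare a run's metrics against per-metric minimums and report exactly which ones block. This is a minimal sketch with hypothetical metric names, not the actual gating API.

```python
from dataclasses import dataclass

@dataclass
class GateResult:
    passed: bool
    violations: list  # (metric, observed, required) tuples

def evaluate_gate(metrics: dict, thresholds: dict) -> GateResult:
    """Pass only if every gated metric meets or exceeds its threshold.
    Metrics without a threshold are informational and never block."""
    violations = [
        (name, metrics.get(name), required)
        for name, required in thresholds.items()
        if metrics.get(name) is None or metrics[name] < required
    ]
    return GateResult(passed=not violations, violations=violations)

result = evaluate_gate(
    metrics={"helpfulness": 0.91, "safety_refusal_accuracy": 0.97},
    thresholds={"helpfulness": 0.90, "safety_refusal_accuracy": 0.99},
)
print(result.passed)      # False: the safety threshold is not met
print(result.violations)  # [('safety_refusal_accuracy', 0.97, 0.99)]
```

A missing metric counts as a violation by design: a launch should not pass because an eval silently failed to run.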
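The observability bullet (dataset/version provenance, model metadata, run reproducibility) can be sketched as a run manifest whose ID is a content hash of everything that defines the run, so two runs with identical inputs are recognizably the same. Field names here are illustrative assumptions.

```python
import hashlib
import json

def run_manifest(model_name: str, model_revision: str,
                 dataset_name: str, dataset_sha256: str,
                 eval_config: dict) -> dict:
    """Pin the inputs needed to reproduce an eval run; run_id is a
    deterministic hash of the canonical JSON form of those inputs."""
    record = {
        "model": {"name": model_name, "revision": model_revision},
        "dataset": {"name": dataset_name, "sha256": dataset_sha256},
        "config": eval_config,
    }
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    record["run_id"] = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return record

a = run_manifest("assistant-v3", "rev1", "journey-prompts", "f00d1234", {"temperature": 0.0})
b = run_manifest("assistant-v3", "rev1", "journey-prompts", "f00d1234", {"temperature": 0.0})
print(a["run_id"] == b["run_id"])  # True: identical inputs yield identical run IDs
```

Content-addressed run IDs also give metric lineage for free: any metric stored under a run ID is traceable back to the exact model revision, dataset hash, and config that produced it.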
Benefits
Health Plans
Mental Health support
401(k) Retirement Plan with employer match
Stock Option Program
Disability Programs
Health Savings and Flexible Spending Accounts
Family-forming benefits
Life and Serious Injury Benefits
Paid leave of absence programs
Full-time hourly employees accrue 35 days of paid time off annually, to be used for vacation, holidays, and sick paid time off
Full-time salaried employees are immediately entitled to flexible time off