AI Research Scientist - Machine Learning for Behavioral AI

Meta

full-time

Posted: October 10, 2025

Number of Vacancies: 1

Job Description

Meta’s Reality Labs Research (RL-R) brings together a world-class team of researchers, developers, and engineers to create the future of Mixed Reality (MR), Augmented Reality (AR), and Wearable Artificial Intelligence (AI). Within RL-R, the ACE team solves complex challenges in behavioral inference from sparse information. We leverage multimodal, egocentric data and cutting-edge machine learning to deliver robust, efficient models that serve everyone. Our research provides core building blocks to unlock intuitive and helpful Wearable AI, empowering everyone to harness the superpowers of this emerging technology in their daily lives.

We are looking for an experienced AI Research Scientist to join us in this initiative. In this role, you will work closely with Research Scientists and Engineers from across RL-R to develop novel, state-of-the-art AI algorithms that infer human behavior patterns, with an emphasis on those that inform attention, cognition, or emotion. Examples include longitudinal gaze behaviors, gestures, and vocal cues. You will develop end-to-end wearable AI experiential validation platforms using cutting-edge generative AI and language models to validate the impact of these signals. You will also work with systems engineers to optimize these models for efficiency and latency on compute-constrained platforms.

You will learn constantly, dive into new areas with unfamiliar technologies, and embrace the ambiguity of AR, VR, and AI problem solving. Together, we will build cutting-edge prototypes, technologies, and toolsets that can define a paradigm shift in how we interact with our surroundings.

Locations

  • Redmond, WA, US

Salary

Salary not disclosed

Skills Required

  • Python (intermediate)
  • PyTorch (intermediate)
  • ML computer vision (intermediate)
  • Multimodal sensing platforms (intermediate)
  • Data collection (intermediate)
  • Multimodal signal processing and analysis (intermediate)
  • C++ (intermediate)
  • Biosignals, behavioral signals, or egocentric data from wearable sensors (advanced)
  • Large Language Models (intermediate)
  • Non-ML computer vision (OpenCV) (intermediate)
  • Multimodal Deep Learning (advanced)
  • Augmented Reality/Virtual Reality (intermediate)
  • 3D engines (Unreal or Unity) (intermediate)
  • Software development in a research environment (intermediate)

Required Qualifications

  • Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
  • PhD degree in Computer Science, Human-Computer Interaction, or a related field
  • 2+ years of experience
  • Proven track record of solving complex challenges with multimodal ML, as demonstrated through grants, fellowships, patents, or publications at conferences such as CVPR, NeurIPS, CHI, or equivalent
  • 3+ years of experience with Python
  • Experience with a common machine learning framework such as PyTorch
  • Experience with ML computer vision
  • Experience with multimodal sensing platforms, data collection, multimodal signal processing and analysis, and converting raw sensor streams into robust models that solve complex tasks
  • PhD degree in Computer Science or a related field plus 3+ years of experience

Responsibilities

  • Using data from wearable devices, employ state-of-the-art AI algorithms to infer human behavior patterns that inform attention, cognition, or emotion. Examples include longitudinal gaze behaviors, gestures, and vocal cues.
  • Develop data collection strategies, benchmarks, and metrics to validate and improve efficiency, scalability, and stability of these models.
  • Collaborate with researchers and engineers across diverse disciplines through all stages of project development to identify gaps, build solutions, and transfer technology.
  • Contribute to research that can eventually be applied to Meta products and services and run on billions of media items every day.
  • Create tools, infrastructure, and documentation to accelerate research.
  • Perform code reviews that improve software engineering quality.
  • Learn constantly, dive into new areas with unfamiliar technologies, and embrace the ambiguity of Augmented Reality/Virtual Reality problem solving.

Benefits

  • Bonus included in compensation
  • Equity included in compensation
  • Benefits offered by Meta (details available on Meta's benefits page)
