
AGI Model Architect / Research Scientist in AGI Model Architecture

Tencent

full-time

Posted: November 8, 2025

Number of Vacancies: 1

Job Description

AGI Model Architect / Research Scientist in AGI Model Architecture

📋 Job Overview

Tencent is seeking an AGI Model Architect / Research Scientist to build core architectures for Artificial General Intelligence systems that match or surpass human-level capabilities. The role involves developing large-scale models with multimodal perception, autonomous learning, and reasoning abilities, focusing on generalization to real-world applications. The goal is to design native multimodal systems capable of understanding and generating across vision, speech, and text, while interacting deeply with the environment to advance from AGI to ASI.

📍 Location: Bellevue, Washington, United States

🏢 Business Unit: TEG

📄 Full Description

Business Unit
TEG

What the Role Entails
Job Overview:
We are committed to building the core architecture for Artificial General Intelligence (AGI) systems that match or surpass human-level capabilities. As a key contributor to our core R&D team, you will help develop large-scale models with multimodal perception, autonomous learning, and reasoning abilities, driving their generalization to real-world applications. Our goal is to design a native multimodal system—capable of understanding and generating across vision, speech, and text—while interacting deeply with the environment to catalyze the transition from AGI to ASI (Artificial Super Intelligence).
Responsibilities:
Design unified large model architectures with integrated capabilities in multimodal perception, reasoning, memory, and generation (across vision/audio/text).
Build systems that support continual learning, hierarchical memory, autonomous exploration, and self-evolution.
Advance the development of agent-based systems with autonomous task planning, cross-modal interaction, tool usage, and self-improvement capabilities.
Contribute deeply to the design of core components such as general representation learning, synchronized audio-visual modeling, world models, and sparse modeling.
Key Research Areas:
Multimodal Unified Architecture: Native co-frequency modeling and cross-modal reasoning across vision, speech, and language.
Continual Learning & Memory Mechanisms: Architectures that separate long-term memory from the core model to enable memory recall and task transfer.
World Modeling & Causal Reasoning: Enabling models to predict environmental states, plan behaviors, and update cognitive structures dynamically.
Sparse & Modular Architectures: Scalable, efficient, and interpretable ultra-large sparse model design (an illustrative routing sketch follows this list).
Self-Evolution & Active Data Generation: Mechanisms for self-growth through reinforcement learning, self-supervision, and environment interaction.
Cross-Modal Understanding & Generation: Strengthening joint generation and decision-making capabilities in real-world physical environments.
Intelligent Agent Capability Transfer: Systematic enhancement of task generalization and tool-composition skills.
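
To make the "Sparse & Modular Architectures" theme above concrete, here is a purely illustrative PyTorch sketch (not Tencent's design; all names and sizes such as TopKMoE, d_model, and n_experts are assumptions) of a top-k mixture-of-experts layer: a learned router scores each token, only the k highest-scoring experts are executed, and their outputs are combined with renormalized gate weights.

```python
# Illustrative sketch only: a top-k mixture-of-experts (MoE) layer showing
# sparse computation with dynamic, per-token routing. Sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # per-token routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.router(x)                              # (B, S, n_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)  # keep k experts per token
        gates = F.softmax(topk_scores, dim=-1)               # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[..., slot]                        # (B, S) expert id per token
            gate = gates[..., slot].unsqueeze(-1)            # (B, S, 1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += gate[mask] * expert(x[mask])
        return out

# Example: y = TopKMoE(d_model=512, d_ff=2048)(torch.randn(2, 16, 512))
```

Only the selected experts run for each token, which is what makes very large parameter counts affordable at inference time; production systems add load-balancing losses and fused kernels that are omitted here.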
 

Who We Look For
Requirements:
Expertise in Transformer-based architectures and their applications in language and multimodal domains.
Hands-on experience in building or optimizing billion-scale models; familiarity with training paradigms such as SFT (Supervised Fine-tuning), RLHF (Reinforcement Learning from Human Feedback), and self-supervised learning (a minimal, illustrative SFT sketch follows this list).
Preferred qualifications include deep understanding or practical experience in one or more of the following areas:
Multimodal models (e.g., vision-language models, audio-video models)
Reinforcement learning and autonomous agent systems
Complex reasoning and planning (e.g., search + LLMs, world modeling)
Sparse modeling and dynamic routing mechanisms
Strong engineering and system thinking capabilities, with the ability to translate cutting-edge research into production-level AGI model systems.
Publications in top-tier conferences/journals such as NeurIPS, ICLR, CVPR, ACL, etc., are highly desirable.
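
As orientation for the training paradigms named above (and emphatically not the team's actual stack), the sketch below shows a single supervised fine-tuning (SFT) step using the public Hugging Face transformers API: prompt tokens are masked out of the labels with -100 so the cross-entropy loss is computed only on the response. The model name "gpt2" and the toy prompt/response pair are stand-in assumptions.

```python
# Minimal SFT step (illustrative): next-token loss on the response only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a billion-scale model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = "Summarize: multimodal models fuse vision, speech, and text.\nSummary:"
response = " They learn a shared representation across modalities."

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens in the loss

model.train()
loss = model(input_ids=full_ids, labels=labels).loss  # model shifts labels internally
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

RLHF and self-supervised pre-training reuse the same machinery but swap the supervised loss for a reward-model-guided objective or a pretext objective, respectively.
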
Location State(s)
US-Washington-Bellevue
The expected base pay range for this position in the location(s) listed above is $134,900.00 to $253,400.00 per year. Actual pay may vary depending on job-related knowledge, skills, and experience.
Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis.
Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company’s 401(k) plan. Employees are also eligible for up to 15 to 25 days of vacation per year (depending on tenure), up to 13 days of holidays throughout the calendar year, and up to 10 days of paid sick leave per year.
Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level. Benefits may also be pro-rated for those who start working during the calendar year.

Equal Employment Opportunity at Tencent
As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.
Work Location: US-Washington-Bellevue

🎯 Key Responsibilities

  • Design unified large model architectures with integrated capabilities in multimodal perception, reasoning, memory, and generation (across vision/audio/text).
  • Build systems that support continual learning, hierarchical memory, autonomous exploration, and self-evolution.
  • Advance the development of agent-based systems with autonomous task planning, cross-modal interaction, tool usage, and self-improvement capabilities.
  • Contribute deeply to the design of core components such as general representation learning, synchronized audio-visual modeling, world models, and sparse modeling.

✅ Required Qualifications

  • Expertise in Transformer-based architectures and their applications in language and multimodal domains.
  • Hands-on experience in building or optimizing billion-scale models; familiar with training paradigms such as SFT (Supervised Fine-tuning), RLHF (Reinforcement Learning from Human Feedback), and self-supervised learning.

⭐ Preferred Qualifications

  • Deep understanding or practical experience in multimodal models (e.g., vision-language models, audio-video models).
  • Deep understanding or practical experience in reinforcement learning and autonomous agent systems.
  • Deep understanding or practical experience in complex reasoning and planning (e.g., search + LLMs, world modeling).
  • Deep understanding or practical experience in sparse modeling and dynamic routing mechanisms.
  • Publications in top-tier conferences/journals such as NeurIPS, ICLR, CVPR, ACL, etc.

🛠️ Required Skills

  • Expertise in Transformer-based architectures.
  • Experience with billion-scale models and training paradigms (SFT, RLHF, self-supervised learning).
  • Strong engineering and system thinking capabilities.
  • Ability to translate cutting-edge research into production-level AGI model systems.

🎁 Benefits

  • Medical, dental, vision, life and disability benefits.
  • Participation in the Company’s 401(k) plan.
  • Up to 15 to 25 days of vacation per year (depending on tenure).
  • Up to 13 days of holidays throughout the calendar year.
  • Up to 10 days of paid sick leave per year.
  • Eligibility for sign-on payment, relocation package, and restricted stock units (evaluated case-by-case).

Locations

  • Bellevue, Washington, United States

Salary

134,900 to 253,400 USD per year

Tags & Categories

Tencent, Bellevue, United States, TEG
