
Hunyuan Multimodal Algorithm Researcher (Omni-Modal)

Tencent

Software and Technology Jobs

Full-time · Posted: Nov 19, 2025

Job Description

Hunyuan Multimodal Algorithm Researcher (Omni-Modal)

📋 Job Overview

Tencent is seeking a Multimodal Algorithm Researcher to advance Omni-modal large models in Palo Alto, California. The role involves researching and developing foundational models, optimizing performance, and exploring innovative architectures for multimodal understanding and generation. This position offers a competitive salary range of $182,500 to $343,200 annually, along with comprehensive benefits.

📍 Location: Palo Alto, California, United States

🏢 Business Unit: TEG

📄 Full Description

Business Unit
TEG

What the Role Entails
1. Conduct research and development of Omni multimodal large models, including the design and construction of training data, foundational model algorithm design, optimization related to pre-training/SFT/RL, model capability evaluation, and exploration of downstream application scenarios (a minimal illustrative SFT sketch follows this list).
2. Scientifically analyze challenges in R&D, identify bottlenecks in model performance, and devise solutions based on first principles to accelerate model development and iteration, ensuring competitiveness and leading-edge performance.
3. Explore diverse paradigms for achieving Omni-modal understanding and generation capabilities, research next-generation model architectures, and push the boundaries of multimodal models.
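
The pre-training/SFT/RL optimization named in item 1 spans a broad training stack; as a concrete anchor, the sketch below shows what a single supervised fine-tuning (SFT) step on paired image-text data can look like in PyTorch. The toy model, dimensions, and random data are hypothetical stand-ins chosen only to illustrate the next-token-prediction objective, not a description of the Hunyuan pipeline.

```python
# A minimal, illustrative SFT step for a toy image+text model.
# Everything here (ToyOmniModel, sizes, data) is a hypothetical
# stand-in; it does not reflect Tencent's actual training stack.
import torch
import torch.nn as nn

VOCAB, DIM, IMG_FEAT = 1000, 64, 512

class ToyOmniModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(IMG_FEAT, DIM)            # map image features into token space
        self.tok_emb = nn.Embedding(VOCAB, DIM)
        self.backbone = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for a transformer decoder
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, img_feats, tokens):
        img = self.img_proj(img_feats).unsqueeze(1)   # (B, 1, DIM) image "token"
        txt = self.tok_emb(tokens)                    # (B, T, DIM)
        h, _ = self.backbone(torch.cat([img, txt], dim=1))
        return self.lm_head(h[:, 1:])                 # logits aligned with the text positions

model = ToyOmniModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One SFT step on random stand-in data: predict each next token of
# the response given the image and the preceding tokens.
img_feats = torch.randn(4, IMG_FEAT)
tokens = torch.randint(0, VOCAB, (4, 16))
logits = model(img_feats, tokens[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
opt.zero_grad()
```

In practice the same objective would run over a transformer decoder with packed multimodal sequences and distributed optimization rather than the GRU stand-in used here.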

Who We Look For
1. Bachelor’s degree (full-time preferred) or higher in Computer Science, Artificial Intelligence, Mathematics, or related fields; graduate degrees are prioritized.
2. Hands-on experience in large-scale multimodal data processing and high-quality data generation is highly preferred.
3. Solid foundation in deep learning algorithms and practical experience in large model development; familiarity with Diffusion Models and Autoregressive Models is advantageous (see the illustrative sketch after this list). Publication in top-tier conferences or experience in cross-modal (e.g., audio-visual) research is preferred.
4. Proficiency in underlying implementation details of deep learning networks and operators, model tuning for training/inference, CPU/GPU acceleration, and distributed training/inference optimization; practical experience is a plus.
5. Participation in ACM or NOI competitions is highly valued.
6. Strong learning agility, communication skills, teamwork, and curiosity.
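
Item 3 names Diffusion Models and Autoregressive Models; the sketch below contrasts the two sampling loops at their simplest. Both "models" are hypothetical stand-ins (untrained linear layers) included only to illustrate the paradigms, not any actual Hunyuan architecture.

```python
# Minimal, self-contained contrast of the two generation paradigms.
# Both "models" are random stand-ins used only to show the shape of
# each sampling loop.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64

# Autoregressive: emit one token at a time, conditioned on the prefix.
emb = nn.Embedding(VOCAB, DIM)
ar_head = nn.Linear(DIM, VOCAB)
tokens = torch.tensor([[1]])                              # start token
for _ in range(8):
    prefix = emb(tokens).mean(dim=1)                      # crude prefix summary (stand-in for attention)
    next_tok = torch.distributions.Categorical(logits=ar_head(prefix)).sample()
    tokens = torch.cat([tokens, next_tok.unsqueeze(1)], dim=1)

# Diffusion: start from pure noise and iteratively denoise a continuous sample.
eps_model = nn.Linear(DIM, DIM)                           # stand-in noise predictor
x = torch.randn(1, DIM)                                   # x_T ~ N(0, I)
for t in range(10, 0, -1):
    eps_hat = eps_model(x)                                # predicted noise at step t
    x = x - 0.1 * eps_hat                                 # simplified denoising update

print(tokens.shape, x.shape)                              # torch.Size([1, 9]) torch.Size([1, 64])
```
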
Location State(s)
US-California-Palo Alto
The expected base pay range for this position in the location(s) listed above is $182,500.00 to $343,200.00 per year. Actual pay may vary depending on job-related knowledge, skills, and experience.
Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis.
Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company’s 401(k) plan. The Employee is also eligible for up to 15 to 25 days of vacation per year (depending on the employee’s tenure), up to 13 days of holidays throughout the calendar year, and up to 10 days of paid sick leave per year.
Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level. Benefits may also be pro-rated for those who start working during the calendar year.

Equal Employment Opportunity at Tencent
As an equal opportunity employer, we firmly believe that diverse voices fuel our innovation and allow us to better serve our users and the community. We foster an environment where every employee of Tencent feels supported and inspired to achieve individual and common goals.
Work Location: US-California-Palo Alto

🎯 Key Responsibilities

  • Conduct research and development of Omni multimodal large models, including the design and construction of training data, foundational model algorithm design, optimization related to pre-training/SFT/RL, model capability evaluation, and exploration of downstream application scenarios
  • Scientifically analyze challenges in R&D, identify bottlenecks in model performance, and devise solutions based on first principles to accelerate model development and iteration, ensuring competitiveness and leading-edge performance
  • Explore diverse paradigms for achieving Omni-modal understanding and generation capabilities, research next-generation model architectures, and push the boundaries of multimodal models

✅ Required Qualifications

  • Bachelor’s degree (full-time preferred) or higher in Computer Science, Artificial Intelligence, Mathematics, or related fields; graduate degrees are prioritized

⭐ Preferred Qualifications

  • Hands-on experience in large-scale multimodal data processing and high-quality data generation
  • Publication in top-tier conferences or experience in cross-modal (e.g., audio-visual) research
  • Practical experience with underlying implementation details of deep learning networks and operators, model tuning for training/inference, CPU/GPU acceleration, and distributed training/inference optimization
  • Participation in ACM or NOI competitions

🛠️ Required Skills

  • Solid foundation in deep learning algorithms and practical experience in large model development
  • Familiarity with Diffusion Models and Autoregressive Models
  • Strong learning agility
  • Communication skills
  • Teamwork
  • Curiosity

🎁 Benefits

  • Sign-on payment (case-by-case)
  • Relocation package (case-by-case)
  • Restricted stock units (case-by-case)
  • Medical, dental, vision, life and disability benefits
  • Participation in the Company’s 401(k) plan
  • Up to 15 to 25 days of vacation per year (depending on tenure)
  • Up to 13 days of holidays throughout the calendar year
  • Up to 10 days of paid sick leave per year

Locations

  • Palo Alto, California, United States

Salary

$182,500 to $343,200 USD per year


