Data Engineer – R&D Multi-Omics (Open)

Amgen


Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do.

What you will do

  • Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
  • Contribute to data pipeline projects from inception to deployment, managing scope, timelines, and risks
  • Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Optimize large datasets for query performance
  • Collaborate with global cross-functional teams including research scientists to understand data requirements and design solutions that meet business needs
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve data-related challenges
  • Adhere to best practices for coding, testing, and designing reusable code/components
  • Explore new tools and technologies that help improve ETL platform performance
  • Participate in sprint planning meetings and provide estimations on technical implementation
  • Maintain documentation of processes, systems, and solutions

What we expect of you

  • Master’s or Bachelor’s degree and 5 to 9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field
  • 5+ years of experience designing and supporting biopharma scientific research data analytics (software platforms)
  • Databricks Certified Data Engineer Professional preferred

Must-Have Skills

  • Proficiency in scientific software development (e.g., Python, R, R Shiny, Plotly Dash)
  • Some knowledge of CI/CD processes and cloud computing technologies (e.g., AWS, Google Cloud)
  • Proficiency with SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
  • Hands-on experience with big data technologies and platforms such as Databricks (or equivalent) and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning for big data processing
  • Excellent problem-solving skills and the ability to work with large, complex datasets
  • Experience with Git, CI/CD, and the software development lifecycle
  • Experience with SQL and relational databases (e.g., PostgreSQL, MySQL, Oracle) or Databricks
  • Experience with cloud computing platforms and infrastructure (AWS preferred)
  • Experience using and adopting Agile frameworks
  • Basic understanding of data modeling, data warehousing, and data integration concepts
  • Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
  • Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
  • Experience writing and maintaining technical documentation in Confluence
  • Excellent critical-thinking and problem-solving skills
  • Strong communication and collaboration skills
  • High degree of initiative and self-motivation
  • Demonstrated presentation skills
  • Ability to manage multiple priorities successfully
  • Team-oriented with a focus on achieving team goals

What you can expect of us

  • Competitive benefits
  • Collaborative culture
  • Competitive and comprehensive Total Rewards Plans that are aligned with local industry standards

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

25,000 – 45,000 USD per year

Source: xAI estimate

* This range is estimated from market data and may vary with experience and qualifications.

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
