
Data Engineer

Amgen


full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

Join Amgen’s Mission of Serving Patients

What you will do

  • Design, develop, and maintain data solutions for data generation, collection, and processing
  • Assist in design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate and communicate effectively with product teams
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
  • Adhere to standard processes for coding, testing, and designing reusable code/components
  • Explore new tools and technologies that will help to improve ETL platform performance
  • Participate in sprint planning meetings and provide estimations on technical implementation
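
The pipeline responsibilities above follow the classic extract-transform-load pattern. As a minimal, hypothetical sketch of that shape (all table names, columns, and data are invented for illustration, and an in-memory SQLite database stands in for a real target system):

```python
# Minimal ETL sketch: extract rows from a CSV-like source, apply a
# data-quality rule in the transform step, and load clean rows into SQLite.
import csv
import io
import sqlite3

RAW = """patient_id,region,measurement
P001,EU,12
P002,US,
P003,EU,7
"""

def extract(text):
    """Extract: parse the raw source into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce a simple quality rule (measurement present),
    cast the value to int, and drop rows that fail the check."""
    clean = []
    for row in rows:
        if row["measurement"]:
            clean.append((row["patient_id"], row["region"], int(row["measurement"])))
    return clean

def load(rows, conn):
    """Load: write the validated rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS measurements "
        "(patient_id TEXT, region TEXT, value INTEGER)"
    )
    conn.executemany("INSERT INTO measurements VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
count = conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
print(count)  # 2 -- the row with a missing measurement is dropped
```

In a production setting the same extract-validate-load structure would typically run as Spark or Databricks jobs orchestrated by a workflow tool, but the shape of the work is the same.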

What we expect of you

  • Master's or bachelor's degree in Computer Science, IT, or a related field, with 5 to 9 years of experience
  • Hands-on experience with big data technologies and platforms such as Databricks, Apache Spark (PySpark, SparkSQL), and Snowflake, including workflow orchestration and performance tuning for big data processing
  • Proficiency in data analysis tools (e.g., SQL)
  • Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
  • Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine-learning model development
  • Strong understanding of data modeling, data warehousing, and data integration concepts
  • Proven ability to optimize query performance on big data platforms
  • Experience with Real-World Data (RWD) / healthcare data
  • Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
  • Knowledge of Python/R, Databricks, SageMaker, cloud data platforms
  • Strong understanding of data governance frameworks, tools, and best practices
  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
  • Databricks certification preferred
  • AWS Data Engineer/Architect certification
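
As a small, self-contained illustration of the SQL analysis and query-optimization skills listed above (an in-memory SQLite database is used purely as a stand-in for a warehouse; the table, columns, and figures are invented):

```python
# Illustrative only: using SQL to transform and analyze a relational dataset.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 100.0), ("EU", 250.0), ("US", 300.0)],
)

# Indexing the grouping/filter column is a basic query-optimization step.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

# Aggregate analysis: total and average amount per region.
rows = conn.execute(
    "SELECT region, SUM(amount), AVG(amount) "
    "FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 350.0, 175.0), ('US', 300.0, 300.0)]
```

On a real big data platform the same GROUP BY analysis would run over Spark SQL or Snowflake, where optimization means partitioning, clustering, and query-plan tuning rather than a single index.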

Must-Have Skills

  • Big data technologies and platforms (Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, performance tuning)
  • SQL for data analysis
  • ETL tools (Apache Spark, Python packages for data processing and machine learning)
  • Data modeling, data warehousing, data integration
  • Query performance optimization on big data platforms
  • Cloud platforms (AWS preferred)
  • Software engineering best practices (version control, infrastructure-as-code, CI/CD, automated testing)
  • Python/R
  • Data governance frameworks
  • Data protection regulations (GDPR, CCPA)
  • Critical-thinking and problem-solving
  • Communication and collaboration
  • Team functioning
  • Presentation skills

What you can expect of us

  • Competitive benefits
  • Collaborative culture
  • Support for professional and personal growth and well-being
  • Competitive and comprehensive Total Rewards Plans aligned with local industry standards

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

30,000 - 50,000 USD / yearly

Source: xAI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.



Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology

