Data Engineer

Amgen

Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

Join Amgen’s Mission of Serving Patients

What you will do

  • Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets (a sketch follows this list)
  • Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
  • Design and implement solutions that enable unified data access, governance, and interoperability across hybrid cloud environments
  • Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
  • Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
  • Apply expertise in data quality, data validation, and verification frameworks
  • Explore and implement new tools and technologies to make data processing more efficient
  • Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
  • Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
  • Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
  • Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
  • Collaborate and communicate effectively with product and cross-functional teams to understand business requirements and translate them into technical solutions
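
To ground the pipeline work described above, here is a minimal sketch of a Databricks-style ETL job in PySpark, assuming Delta Lake is available; the source path, column names, and target table are hypothetical, not taken from the posting.

```python
# Minimal ETL sketch: extract raw JSON, apply light transforms and a
# basic quality check, and load the result into a Delta table.
# All names (paths, columns, tables) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw JSON landed by an upstream ingestion job.
raw = spark.read.json("/mnt/raw/events/")

# Transform: normalize types and drop records missing a primary key.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("event_id").isNotNull())
)

# Quality check: a rigorous pipeline would fail fast or quarantine bad
# rows; this simply reports how many records were dropped.
dropped = raw.count() - curated.count()
print(f"Dropped {dropped} records failing validation")

# Load: write to a Delta table for downstream consumers.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.events")
```

In a Databricks notebook a `spark` session already exists; `getOrCreate()` simply reuses it.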

What we expect of you

  • Master’s degree and 1 to 3 years of experience in Computer Science, IT, or a related field, OR
  • Bachelor’s degree and 3 to 5 years of experience in Computer Science, IT, or a related field, OR
  • Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field
  • Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
  • Proficiency in workflow orchestration and performance tuning for big data processing
  • Strong understanding of AWS services
  • Ability to quickly learn, adapt and apply new technologies
  • Strong problem-solving and analytical skills
  • Excellent communication and teamwork skills
  • Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
  • AWS Certified Data Engineer preferred
  • Databricks certification preferred
  • Scaled Agile SAFe certification preferred
  • Data engineering experience in the biotechnology or pharma industry
  • Experience writing APIs to make data available to consumers (see the API sketch after this list)
  • Experience with SQL/NoSQL databases and vector databases for large language models
  • Experience with data modeling and performance tuning for both OLAP and OLTP databases
  • Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps (see the testing sketch after this list)
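
As a minimal sketch of the kind of data-serving API mentioned above, assume the curated data has been published to PostgreSQL; the DSN, table, and columns are hypothetical, and a production service would add authentication, connection pooling, and pagination.

```python
# Minimal read-only data API using Flask and psycopg2.
# All connection details and names are hypothetical.
from flask import Flask, abort, jsonify
import psycopg2

app = Flask(__name__)

def get_conn():
    # In practice the DSN would come from a secrets manager, not code.
    return psycopg2.connect("dbname=analytics user=reader host=db.internal")

@app.route("/events/<event_id>")
def get_event(event_id):
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT event_id, amount FROM curated_events WHERE event_id = %s",
            (event_id,),  # parameterized to avoid SQL injection
        )
        row = cur.fetchone()
    if row is None:
        abort(404)
    return jsonify({"event_id": row[0], "amount": row[1]})

if __name__ == "__main__":
    app.run(port=8000)
```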
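
And here is a minimal sketch of an automated unit test for pipeline logic, using pytest with a local SparkSession; the transform under test (`drop_null_ids`) is a hypothetical stand-in for real pipeline code. In a CI/CD setup (e.g., Jenkins) such tests would run on every commit.

```python
# Unit-testing a PySpark transformation with pytest.
# drop_null_ids is a hypothetical pipeline step.
import pytest
from pyspark.sql import SparkSession, functions as F

def drop_null_ids(df):
    """Pipeline step under test: discard rows without an event_id."""
    return df.filter(F.col("event_id").isNotNull())

@pytest.fixture(scope="module")
def spark():
    # Small local session; no cluster needed for unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_drop_null_ids(spark):
    df = spark.createDataFrame([("e1", 10.0), (None, 5.0)], ["event_id", "amount"])
    result = drop_null_ids(df)
    assert result.count() == 1
    assert result.first()["event_id"] == "e1"
```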

Must-Have Skills

  • PySpark
  • Scala
  • SQL
  • Databricks
  • Apache Spark
  • AWS
  • Python
  • Scaled Agile methodologies
  • Workflow orchestration
  • Performance tuning on big data processing
  • AWS services
  • JIRA
  • Confluence
  • Agile DevOps tools
  • Data quality, data validation and verification frameworks
  • ETL/ELT data pipelines
  • Data modeling
  • Data governance
  • PostgreSQL
  • MySQL
  • SQL Server
  • MongoDB
  • APIs
  • Git
  • Subversion
  • CI/CD (Jenkins, Maven)
  • Automated unit testing
  • DevOps
  • SQL/NoSQL databases
  • Vector databases for large language models
  • OLAP and OLTP databases

What you can expect of us

  • Competitive benefits
  • Collaborative culture
  • Support for professional and personal growth and well-being
  • Competitive and comprehensive Total Rewards Plans that are aligned with local industry standards

Locations

  • Hyderabad, India (Remote)

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

15,000 to 30,000 USD per year

Source: xAI estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

All of the must-have skills listed above, at intermediate proficiency.

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
