Senior Data Engineer

Amgen

Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

What you will do

  • Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric (a minimal sketch follows this list).
  • Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
  • Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
  • Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
  • Ensure data security, compliance, and role-based access control (RBAC) across data environments.
  • Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
  • Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
  • Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
  • Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
  • Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of Enterprise Data Fabric architectures.
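
The pipeline work above centers on Spark. As a minimal illustrative sketch only (the paths and column names below are hypothetical placeholders, not Amgen systems), a batch ETL step of the kind described might look like this in PySpark:

```python
# Minimal PySpark batch ETL sketch. All paths and column names are
# hypothetical placeholders, not Amgen systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read semi-structured JSON events from a landing zone.
raw = spark.read.json("s3://example-landing/events/")

# Transform: deduplicate, enforce types, and derive a partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet so downstream queries can prune by date.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated/events/"))
```

Partitioning by a date column is one of the tuning levers the posting names: it lets the engine skip irrelevant files at read time instead of scanning the full dataset.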

What we expect of you

  • 9 to 12 years of experience in Computer Science, IT, or a related field
  • AWS Certified Data Engineer preferred
  • Databricks certification preferred
  • Scaled Agile (SAFe) certification preferred

Must-Have Skills

  • Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
  • Proficiency in workflow orchestration and performance tuning for big data processing
  • Strong understanding of AWS services
  • Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
  • Ability to quickly learn, adapt, and apply new technologies
  • Strong problem-solving and analytical skills
  • Excellent communication and teamwork skills
  • Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices
  • Experience writing APIs to make data available to consumers (see the sketch after this list)
  • Experience with SQL/NoSQL databases and vector databases for large language models
  • Experience with data modeling and performance tuning for both OLAP and OLTP databases
  • Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
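
Two of the items above, serving data to consumers through APIs and working with SQL stores, can be illustrated together. This is a sketch under stated assumptions: FastAPI and SQLite are arbitrary choices for illustration, and the table and file names are hypothetical; the posting prescribes no specific API stack.

```python
# Minimal read-only data API sketch (FastAPI + SQLite are assumptions
# for illustration; table and file names are hypothetical).
import sqlite3

from fastapi import FastAPI, HTTPException

app = FastAPI()
DB_PATH = "example.db"  # hypothetical local database file


@app.get("/events/{event_id}")
def get_event(event_id: str) -> dict:
    """Return one event row as JSON, or 404 if it does not exist."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row  # rows become dict-like
    try:
        # Parameterized query: the ? placeholder prevents SQL injection,
        # which matters when an API exposes governed data.
        row = conn.execute(
            "SELECT event_id, event_ts, payload FROM events WHERE event_id = ?",
            (event_id,),
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise HTTPException(status_code=404, detail="event not found")
    return dict(row)
```

Run locally with, for example, `uvicorn example_api:app`, where `example_api` is whatever module holds this code.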

Good-to-Have Skills

  • Deep expertise in the Biotech & Pharma industries

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
