
Data Engineer

Amgen


Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do.

What you will do

  • Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data
  • Build and manage graph database solutions (e.g., Neo4j, Stardog, Amazon Neptune) to support knowledge graphs, relationship modeling, and inference use cases
  • Leverage SPARQL, Cypher, or Gremlin to query and analyze data within graph ecosystems
  • Implement and maintain data ontologies to support semantic interoperability and consistent data classification
  • Collaborate with architects to integrate ontology models with metadata repositories and business glossaries
  • Support data governance and metadata management through integration of lineage, quality rules, and ontology mapping
  • Contribute to data cataloging and knowledge graph implementations using RDF, OWL, or similar technologies
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
  • Adhere to best practices for coding, testing, and designing reusable code/components
  • Apply data engineering best practices including CI/CD, version control, and code modularity
  • Participate in sprint planning meetings and provide estimations on technical implementation
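
As an illustration of the ingest–transform–load shape these pipeline responsibilities describe, here is a minimal, framework-free sketch in plain Python. In practice this logic would be expressed as Spark DataFrame operations writing to Delta Lake tables on Databricks; the record fields and cleaning rules below are hypothetical.

```python
# Minimal ingest -> transform -> load sketch of a data pipeline.
# Hypothetical records and rules; a real pipeline would use Spark
# DataFrames and Delta Lake tables on Databricks.

raw_records = [
    {"patient_id": "P1", "measurement": " 98.6 ", "unit": "F"},
    {"patient_id": "P2", "measurement": "37.0", "unit": "C"},
    {"patient_id": "P3", "measurement": "", "unit": "C"},  # unusable row
]

def transform(record):
    """Normalize one record to Celsius; return None for unusable rows."""
    value = record["measurement"].strip()
    if not value:
        return None  # drop rows with missing measurements
    temp = float(value)
    if record["unit"] == "F":
        temp = (temp - 32) * 5 / 9
    return {"patient_id": record["patient_id"], "temp_c": round(temp, 1)}

# "Load" step: keep only successfully transformed rows.
clean = [r for r in (transform(rec) for rec in raw_records) if r is not None]
print(clean)  # two rows survive, both normalized to Celsius
```

The same shape scales up: `raw_records` becomes a streaming or batch source, `transform` becomes a set of DataFrame transformations, and the final list becomes a managed table.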

What we expect of you

  • Master’s or Bachelor’s degree and 5 to 9 years of experience in Computer Science, IT, or a related field
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field

Must-Have Skills

  • Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including Python for workflow orchestration and performance tuning of big data processing
  • Proficiency in data analysis tools (e.g., SQL)
  • Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
  • Strong programming skills in Python, PySpark, and SQL
  • Solid experience designing and querying graph databases (e.g., AllegroGraph, MarkLogic)
  • Proficiency with ontology languages and tools (e.g., TopBraid, RDF, OWL, Protégé, SHACL)
  • Familiarity with SPARQL and/or Cypher for querying semantic and property graphs
  • Experience working with cloud data services (Azure, AWS, or GCP)
  • Strong understanding of data modeling, entity relationships, and semantic interoperability
  • Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
  • Knowledge of Python/R, Databricks, and cloud data platforms
  • Strong understanding of data governance frameworks, tools, and best practices
  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
  • Excellent critical-thinking and problem-solving skills
  • Strong communication and collaboration skills
  • Demonstrated awareness of how to function in a team setting
  • Demonstrated presentation skills
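
The SQL proficiency listed above is the extract/transform/analyze kind. A self-contained sketch using Python's built-in `sqlite3` module shows the pattern; the table and data are hypothetical.

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("US", 200.0), ("US", 50.0), ("APAC", 30.0)],
)

# Aggregate and filter: total sales per region, keeping only regions
# over 100, largest first -- the transform/analyze half of the skill.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING total > 100
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('US', 250.0), ('EU', 200.0)]
```

The same GROUP BY / HAVING / ORDER BY pattern carries over directly to Spark SQL or any relational data store.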

What you can expect of us

  • Competitive benefits
  • Collaborative culture
  • Competitive and comprehensive Total Rewards Plans that are aligned with local industry standards

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

20,000 - 35,000 USD / yearly

Source: xAI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
