Data Solution Architect (Data Engineering)

Amgen

Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

About Amgen

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

Role Description

We are seeking a Data Solutions Architect to design, implement, and optimize scalable and high-performance data solutions

What you will do

  • Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics.
  • Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains.
  • Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms.
  • Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms.
  • Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities.
  • Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms.
  • Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions.
  • Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency.
  • Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities.
  • Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals.
  • Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability.

What we expect of you

  • Doctorate degree with 6-8+ years of experience in Computer Science, IT, or a related field
  • OR Master’s degree with 8-10+ years of experience in Computer Science, IT, or a related field
  • OR Bachelor’s degree with 10-12+ years of experience in Computer Science, IT, or a related field
  • AWS Certified Data Engineer certification preferred
  • Databricks certification preferred

Must-Have Skills

  • Experience in data architecture, enterprise data management, and cloud-based analytics solutions.
  • Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks.
  • Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization.
  • Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions.
  • Deep understanding of data governance, security, metadata management, and access control frameworks.
  • Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
  • Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
  • Strong problem-solving, strategic thinking, and technical leadership skills.
  • Experience with SQL/NoSQL databases and vector databases for large language models.
  • Experience with data modeling and performance tuning for both OLAP and OLTP databases.
  • Experience with Apache Spark.
  • Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
  • Experience with Data Mesh architectures and federated data governance models.
  • Certification in cloud data platforms or enterprise architecture frameworks.
  • Knowledge of AI/ML pipeline integration within enterprise data architectures.
  • Familiarity with BI & analytics platforms for enabling self-service analytics and enterprise reporting.
  • Excellent analytical and troubleshooting skills.
  • Strong verbal and written communication skills.
  • Ability to work effectively with global, virtual teams.
  • High degree of initiative and self-motivation.
  • Ability to manage multiple priorities successfully.
  • Team-oriented, with a focus on achieving team goals.
  • Ability to learn quickly and be organized and detail-oriented.
  • Strong presentation and public speaking skills.

Good-to-Have Skills

  • Deep expertise in the biotech and pharma industries

What you can expect of us

  • Competitive benefits
  • Collaborative culture
  • Support for professional and personal growth and well-being

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
