
Data Platform RunOps Engineer

Amgen


Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

Join Amgen’s Mission of Serving Patients

What you will do

  • Support and maintain cloud and big data solutions used by functional teams such as Manufacturing, Commercial, and Research and Development
  • Work closely with the Enterprise Data Lake delivery and platform teams to ensure that applications are aligned with the overall architectural and development guidelines
  • Research and evaluate technical solutions, including Databricks, AWS services, Kubernetes/EKS, NoSQL databases, and data science packages, platforms, and tools, with a focus on enterprise deployment capabilities such as security, scalability, reliability, maintainability, and cost management
  • Manage Enterprise Data Lake/Fabric platform incidents related to Databricks, AWS services, Kubernetes/EKS, NoSQL databases, platforms, and tools
  • Assist in building and managing relationships with internal and external business stakeholders
  • Develop a basic understanding of core business problems and identify opportunities to apply advanced analytics
  • Work closely with the Enterprise Data Lake ecosystem leads to identify and evaluate emerging providers of data management and processing components that could be incorporated into the data platform
  • Work with platform stakeholders to ensure effective cost observability and control mechanisms are in place for all aspects of data platform management
  • Work in Agile environments and participate in Agile ceremonies
  • Work during US business hours to support cross-functional teams, including Manufacturing, Commercial, and Research & Development, in their use of the Enterprise Data Lake

What we expect of you

  • Master's degree / Bachelor's degree and 5 to 9 years of relevant experience OR
  • Experience with Databricks capabilities, including but not limited to setting up AI capabilities, cluster setup, execution, and tuning
  • Experience with AWS services, including but not limited to MSK, IAM, EC2, EKS, and S3
  • Experience with data lake, data fabric, and data mesh concepts
  • Experience with platform performance optimization
  • Experience working with relational databases
  • Experience building ETL or ELT pipelines; hands-on experience with SQL/NoSQL
  • Knowledge of distributed systems and microservices
  • Programming skills in one or more languages: SQL, Python, Java
  • Experience with software engineering best practices, including but not limited to version control (Git, GitLab), CI/CD (GitLab or similar), automated unit testing, and DevOps
  • Exposure to Jira or Jira Align

Must-Have Skills

  • Experience in cloud technologies; AWS preferred
  • Cloud certifications: AWS, Databricks, Microsoft
  • Familiarity with the use of AI for development productivity, such as GitHub Copilot, Databricks Assistant, Amazon Q Developer, or equivalent
  • Knowledge of Agile and DevOps practices
  • Skills in disaster recovery planning
  • Familiarity with load testing tools (JMeter, Gatling)
  • Basic understanding of AI/ML for monitoring
  • Data visualization skills (Tableau, Power BI)
  • Strong communication and leadership skills
  • Understanding of compliance and auditing requirements
  • Knowledge of low-code/no-code platforms such as Prophecy
  • Familiarity with shell scripting
  • Working knowledge of ServiceNow
  • Excellent analytical and problem-solving skills
  • Excellent written and verbal communication skills (English), translating technology content into business language at various levels
  • Ability to work effectively with global, virtual teams
  • High degree of initiative and self-motivation
  • Ability to manage multiple priorities successfully
  • Team-oriented, with a focus on achieving team goals
  • Strong time and task management skills to estimate and meet project timelines, bringing consistency and quality assurance across projects

Locations

  • Hyderabad, India

Salary

Salary not disclosed


Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
