Senior Data & DevOps Engineer

Bristol-Myers Squibb

Full-time

Posted: December 2, 2025

Number of Vacancies: 1

Job Description

The Senior Data & DevOps Engineer designs, builds, and maintains ETL pipelines and data products, drives the evolution of those products, and applies the data architecture best suited to the organization's needs. The role delivers high-quality data products and analytics-ready data solutions with an end-to-end ownership mindset, innovating and driving initiatives through completion. Day-to-day work includes developing and maintaining data models to support reporting and analysis, optimizing data storage and retrieval for performance and scalability, ensuring data quality and integrity through validation and testing, and implementing security protocols to protect sensitive data.

The engineer partners closely with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt data and technology strategy, and serves as the subject matter expert on data and analytics solutions. The role suits someone who stays current with emerging trends in data platforms and product-based implementation, is comfortable in a fast-paced environment with minimal oversight, mentors team members effectively to unlock their full potential, and has prior experience working in an Agile/product-based environment. It also calls for experience establishing agile, product-oriented teams that collaborate effectively with teams in the US and at other global BMS sites; initiating challenging opportunities that build strong capabilities for self and team; improving processes, structures, and knowledge within the team; and leading analysis of current states, delivering strong recommendations that account for the environment's complexity, and executing complex solutions through to completion.
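
To ground the ETL responsibilities in something concrete, the following is a minimal sketch of the kind of AWS Glue PySpark job this role would build and operate. It follows the standard Glue job boilerplate; the catalog database, table name, and S3 path are hypothetical placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup: resolve the job name and initialize contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw data from the Glue Data Catalog (database and table are placeholders)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Example transform: drop rows with nulls before publishing
df = dyf.toDF().dropna()

# Write the analytics-ready output to S3 as Parquet (bucket is a placeholder)
df.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")

job.commit()
```

In practice a job like this would be parameterized and scheduled through Glue triggers or an external orchestrator rather than run ad hoc.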

Key Responsibilities

  • Design, build, and maintain ETL pipelines, data products, and the evolution of data products
  • Utilize the most suitable data architecture required for the organization's data needs
  • Deliver high-quality data products and analytic-ready data solutions
  • Work with an end-to-end ownership mindset; innovate and drive initiatives through completion
  • Develop and maintain data models to support reporting and analysis needs
  • Optimize data storage and retrieval to ensure efficient performance and scalability
  • Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure data infrastructure supports their requirements
  • Ensure data quality and integrity through data validation and testing (see the validation sketch after this list)
  • Implement and maintain security protocols to protect sensitive data
  • Stay up to date with emerging trends and technologies in data engineering and analytics
  • Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and Data Community lead to shape and adopt data and technology strategy
  • Serve as the Subject Matter Expert on Data & Analytics Solutions
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional/non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability
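
As an illustration of the data-validation responsibility above, here is a minimal sketch of row-level quality checks in pandas. The column names (event_id, event_ts, amount) and the specific checks are hypothetical.

```python
import pandas as pd

def validate_events(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality failures for a hypothetical events table."""
    failures = []
    if df["event_id"].isna().any():
        failures.append("null event_id values")
    if df["event_id"].duplicated().any():
        failures.append("duplicate event_id values")
    if (df["amount"] < 0).any():
        failures.append("negative amount values")
    # errors="coerce" turns unparseable timestamps into NaT, which we then detect
    ts = pd.to_datetime(df["event_ts"], errors="coerce")
    if ts.isna().any():
        failures.append("unparseable event_ts values")
    return failures

# Usage: fail the pipeline step if any check trips
# problems = validate_events(df)
# assert not problems, f"data-quality checks failed: {problems}"
```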

Required Qualifications

  • 5+ years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment
  • Expertise in designing and building real-time data ingestion pipelines in AWS
  • Expertise in CloudFormation, including developing and using Custom Resources in CFTs
  • Expertise in developing GitHub workflows and integrating them with AWS
  • Expertise with the boto3 APIs for Lambda, S3, Glue, and Crawlers (see the sketch after this list)
  • Expertise in building APIs using AWS services
  • In-depth knowledge and hands-on experience with AWS Glue services and the AWS data engineering ecosystem
  • 5+ years of experience in data engineering or software development
  • Strong programming skills in languages such as Python, R, PyTorch, PySpark, Pandas, and Scala
  • Experience with SQL and database technologies such as MySQL, PostgreSQL, and Presto
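
To illustrate the boto3 expertise called for above, here is a minimal sketch that drives Glue, S3, and Lambda from Python. The crawler, job, bucket, and function names are hypothetical placeholders; credentials and region are assumed to come from the environment.

```python
import boto3

glue = boto3.client("glue")
s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Kick off a Glue crawler so the Data Catalog reflects new S3 partitions
glue.start_crawler(Name="raw-events-crawler")

# Start a Glue ETL job run, passing a runtime argument
run = glue.start_job_run(
    JobName="curate-events",
    Arguments={"--run_date": "2025-12-02"},
)
print("Glue job run id:", run["JobRunId"])

# List curated output objects in S3
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="curated/events/")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# Invoke a downstream Lambda asynchronously to notify consumers
lam.invoke(FunctionName="notify-consumers", InvocationType="Event")
```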

Preferred Qualifications

  • Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform
  • Hands-on experience developing and delivering data/ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.), Cloudera Data Platform, and Tableau (see the Athena sketch after this list)
  • Functional knowledge or prior experience in the life sciences research and development domain
  • Expertise with the boto3 APIs for Amazon DataZone
  • Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML
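
As a complement to the Athena item above, here is a minimal sketch of running an Athena query through boto3 and polling for completion. The database, query, and S3 output location are hypothetical placeholders.

```python
import time

import boto3

athena = boto3.client("athena")

# Start a query; the database and output bucket are placeholders
qid = athena.start_query_execution(
    QueryString="SELECT event_id, amount FROM curated_events LIMIT 10",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# Fetch results if the query succeeded (the first row is the header)
if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```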

Skills Required

  • Strong analytical and problem-solving skills
  • Excellent communication and collaboration skills
  • Knowledgeable in evolving trends in data platforms and product-based implementation
  • Comfortable working in a fast-paced environment with minimal oversight
  • Mentors other team members effectively to unlock full potential
  • Prior experience working in an Agile/Product-based environment
  • Experience and expertise in establishing agile and product-oriented teams that work effectively with teams in the US and at other global BMS sites
  • Initiates challenging opportunities that build strong capabilities for self and team
  • Demonstrates a focus on improving processes, structures, and knowledge within the team
  • Leads analysis of current states, delivers strong recommendations that account for the complexity of the environment, and executes to bring complex solutions to completion

Locations

  • Hyderabad, Telangana, India

Salary

Salary not disclosed

Estimated Salary Range (medium confidence)

2,500,000 - 4,500,000 INR / year

Source: AI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Strong analytical and problem-solving skillsintermediate
  • Excellent communication and collaboration skillsintermediate
  • Knowledgeable in evolving trends in Data platforms and Product-based implementationintermediate
  • Comfortable working in a fast-paced environment with minimal oversightintermediate
  • Mentors other team members effectively to unlock full potentialintermediate
  • Prior experience working in an Agile/Product-based environmentintermediate
  • Experience and expertise in establishing agile and product-oriented teams that work effectively with teams in US and other global BMS sitesintermediate
  • Initiates challenging opportunities that build strong capabilities for self and teamintermediate
  • Demonstrates a focus on improving processes, structures, and knowledge within the teamintermediate
  • Leads in analyzing current states, delivers strong recommendations in understanding complexity in the environment, and the ability to execute to bring complex solutions to completionintermediate

Required Qualifications

  • + years of hands-on experience working on implementing and operating data capabilities and cutting-edge data solutions (experience)
  • Expertise in designing and building real-time data ingestion data pipelines in AWS (experience)
  • Expertise in Cloud Formation and developing and using Custom Resources in CFTs (experience)
  • Expertise in developing GitHub Workflows & integrating GitHub workflows with AWS (experience)
  • Expertise with using boto3 APIs for Lambda, S3, Glue, Crawlers (experience)
  • Expertise in building APIs using AWS services (experience)
  • In-depth knowledge and hands-on experience with AWS Glue services and AWS Data engineering ecosystem (experience)
  • + years of experience in data engineering or software development (experience)
  • Strong programming skills in languages such as Python, R, PyTorch, PySpark, Pandas, Scala (experience)
  • Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto (experience)

Preferred Qualifications

  • Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform (experience)
  • Hands-on experience developing and delivering data, ETL solutions with technologies like AWS data services (Redshift, Athena, lakeformation, etc.), Cloudera Data Platform, Tableau labs (experience)
  • Functional knowledge or prior experience in Lifesciences Research and Development domain (experience)
  • Expertise with using boto3 APIs for Data Zone (experience)
  • Breadth of experience in technology capabilities that span the full life cycle of data management including data lakehouses, master/reference data management, data quality and analytics/AI ML (experience)

Responsibilities

  • Design, build, and maintain ETL pipelines, data products, and the evolution of data products
  • Utilize the most suitable data architecture required for the organization's data needs
  • Deliver high-quality data products and analytic-ready data solutions
  • Work with an end-to-end ownership mindset, innovate and drive initiatives through completion
  • Develop and maintain data models to support reporting and analysis needs
  • Optimize data storage and retrieval to ensure efficient performance and scalability
  • Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure data infrastructure supports their requirements
  • Ensure data quality and integrity through data validation and testing
  • Implement and maintain security protocols to protect sensitive data
  • Stay up to date with emerging trends and technologies in data engineering and analytics
  • Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and Data Community lead to shape and adopt data and technology strategy
  • Serve as the Subject Matter Expert on Data & Analytics Solutions
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional/non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability

Target Your Resume for "Senior Data & DevOps Engineer" , Bristol-Myers Squibb

Get personalized recommendations to optimize your resume specifically for Senior Data & DevOps Engineer. Takes only 15 seconds!

AI-powered keyword optimization
Skills matching & gap analysis
Experience alignment suggestions

Check Your ATS Score for "Senior Data & DevOps Engineer" , Bristol-Myers Squibb

Find out how well your resume matches this job's requirements. Get comprehensive analysis including ATS compatibility, keyword matching, skill gaps, and personalized recommendations.

ATS compatibility check
Keyword optimization analysis
Skill matching & gap identification
Format & readability score

Tags & Categories

Pharmaceutical, Healthcare
