Data Engineer

Amgen

Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

What you will do

  • Design, develop, and optimize data pipelines and workflows in Databricks (Spark, Delta Lake) for ingesting, transforming, and processing large-scale data (a minimal pipeline sketch follows this list)
  • Build ETL pipelines with Informatica or other ETL tools
  • Support data governance and metadata management
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
  • Adhere to best practices for coding, testing, and designing reusable code and components
  • Analyze business and technical requirements and translate them into well-scoped development tasks
  • Execute unit and integration tests, and contribute to maintaining software quality
  • Identify and fix bugs and defects during development and testing
  • Contribute to application maintenance and support by monitoring performance and reporting issues
  • Use CI/CD pipelines as part of DevOps practices and assist in the release process
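
As a concrete illustration of the first bullet, here is a minimal PySpark sketch of a Databricks-style pipeline that ingests raw CSV files, applies a simple transformation, and writes the result as a Delta Lake table. The paths, column names, and job name are hypothetical assumptions for illustration, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a SparkSession is provided as `spark`; building one
    # explicitly keeps the sketch runnable outside the platform (the
    # "delta" format requires the delta-spark package locally).
    spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

    # Ingest: read raw CSV files (the mount path is a hypothetical example).
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/mnt/raw/orders/")
    )

    # Transform: parse timestamps, derive a partition column, drop bad rows.
    clean = (
        raw
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .filter(F.col("order_id").isNotNull())
    )

    # Load: write a partitioned Delta Lake table for downstream consumers.
    (
        clean.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders/")
    )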

What we expect of you

  • Master’s or Bachelor’s degree and 4 to 8 years of experience in Computer Science, IT, or a related field
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Experience with software engineering best practices, including but not limited to version control, infrastructure as code, CI/CD, and automated testing (a small test sketch follows this list)
  • Knowledge of Python/R, Databricks, and cloud data platforms
  • Strong understanding of data governance frameworks, tools, and best practices
  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
  • Professional certifications: AWS Certified Data Engineer (preferred), Databricks certification (preferred)
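
To ground the automated-testing expectation, below is a minimal pytest sketch for a small, pure-Python transformation helper of the kind a pipeline might use. The function and its behavior are hypothetical examples, not code from Amgen.

    import pytest

    def normalize_region(raw: str) -> str:
        """Hypothetical helper: canonicalize free-text region names."""
        return raw.strip().lower().replace(" ", "_")

    # Parametrized cases exercise trimming, lowercasing, and spacing.
    @pytest.mark.parametrize(
        "raw, expected",
        [
            ("  North America ", "north_america"),
            ("EMEA", "emea"),
        ],
    )
    def test_normalize_region(raw, expected):
        assert normalize_region(raw) == expected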

Must-Have Skills

  • Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including Python for workflow orchestration and performance tuning of big data processing
  • Proficiency in data analysis tools (e.g., SQL)
  • Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores (see the SQL sketch after this list)
  • Strong programming skills in Python, PySpark, and SQL
  • Familiarity with Informatica and/or other ETL tools
  • Experience working with cloud data services (Azure, AWS, or GCP)
  • Strong understanding of data modeling and entity relationships
  • Excellent problem-solving and analytical skills
  • Strong communication and interpersonal abilities
  • High attention to detail and commitment to quality
  • Ability to prioritize tasks and work under pressure
  • Team-oriented with a proactive and collaborative mindset
  • Willingness to mentor junior developers and promote best practices
  • Adaptable to changing project requirements and evolving technology
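
To illustrate the SQL proficiency expected above, the snippet below registers a DataFrame as a temporary view and runs an aggregation with Spark SQL. It assumes the hypothetical curated Delta table from the earlier pipeline sketch; the table, columns, and path are illustrative, not from the posting.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-analysis").getOrCreate()

    # Load the (hypothetical) curated Delta table and expose it to SQL.
    orders = spark.read.format("delta").load("/mnt/curated/orders/")
    orders.createOrReplaceTempView("orders")

    # Extract and aggregate with plain SQL: monthly order counts and revenue.
    monthly = spark.sql("""
        SELECT date_trunc('month', order_ts) AS month,
               COUNT(*)                      AS order_count,
               SUM(amount)                   AS revenue
        FROM orders
        GROUP BY date_trunc('month', order_ts)
        ORDER BY month
    """)
    monthly.show()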

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

20,000–40,000 USD per year

Source: xAI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including Python for workflow orchestration and performance tuning of big data processing (intermediate)
  • Proficiency in data analysis tools (e.g., SQL) (intermediate)
  • Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores (intermediate)
  • Strong programming skills in Python, PySpark, and SQL (intermediate)
  • Familiarity with Informatica and/or other ETL tools (intermediate)
  • Experience working with cloud data services (Azure, AWS, or GCP) (intermediate)
  • Strong understanding of data modeling and entity relationships (intermediate)
  • Excellent problem-solving and analytical skills (intermediate)
  • Strong communication and interpersonal abilities (intermediate)
  • High attention to detail and commitment to quality (intermediate)
  • Ability to prioritize tasks and work under pressure (intermediate)
  • Team-oriented with a proactive and collaborative mindset (intermediate)
  • Willingness to mentor junior developers and promote best practices (intermediate)
  • Adaptable to changing project requirements and evolving technology (intermediate)

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
