
Specialist Software Development Engineer

Amgen

Full-time

Posted: November 12, 2025

Number of Vacancies: 1

Job Description

ABOUT AMGEN

What you will do

  • Lead integration teams across different tech stacks, such as Databricks and MuleSoft, for both development and support
  • Design, develop, and maintain data solutions for data generation, collection, and processing using Databricks and Airflow
  • Design, build, and maintain APIs and MuleSoft jobs, leveraging best practices to build robust integrations between CRM and cross-functional systems
  • Assist in the design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate and communicate effectively with product teams
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
  • Adhere to best practices for coding, testing, and designing reusable code/components
  • Explore new tools and technologies that will help improve ETL platform performance
  • Participate in sprint planning meetings and provide estimates on technical implementation
  • Lead production support, working with cross-functional teams and vendors to ensure stability and continuity
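As a rough illustration of the ETL work these bullets describe, here is a minimal sketch of an extract-transform-load step with a basic data-quality gate. It uses only the Python standard library and made-up data; in the actual role this logic would live in a Databricks/PySpark job orchestrated by Airflow, and all names below are hypothetical.

```python
# Minimal ETL sketch with a data-quality check (stdlib only; all data
# and function names are illustrative, not from any real Amgen system).

from datetime import date


def extract():
    # Simulated source rows, e.g. pulled from a CRM export.
    return [
        {"id": "1", "name": " Alice ", "visits": "3"},
        {"id": "2", "name": "Bob", "visits": None},   # fails the quality gate
        {"id": "3", "name": "Carol", "visits": "7"},
    ]


def is_valid(row):
    # Data-quality gate: required fields must be present and non-empty.
    return row["visits"] is not None and bool(row["name"].strip())


def transform(row):
    # Normalize types and whitespace; stamp the load date.
    return {
        "id": int(row["id"]),
        "name": row["name"].strip(),
        "visits": int(row["visits"]),
        "loaded_on": date.today().isoformat(),
    }


def run_pipeline():
    # Split rows into valid and rejected, transforming only the valid ones.
    valid, rejected = [], []
    for row in extract():
        (valid if is_valid(row) else rejected).append(row)
    return [transform(r) for r in valid], rejected


if __name__ == "__main__":
    clean, rejects = run_pipeline()
    print(len(clean), len(rejects))  # 2 valid rows, 1 rejected
```

The same shape scales up directly: in Spark the extract becomes a DataFrame read, the quality gate a filter, and the transform a column expression, with rejected rows routed to a quarantine table for review.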

What we expect of you

  • Doctorate degree, OR Master’s degree with 6 - 11 years of experience, OR Bachelor’s degree with 8 - 13 years of experience, OR Diploma with 10 - 14 years of experience, in Computer Science, IT, or a related field

Must-Have Skills

  • Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning of big data processing
  • Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
  • Skilled in MuleSoft API and MuleSoft integration job design and development
  • Excellent problem-solving skills and the ability to work with large, complex datasets
  • Strong understanding of data governance frameworks, tools, and best practices
  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
  • Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development
  • Strong understanding of data modeling, data warehousing, and data integration concepts
  • Knowledge of Anypoint Platform, Python/R, Databricks, SageMaker, Airflow, and AWS cloud data platforms
  • Proficiency with Databricks Assistant and other AI tools
  • Excellent critical-thinking and problem-solving skills
  • Strong communication and collaboration skills
  • Demonstrated ability to function in a team setting
  • Demonstrated presentation skills

Locations

  • Hyderabad, India

Salary

Salary not disclosed

Estimated Salary Range (high confidence)

50,000 - 80,000 USD / yearly

Source: xAI estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Software Engineering, Cloud, Full Stack, Information Systems, Technology
