Data Engineer-Data Platforms-AWS

IBM

Full-time

Posted: December 12, 2025

Number of Vacancies: 1

Job Description

📋 Job Overview

As a Data Engineer at IBM, you will develop, maintain, evaluate, and test big data solutions using the Spark framework with Python or Scala on Hadoop and AWS cloud data platforms. You will build data pipelines to ingest, process, and transform data, and develop streaming pipelines that scale with growing data volumes using big data and cloud technologies.

📍 Location: Pune, IN (Remote/Hybrid)

💼 Career Level: Entry Level

🎯 Key Responsibilities

  • Develop, maintain, evaluate, and test big data solutions
  • Build data pipelines to ingest, process, and transform data from files, streams, and databases
  • Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
  • Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies
  • Develop streaming pipelines
  • Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes

✅ Required Qualifications

  • 5–7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering
  • 4+ years of experience with big data technologies, with extensive data engineering experience in Spark using Python or Scala
  • 3+ years of experience with cloud data platforms on AWS
  • Exposure to streaming solutions and message brokers such as Kafka
  • Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB
  • Good to excellent SQL skills

⭐ Preferred Qualifications

  • AWS or Databricks certification, or Cloudera Certified Spark Developer

🛠️ Required Skills

  • Spark Framework
  • Python
  • Scala
  • Hadoop
  • AWS Cloud Data Platform
  • PySpark
  • Hive
  • HBase
  • NoSQL databases
  • Big Data technologies
  • Streaming pipelines
  • Apache Spark
  • Kafka
  • Cloud computing
  • AWS EMR
  • AWS Glue
  • Databricks
  • AWS Redshift
  • DynamoDB
  • SQL

🎁 Benefits & Perks

  • Opportunity to learn and develop yourself and your career
  • Encouragement to be courageous and experiment every day
  • Continuous trust and support in an environment where everyone can thrive
  • Growth-minded culture with openness to feedback and learning
  • Collaboration with colleagues for exceptional outcomes
  • Equal-opportunity employment

Locations

  • Pune, India (Remote)

Salary

Estimated Salary Range (medium confidence)

800,000 – 1,500,000 INR per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Software Engineering
