
Data Engineer-Data Platforms

IBM

Software and Technology Jobs

Full-time · Posted: Dec 12, 2025

Job Description

📋 Job Overview

As a Data Engineer at IBM, you will work in our Consulting Client Innovation Centers to deliver technical expertise to clients globally. Your role involves developing, maintaining, and optimizing big data solutions, including data models and ETL processes, to meet client needs and support data-driven organizations.

📍 Location: Pune, India (Remote/Hybrid)

💼 Career Level: Professional

🎯 Key Responsibilities

  • Design, build, optimize, and support new and existing data models and ETL processes based on client business requirements
  • Build, deploy, and manage data infrastructure to handle the needs of a rapidly growing data-driven organization
  • Coordinate data access and security to enable data scientists and analysts to easily access data
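The first responsibility above, building and supporting ETL processes, can be sketched as a minimal extract-transform-load step in plain Python. This is purely illustrative: in this role the work would typically run on Spark/PySpark at scale, and all function and field names here are hypothetical.

```python
# Hypothetical sketch of an extract-transform-load step in plain Python.
# A production version would use Spark/PySpark; all names are illustrative.

def extract(raw_rows):
    """Pretend source read: return raw records as a list of dicts."""
    return list(raw_rows)

def transform(rows):
    """Drop malformed records and normalize fields."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # skip records missing a required field
        cleaned.append({
            "region": row["region"].strip().upper(),
            "amount": float(row["amount"]),
        })
    return cleaned

def load(rows):
    """Aggregate into a simple per-region data model."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"region": " pune ", "amount": "120.5"},
    {"region": "PUNE", "amount": "79.5"},
    {"region": "delhi", "amount": None},  # malformed: dropped in transform
]
result = load(transform(extract(raw)))
print(result)  # {'PUNE': 200.0}
```

In a real pipeline each stage would read from and write to durable storage (e.g. S3 or Hive tables) rather than in-memory lists, but the stage boundaries stay the same.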

✅ Required Qualifications

  • 5+ years of experience in big data with Hadoop, Spark, Scala, and Python
  • Experience in building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

⭐ Preferred Qualifications

  • Experience with AWS services including S3, Athena, DynamoDB, Lambda
  • Experience with Jenkins and Git
  • Experience developing Python and PySpark programs for data analysis
  • Solid working experience using Python to develop custom frameworks for generating rules
  • Experience writing Python code to gather data from HBase and designing solutions with PySpark
  • Understanding of DevOps
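As a rough illustration of the "custom framework for generating rules" item above, a minimal rule engine in plain Python might look like the following. All names are hypothetical and not taken from the posting; a real framework would likely compile rules into Spark filter expressions instead.

```python
# Hypothetical sketch of a tiny rule-generation framework:
# rules are generated from (field, operator, value) specs and
# applied as predicates over records.

def make_rule(field, op, value):
    """Generate a rule (a predicate over a record) from a spec."""
    ops = {
        "gt": lambda a, b: a > b,
        "eq": lambda a, b: a == b,
    }
    check = ops[op]
    def rule(record):
        return field in record and check(record[field], value)
    return rule

def apply_rules(rules, records):
    """Return only the records that satisfy every rule."""
    return [r for r in records if all(rule(r) for rule in rules)]

rules = [make_rule("score", "gt", 50), make_rule("status", "eq", "active")]
records = [
    {"score": 72, "status": "active"},
    {"score": 40, "status": "active"},   # fails score rule
    {"score": 90, "status": "closed"},   # fails status rule
]
matched = apply_rules(rules, records)
print(matched)  # [{'score': 72, 'status': 'active'}]
```

Keeping rules as data (specs) rather than hand-written code is what makes them "generated": new rules can be loaded from configuration without redeploying the pipeline.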

🛠️ Required Skills

  • Hadoop
  • Spark
  • Scala
  • Python
  • HBase
  • Hive
  • AWS
  • S3
  • Athena
  • DynamoDB
  • Lambda
  • Jenkins
  • Git
  • PySpark
  • Apache Spark
  • DataFrames
  • RDDs
  • HiveContext
  • DevOps
  • Object-oriented programming
  • Functional programming
  • Java

🎁 Benefits & Perks

  • Opportunities to learn and grow your career
  • Encouragement to be courageous and experiment daily
  • Continuous trust and support in an inclusive environment
  • Growth-minded culture with openness to feedback and learning
  • Collaborative team-focused approach
  • Equal-opportunity employment

Locations

  • Pune, India (Remote)

Salary

Estimated Salary Range (medium confidence)

INR 2,500,000 – 4,200,000 per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Data & Analytics

