
AWS Data Engineer

Capgemini

Software and Technology Jobs


Full-time · Posted: Nov 19, 2025

Job Description

AWS Data Engineer

📋 Job Overview

The AWS Data Engineer role at Capgemini Invent involves designing and implementing scalable data pipelines on AWS to support advanced analytics and ML solutions. The position requires collaboration with data teams to ensure data quality, governance, and security while optimizing ETL processes and driving automation. Suited to professionals with 8–14 years of experience, the role focuses on building resilient big data ecosystems using AWS services and related technologies.

📍 Location: Bangalore

💼 Experience Level: Experienced Professionals

🏢 Business Unit: INVENT

🎯 Key Responsibilities

  • Design and implement scalable, secure, and high-performance data pipelines on AWS Cloud
  • Collaborate with data scientists, analysts, and application teams to deliver advanced analytics and ML-ready data solutions
  • Ensure data quality, governance, and security across all ingestion, transformation, and storage layers
  • Optimize ETL workflows, SQL queries, and big data processing for efficiency and reliability
  • Drive automation and CI/CD practices for seamless deployment and maintenance of data solutions
  • Monitor and troubleshoot data pipeline performance, ensuring accuracy and resilience
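
The responsibilities above follow the classic extract-transform-load shape. As a minimal, library-free sketch of that pattern (in practice the extract and load steps would target S3, Kinesis, or Redshift via Glue or PySpark; all names and the data-quality rule here are illustrative, not part of the posting):

```python
def extract(records):
    """Simulate reading raw events from an ingestion layer (e.g. Kinesis)."""
    return list(records)

def transform(rows):
    """Apply a data-quality gate and a simple enrichment."""
    clean = []
    for row in rows:
        # Data-quality rule: drop records missing a user id.
        if not row.get("user_id"):
            continue
        # Enrichment: derive a dollar amount from the raw cents field.
        clean.append({**row, "amount_usd": round(row["amount_cents"] / 100, 2)})
    return clean

def load(rows, sink):
    """Simulate writing curated records to a storage layer (e.g. Redshift)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
raw = [
    {"user_id": "u1", "amount_cents": 1250},
    {"user_id": None, "amount_cents": 900},   # fails the quality gate
    {"user_id": "u2", "amount_cents": 4999},
]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 — one record was dropped by the quality gate
```

The key design point the role's "data quality, governance, and security" bullet implies: validation belongs inside the transform stage, so bad records never reach the storage layer.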

✅ Required Qualifications

  • 8–14 years of experience in AWS-based data engineering and big data ecosystems
  • Hands-on expertise in AWS services: Kinesis, EMR, Glue, RDS, Athena, Redshift, Lambda, and S3
  • Strong programming skills in Python and PySpark, with advanced SQL optimization capabilities
  • Proven experience with big data tools (Hadoop, Spark, Kafka) and data lake/data warehouse architectures
  • Familiarity with relational (SQL Server, MySQL, PostgreSQL, Oracle) and NoSQL databases (Cassandra, MongoDB)

🛠️ Required Skills

  • AWS services: Kinesis, EMR, Glue, RDS, Athena, Redshift, Lambda, S3
  • Programming: Python, PySpark
  • SQL optimization
  • Big data tools: Hadoop, Spark, Kafka
  • Data architectures: data lake, data warehouse
  • Relational databases: SQL Server, MySQL, PostgreSQL, Oracle
  • NoSQL databases: Cassandra, MongoDB
  • ETL workflows
  • CI/CD practices
  • Data governance and security
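
Of the skills listed, SQL optimization is the most directly demonstrable. A hedged sketch of the core idea (covering a frequent predicate with an index), shown with the stdlib sqlite3 module so it runs anywhere; the same principle applies to Redshift, PostgreSQL, or MySQL, though each engine has its own planner and the table and index names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(f"u{i % 1000}", i) for i in range(10_000)],
)

query = "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM events WHERE user_id = 'u42'"

# Without an index, the predicate forces a full table scan.
plan_before = conn.execute(query).fetchall()
print(plan_before[0][3])  # e.g. "SCAN events"

# Adding an index on the filtered column turns the scan into an index search.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute(query).fetchall()
print(plan_after[0][3])   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```

Reading the planner's output before and after a change, rather than guessing, is the habit "advanced SQL optimization" generally refers to.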

Locations

  • Bangalore, India

Salary

Estimated Salary Range (medium confidence)

2,500,000–4,200,000 INR per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.



Tags & Categories

INVENT · Data & AI · Experienced Professionals

