
Data Engineer

Capgemini · Software and Technology Jobs

Full-time · Posted: Dec 8, 2025

Job Description

📋 Job Overview

As an Azure Data Engineer at Capgemini Engineering, you will design, develop, and optimize ETL pipelines using Azure services to handle data extraction, transformation, and loading for innovative engineering projects. The role involves migrating data from on-premises to cloud environments, creating Databricks notebooks for complex transformations, and collaborating with software engineering teams to solve data-related challenges. Join a global team focused on cutting-edge R&D across industries, with opportunities for continuous learning and impactful work.

📍 Location: Bangalore

💼 Experience Level: Experienced Professionals

🏢 Business Unit: Engineering and R&D Services

🎯 Key Responsibilities

  • Design and implement ETL environments using Databricks, Spark, and Azure Data Factory
  • Set up and manage Azure Databricks workspaces and clusters for business analytics
  • Perform data extraction, handling schemas, corrupt records, and parallelized processing (see the PySpark sketch after this list)
  • Execute transformations and loads, including user-defined functions and join optimizations
  • Optimize and automate ETL processes for production
  • Copy data from on-premises SQL Server to Azure Data Lake Store using ADF V2
  • Migrate data using Azure Data Factory, creating pipelines and data flows
  • Migrate SQL databases to Azure services such as Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse
  • Develop and execute transformation logic using Databricks notebooks
  • Design ETL pipelines using Databricks, Apache Spark, and ADF
  • Apply scientific methods to analyze and solve software engineering problems
  • Develop, maintain, and optimize software solutions
  • Supervise technical and administrative work of other engineers
  • Collaborate with stakeholders as a team player
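
To make the extraction and corrupt-record bullets concrete, here is a minimal PySpark sketch of schema-enforced ingestion of the kind the role describes. It is an illustration only: the storage account, container, paths, and column names are hypothetical, and it assumes a Databricks (or other Spark 3.x) environment where the `abfss://` locations are already authorized.

```python
from pyspark.sql import SparkSession
from pyspark.sql import types as T

# In a Databricks notebook `spark` already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Explicit schema; the extra _corrupt_record column captures rows that fail parsing.
schema = T.StructType([
    T.StructField("order_id", T.StringType(), True),
    T.StructField("amount", T.DoubleType(), True),
    T.StructField("order_ts", T.TimestampType(), True),
    T.StructField("_corrupt_record", T.StringType(), True),
])

raw = (
    spark.read
    .schema(schema)
    .option("mode", "PERMISSIVE")  # keep malformed rows instead of failing the job
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .json("abfss://landing@examplestorage.dfs.core.windows.net/orders/")  # hypothetical path
    .cache()  # Spark rejects filters that reference only the corrupt-record column on an uncached scan
)

good = raw.filter(raw["_corrupt_record"].isNull()).drop("_corrupt_record")
bad = raw.filter(raw["_corrupt_record"].isNotNull())  # quarantine for inspection

# Parallelism comes from input splits and shuffle partitions; repartition if needed.
good.repartition(8).write.mode("overwrite").parquet(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/"
)
```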

✅ Required Qualifications

  • 5+ years of total IT experience
  • 4+ years of relevant experience as an Azure Data Engineer
  • Hands-on experience with Azure Data Factory, Azure Databricks, Python, Azure Synapse, Azure Data Lake, PySpark, SQL Server, and Power BI
  • Knowledge of designing Extract, Transform, and Load (ETL) environments using Databricks, Spark, and Azure Data Factory (ADF)
  • Experience setting up Azure Databricks, configuring Databricks workspaces for business analytics, and managing Databricks clusters
  • Hands-on experience in data extraction (extract, schemas, corrupt record handling, parallelized code), transformations and loads (user-defined functions, join optimizations), and production optimization and automation of ETL
  • Experience copying data from on-premises SQL Server to Azure Data Lake Store (ADLS) using Azure Data Factory (ADF V2); a notebook-side sketch of this movement follows this list
  • Experience in data migration using Azure Data Factory, creating pipelines and data flows
  • Hands-on experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse, including controlling and granting database access
  • Experience working with Databricks notebooks to develop and execute transformation logic and data processing tasks
  • Extensive experience in designing ETL pipelines using Databricks, Apache Spark, and Azure Data Factory (ADF)
  • Works in the area of Software Engineering, encompassing development, maintenance, and optimization of software solutions/applications
  • Applies scientific methods to analyze and solve software engineering problems
  • Responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance
  • Exercises original thought and judgement, and supervises technical and administrative work of other software engineers
  • Builds skills and expertise in the software engineering discipline to meet standard expectations
  • Collaborates and acts as a team player with other software engineers and stakeholders
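
For the on-premises SQL Server to ADLS requirement, the Copy activity itself would normally be authored in the ADF V2 UI or as pipeline JSON. As a hedged illustration of the same data movement from inside a Databricks notebook, the sketch below does a partitioned JDBC read from SQL Server and a Parquet write to ADLS Gen2. Host, database, table, credentials, and paths are hypothetical, and the SQL Server JDBC driver is assumed to be available on the cluster (it ships with Databricks runtimes).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical connection details; in practice pull credentials from a secret scope
# (e.g. dbutils.secrets.get(...) on Databricks) rather than hard-coding them.
jdbc_url = "jdbc:sqlserver://onprem-sql.example.local:1433;databaseName=SalesDB"

orders = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_reader")             # placeholder
    .option("password", "<from-key-vault>")   # placeholder
    # Partitioned read: Spark issues one query per partition for parallel extraction.
    .option("partitionColumn", "OrderID")
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "8")
    .load()
)

# Land the extract as Parquet in ADLS Gen2 (hypothetical container and account).
(orders.write
    .mode("overwrite")
    .parquet("abfss://raw@examplestorage.dfs.core.windows.net/sales/orders/"))
```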

🛠️ Required Skills

  • Azure Data Factory
  • Azure Databricks
  • Python
  • Azure Synapse
  • Azure Data Lake
  • PySpark
  • SQL Server
  • Power BI
  • Spark
  • ADF V2
  • ADLS
  • Azure Data Lake Analytics
  • Azure SQL Database
  • Azure SQL Data Warehouse
  • Apache Spark
  • ETL pipelines
  • Data extraction
  • Schema management
  • Corrupt record handling
  • Parallelized processing
  • Transformations (join, merge, lookup, filter, remove duplicates, aggregation)
  • User-defined functions (UDFs)
  • Join optimizations (a broadcast-join sketch follows this list)
  • Databricks notebooks
  • Data cleaning
  • SQL troubleshooting
  • SQL Tools
  • Execution plans
  • Trace
  • Statistics
  • Index tuning wizard
  • Software engineering practices
  • Team collaboration
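
Several of the listed skills (user-defined functions, join optimizations) come down to routine PySpark idioms. The sketch below, using made-up in-memory data, shows a small UDF for cleanup plus a broadcast hint so the join against a small dimension table avoids a shuffle; built-in functions are generally preferable to UDFs when an equivalent exists.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# Made-up frames standing in for a large fact table and a small dimension table.
orders = spark.createDataFrame(
    [(1, " in", 1200.0), (2, "IN ", 80.0), (3, "us", 300.0)],
    ["order_id", "country_code", "amount"],
)
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")],
    ["country_code", "country_name"],
)

# UDF for custom cleanup (prefer F.trim/F.upper where built-ins suffice).
normalize = F.udf(lambda c: c.strip().upper() if c else None, StringType())

enriched = (
    orders
    .withColumn("country_code", normalize("country_code"))
    # Broadcasting the small side avoids shuffling the large fact table.
    .join(F.broadcast(countries), on="country_code", how="left")
)

enriched.show()
```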

🎁 Benefits & Perks

  • Access to one of the industry's largest digital learning platforms with 250,000+ courses and numerous certifications
  • Inclusive environment where people of all backgrounds feel encouraged and have a sense of belonging
  • Opportunity to work on cutting-edge projects in tech and engineering with industry leaders
  • Work on solutions to overcome societal and environmental challenges
  • Green office campuses in India running on 100% renewable electricity
  • Installed solar plants across India locations and Battery Energy Storage Solution (BESS) in Noida and Mumbai campuses
  • Chance to make a difference every day

Locations

  • Bangalore, India

Salary

Estimated Salary Range (medium confidence)

INR 2,500,000 - 4,200,000 per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Engineering and R&D Services · Software Engineering · Experienced Professionals
