
Data Engineer-Data Modeling

IBM

Software and Technology Jobs

Full-time · Posted: Dec 11, 2025

Job Description

📋 Job Overview

As a Data Engineer-Data Modeling at IBM Consulting, you will work in our Client Innovation Centers to develop, maintain, and optimize data pipelines, ensuring quality and efficiency across various systems. You will collaborate with engineering, analysis, and product teams to enhance data solutions and practices, contributing to the digital transformation of global clients using agile methodologies and AI-driven workflows.

📍 Location: NO City, BR (Remote/Hybrid)

💼 Career Level: Entry Level

🎯 Key Responsibilities

  • Develop, maintain, and optimize ETL pipelines, ensuring reliability, scalability, and adherence to business requirements
  • Manipulate, transform, and analyze data using Python, PySpark, and Pandas, following good performance and code-organization practices
  • Create and query databases with SQL, implementing efficient and secure queries
  • Participate in transactional and multidimensional data modeling, contributing to the structuring of data lakes, data warehouses, and analytical systems
  • Assist in the integration, ingestion, and organization of structured and unstructured data, collaborating on robust and flexible solutions
  • Support the definition and implementation of software-architecture best practices for data pipelines and systems
  • Use GitHub for code versioning, branch tracking, pull requests, and technical documentation
  • Collaborate on the development and planning of tests for pipeline validation and data quality
  • Monitor data pipelines and environments, contributing to incident prevention and resolution
  • Work collaboratively with analysts, engineers, and stakeholders to ensure efficient and secure use of data
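The day-to-day work described above follows the classic extract-transform-load shape. The sketch below is purely illustrative and not IBM's actual stack: the in-memory CSV source, the column names, and the plain-dict "sink" are all invented for the example, and a real pipeline would read from and write to managed storage.

```python
# Minimal ETL sketch with Pandas (illustrative only; source, columns,
# and sink are hypothetical, not from any IBM system).
import io
import pandas as pd

RAW_CSV = """order_id,customer,amount
1,acme,100.50
2,globex,
3,acme,25.00
"""

def extract(csv_text: str) -> pd.DataFrame:
    """Read raw data from a source (here, an in-memory CSV)."""
    return pd.read_csv(io.StringIO(csv_text))

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows, then aggregate revenue per customer."""
    cleaned = df.dropna(subset=["amount"])
    return (cleaned.groupby("customer", as_index=False)["amount"]
                   .sum()
                   .rename(columns={"amount": "total_amount"}))

def load(df: pd.DataFrame) -> dict:
    """Hand the result to a sink (here, a plain dict for inspection)."""
    return dict(zip(df["customer"], df["total_amount"]))

result = load(transform(extract(RAW_CSV)))
print(result)  # {'acme': 125.5} -- the globex row is dropped for its missing amount
```

The same three-stage split (extract, transform, load as separate functions) is what makes a pipeline testable and easy to monitor, which the test-planning and monitoring bullets above also ask for.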

✅ Required Qualifications

  • Solid knowledge of Python, PySpark, Pandas, and SQL
  • Experience in transactional and multidimensional data modeling
  • Knowledge of software architecture
  • Experience with GitHub for code versioning, branch management, pull requests, and technical documentation
  • Experience developing and planning tests
  • Knowledge of unstructured databases
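As a hedged illustration of the multidimensional modeling this role calls for, the sketch below builds a minimal star schema (one fact table joined to dimension tables) using Python's built-in SQLite. Every table and column name here is hypothetical, chosen only to show the pattern.

```python
# Minimal star-schema sketch in SQLite (stdlib). Table and column
# names are hypothetical, not taken from any real system.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Dimension tables hold descriptive attributes.
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    iso_date TEXT NOT NULL
);
-- The fact table holds measures plus foreign keys to the dimensions.
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL NOT NULL
);
""")

cur.execute("INSERT INTO dim_customer VALUES (1, 'acme')")
cur.execute("INSERT INTO dim_date VALUES (20250101, '2025-01-01')")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 20250101, 100.5), (2, 1, 20250101, 25.0)])

# An analytical query joins the fact table back to its dimensions.
cur.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY c.name
""")
row = cur.fetchone()
print(row)  # ('acme', 125.5)
```

Transactional (normalized) models optimize for safe writes; the star shape above denormalizes toward fast analytical reads, which is why both appear in the qualifications.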

🛠️ Required Skills

  • Python
  • PySpark
  • Pandas
  • SQL
  • Data modeling
  • Software architecture
  • GitHub
  • Test development and planning
  • Unstructured databases
  • ETL pipelines
  • Data integration
  • Data ingestion
  • Data organization
  • Collaboration
  • Agile methodologies
  • AI-driven workflows
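For the "test development and planning" skill above, data-quality checks on pipeline output are often written as small assertable functions. The sketch below shows one possible shape; the column names and sample frame are made up for illustration, and a real team might use a framework instead.

```python
# Hedged sketch of simple data-quality checks (hypothetical columns).
import pandas as pd

def check_no_nulls(df: pd.DataFrame, column: str) -> bool:
    """Pass only if the column contains no missing values."""
    return not df[column].isna().any()

def check_unique(df: pd.DataFrame, column: str) -> bool:
    """Pass only if the column is a valid unique key."""
    return df[column].is_unique

df = pd.DataFrame({"order_id": [1, 2, 3],
                   "amount": [10.0, None, 5.0]})
print(check_unique(df, "order_id"))  # True
print(check_no_nulls(df, "amount"))  # False -- the second row is missing an amount
```

Checks like these can run inside a test suite before deployment or inside the pipeline itself as monitoring, covering both the test-planning and incident-prevention responsibilities.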

Locations

  • NO City, BR, India (Remote)

Salary

Estimated Salary Range (medium confidence)

600,000 - 1,200,000 INR / year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Python (intermediate)
  • PySpark (intermediate)
  • Pandas (intermediate)
  • SQL (intermediate)
  • Data modeling (intermediate)
  • Software architecture (intermediate)
  • GitHub (intermediate)
  • Test development and planning (intermediate)
  • Unstructured databases (intermediate)
  • ETL pipelines (intermediate)
  • Data integration (intermediate)
  • Data ingestion (intermediate)
  • Data organization (intermediate)
  • Collaboration (intermediate)
  • Agile methodologies (intermediate)
  • AI-driven workflows (intermediate)



Tags & Categories

Data & Analytics

