
Data Engineer - BI

IBM

Software and Technology Jobs

Full-time · Posted: Dec 11, 2025

Job Description

📋 Job Overview

In this Data Engineer - BI role at IBM, you will work in one of our Client Innovation Centers, delivering technical and industry expertise to a wide range of clients globally. You will be responsible for data engineering tasks using modern tools like Databricks, Delta Lake, and Unity Catalog, focusing on transforming data into value through efficient pipelines, data governance, and strategic visualization.

📍 Location: NO City, BR (Remote/Hybrid)

💼 Career Level: Professional

🎯 Key Responsibilities

  • Work in agile teams using frameworks such as Scrum and Kanban
  • Use agile project-management tools such as Jira, Trello, or similar
  • Communicate clearly and objectively, drawing on negotiation and conflict-resolution skills
  • Collaborate actively with technical and business stakeholders
  • Create and optimize Databricks notebooks for data manipulation and analysis
  • Use Databricks SQL for efficient queries and report building
  • Configure and run Databricks Workflows to automate processes
  • Manage data with Unity Catalog, ensuring governance and security
  • Use Delta Lake to build reliable data lakes with versioning and quality control
  • Implement scalable, resilient data pipelines
  • Integrate with data monitoring, storage, and visualization tools
  • Apply critical thinking to solve complex problems
  • Focus on continuous improvement, operational efficiency, and value delivery
  • Work in environments with large data volumes and high-performance demands
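A recurring theme in the list above is building resilient pipelines. As a rough illustration of one common ingredient (retrying a transient failure before giving up), here is a minimal sketch in plain Python. It deliberately avoids the Databricks/PySpark APIs so it runs anywhere; `run_with_retries` and `clean_rows` are made-up names for this example, not part of any real framework.

```python
import time

def run_with_retries(step, payload, max_attempts=3, delay_s=0.0):
    """Run one pipeline step, retrying on failure.

    A toy version of the retry/backoff logic a resilient pipeline
    typically wraps around flaky I/O. Names are illustrative only.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return step(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error
            time.sleep(delay_s)

def clean_rows(rows):
    """A stand-in transform: drop rows with missing ids, normalise names."""
    return [
        {"id": r["id"], "name": r["name"].strip().lower()}
        for r in rows
        if r.get("id") is not None
    ]

raw = [{"id": 1, "name": "  Ana "}, {"id": None, "name": "x"}, {"id": 2, "name": "Bo"}]
print(run_with_retries(clean_rows, raw))
# [{'id': 1, 'name': 'ana'}, {'id': 2, 'name': 'bo'}]
```

In a real Databricks job the retry policy would usually live in the workflow configuration rather than in application code; the sketch only shows the idea.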

✅ Required Qualifications

  • Solid experience in data engineering
  • Proficiency in Python
  • Knowledge of relational databases
  • Proficiency in SQL
  • Experience with big data and distributed processing using Apache Spark and PySpark
  • Practical knowledge of Git/GitLab for code versioning, collaboration, and CI/CD workflows
  • Practical knowledge of Databricks
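The SQL proficiency asked for here is the kind of aggregation a BI rollup needs. As a small, self-contained illustration, the snippet below uses Python's built-in sqlite3 engine purely so it runs without a Databricks workspace; the query itself is plain ANSI SQL, and the table and data are invented for the example.

```python
import sqlite3

# In-memory database stands in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'south', 120.0),
        (2, 'south',  80.0),
        (3, 'north',  50.0);
""")

# Total revenue per region, highest first: a typical BI aggregation.
rows = conn.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('south', 200.0), ('north', 50.0)]
```

On Databricks SQL the same `GROUP BY` / `ORDER BY` statement would run against a Delta table instead of an in-memory SQLite database.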

⭐ Preferred Qualifications

  • Understanding of NoSQL databases with practical experience in MongoDB
  • Knowledge of Google Analytics and other data analysis tools for data extraction and processing
  • Knowledge of dimensional modeling (facts and dimensions)
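Dimensional modeling, the last item above, means separating measures (facts) from descriptive attributes (dimensions). The sketch below models a tiny star schema in plain Python dictionaries so the idea is visible without a warehouse; all table names and contents are invented for the example.

```python
# A product dimension: descriptive attributes keyed by surrogate key.
dim_product = {
    10: {"name": "keyboard", "category": "peripherals"},
    11: {"name": "monitor",  "category": "displays"},
}

# A sales fact table: measures plus a foreign key into the dimension.
fact_sales = [
    {"product_id": 10, "qty": 2, "amount": 60.0},
    {"product_id": 11, "qty": 1, "amount": 250.0},
    {"product_id": 10, "qty": 1, "amount": 30.0},
]

def revenue_by_category(facts, products):
    """Join facts to the product dimension and roll up revenue by category."""
    totals = {}
    for row in facts:
        category = products[row["product_id"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(revenue_by_category(fact_sales, dim_product))
# {'peripherals': 90.0, 'displays': 250.0}
```

In SQL the same rollup would be a join from the fact table to the dimension followed by a `GROUP BY` on the dimension attribute.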

🛠️ Required Skills

  • Databricks
  • Delta Lake
  • Unity Catalog
  • Scrum
  • Kanban
  • Jira
  • Trello
  • Communication
  • Negotiation
  • Conflict resolution
  • Collaboration
  • Databricks SQL
  • Databricks Workflows
  • Apache Spark
  • PySpark
  • Git
  • GitLab
  • CI/CD
  • NoSQL
  • MongoDB
  • Google Analytics
  • Dimensional modeling

🎁 Benefits & Perks

  • Opportunities to learn and develop your career
  • Encouragement to be courageous and experiment daily
  • Continuous trust and support in a thriving environment
  • Growth-minded culture, open to feedback and learning
  • Opportunities to give feedback and collaborate with colleagues
  • Equal-opportunity employment
  • Open to people with disabilities and rehabilitated workers

Locations

  • NO City, BR, India (Remote)

Salary

Estimated Salary Range (medium confidence)

INR 2,500,000 – 4,200,000 per year

Source: AI estimate

* This is an estimated range based on market data and may vary with experience and qualifications.

Tags & Categories

Data & Analytics
