
Lead Data Engineer - Databricks

JP Morgan Chase

Software and Technology Jobs

Full-time · Posted: Dec 9, 2025

Job Description

Lead Data Engineer - Databricks

Location: Columbus, OH, United States

Job Family: Data Engineering

About the Role

At JP Morgan Chase, we are at the forefront of financial innovation, leveraging cutting-edge data technologies to drive decisions that power global banking, investment, and asset management. As a Lead Data Engineer - Databricks on our Columbus, OH team, you will play a pivotal role in maintaining and evolving critical data pipelines and architectures. This position is integral to our agile squads, where you'll collaborate with data scientists, analysts, and business stakeholders to ensure our financial data ecosystems are reliable, scalable, and compliant with industry regulations. Your expertise will directly impact how we handle vast datasets for risk assessment, customer insights, and operational efficiency at one of the world's largest financial institutions.

In this leadership role, you will architect solutions using Databricks to process terabytes of transactional and market data, optimizing for performance in a cloud-native environment. You'll lead the design of ETL processes that integrate disparate data sources, ensuring high-quality outputs for downstream applications such as fraud detection and portfolio analytics. Working within JP Morgan Chase's commitment to innovation, you'll apply best practices in data governance, security, and automation while navigating the complexities of financial regulations such as SOX and GDPR. This hands-on position requires balancing technical depth with strategic oversight, mentoring team members, and contributing to the firm's data strategy. We value engineers who thrive in dynamic, collaborative settings and are passionate about the intersection of data engineering and finance.

Joining JP Morgan Chase means access to unparalleled resources, including advanced tools, global networks, and opportunities for career growth. If you have a proven track record in Databricks and a keen eye for scalable architectures, this role offers the chance to shape the future of data at a premier financial services leader.

Key Responsibilities

  • Design, develop, and maintain robust data pipelines using Databricks to support critical financial operations and analytics
  • Collaborate with agile teams to integrate data architectures across multiple technical domains, ensuring seamless data flow for banking applications
  • Optimize data processing workflows for performance, scalability, and cost-efficiency in a high-volume financial environment
  • Implement data governance and quality controls to comply with JP Morgan Chase's regulatory and security standards
  • Troubleshoot and resolve complex data issues, minimizing downtime for mission-critical financial systems
  • Mentor junior engineers and contribute to best practices in data engineering within the firm
  • Integrate machine learning models into data pipelines to enhance predictive analytics for risk management and fraud detection
  • Monitor and analyze data usage patterns to recommend improvements in architecture for JP Morgan's global data ecosystem
  • Ensure data privacy and security measures are embedded in all pipelines, adhering to financial industry regulations
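In practice, the governance and quality-control responsibilities above often translate into per-record validation and quarantine steps inside a pipeline stage. As a rough, hypothetical sketch in plain Python (the field names, rules, and currency whitelist are invented for illustration; a real Databricks job would typically express such checks as Spark DataFrame expectations rather than row-by-row loops):

```python
from decimal import Decimal, InvalidOperation

def validate_transaction(record):
    """Toy data-quality check: require an account id, a parseable
    positive amount, and a currency from a known whitelist.
    Returns a list of error strings (empty means the record is clean)."""
    errors = []
    if not record.get("account_id"):
        errors.append("missing account_id")
    try:
        amount = Decimal(str(record.get("amount", "")))
        if amount <= 0:
            errors.append("non-positive amount")
    except InvalidOperation:
        errors.append("unparseable amount")
    if record.get("currency") not in {"USD", "EUR", "GBP"}:
        errors.append("unknown currency")
    return errors

def partition_batch(batch):
    """Split a batch into clean rows and quarantined rows, the way an
    ETL stage might route bad records to a quarantine table for audit."""
    clean, quarantined = [], []
    for rec in batch:
        errs = validate_transaction(rec)
        (clean if not errs else quarantined).append((rec, errs))
    return clean, quarantined
```

Routing failures to a quarantine set, rather than dropping them, preserves the data lineage that financial audits typically require.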

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred
  • 5+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines
  • Proficiency in Databricks, Spark, and cloud-based data platforms such as AWS or Azure
  • Strong experience with ETL processes, data modeling, and ensuring data quality in financial datasets
  • Demonstrated ability to work in agile environments, collaborating with cross-functional teams in a fast-paced financial services setting
  • Knowledge of regulatory compliance standards like GDPR, SOX, and financial data security protocols
  • Experience with version control systems (e.g., Git) and CI/CD pipelines

Preferred Qualifications

  • Experience in the financial services industry, particularly with banking or investment data systems
  • Certification in Databricks or Apache Spark
  • Familiarity with machine learning pipelines and big data technologies like Hadoop or Kafka
  • Prior leadership experience mentoring junior data engineers
  • Advanced SQL and Python programming skills applied to financial analytics

Required Skills

  • Expertise in Databricks and Apache Spark for large-scale data processing
  • Proficiency in Python, Scala, or Java for data pipeline development
  • Advanced SQL querying and optimization for financial databases
  • Experience with ETL tools like Apache Airflow or Talend
  • Knowledge of cloud platforms (AWS, Azure) and containerization (Docker, Kubernetes)
  • Strong problem-solving and analytical skills in data troubleshooting
  • Agile methodologies and collaboration tools (Jira, Confluence)
  • Understanding of data security and encryption in financial contexts
  • Machine learning frameworks (e.g., MLflow in Databricks)
  • Communication skills for cross-team stakeholder engagement
  • Version control with Git and CI/CD practices
  • Big data technologies like Kafka for real-time streaming
  • Regulatory knowledge in finance (e.g., data lineage for audits)
  • Performance tuning for cost-effective data architectures

Benefits

  • Comprehensive health, dental, and vision insurance plans
  • 401(k) retirement savings plan with company matching contributions
  • Generous paid time off, including vacation, sick days, and parental leave
  • Professional development programs, including tuition reimbursement and access to internal training at JP Morgan Chase University
  • Employee stock purchase plan and performance-based bonuses
  • Wellness programs with gym memberships and mental health support
  • Flexible work arrangements, including hybrid options in Columbus, OH
  • On-site amenities and commuter benefits for a balanced work-life integration

JP Morgan Chase is an equal opportunity employer.

Locations

  • Columbus, OH, United States

Salary

Estimated Salary Range (high confidence)

USD 180,000–250,000 per year

Source: AI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.


Tags & Categories

Data Engineering · Financial Services · Banking · JP Morgan

