
Lead Software Engineer - AWS/PySpark/Data Engineer/Databricks

JP Morgan Chase

Software and Technology Jobs


Full-time · Posted: Oct 7, 2025

Job Description

Location: OH, United States

Job Family: Software Engineering

About the Role

At JP Morgan Chase, we are at the forefront of financial innovation, leveraging cutting-edge technology to power global banking, investment, and asset management services. As a Lead Software Engineer specializing in AWS, PySpark, data engineering, and Databricks, you will play a pivotal role in our agile teams, developing robust solutions that handle petabyte-scale financial data. Based in Columbus, OH, you will contribute to mission-critical projects in risk modeling, transaction processing, and regulatory reporting, ensuring our systems deliver actionable insights that support client decisions and maintain market leadership. Your expertise will drive the transformation of raw financial datasets into secure, scalable analytics platforms that comply with stringent industry standards.

In this leadership position within our Software Engineering family, you will architect and optimize data pipelines using PySpark for real-time processing of trading volumes and customer behavior, while harnessing Databricks for collaborative environments that accelerate model deployment. You will integrate AWS services to build resilient cloud-native infrastructure, automating workflows that improve operational efficiency and reduce latency in high-frequency trading scenarios. Collaborating with data scientists, analysts, and business stakeholders, you will lead initiatives to ingest diverse data sources, from market feeds to internal ledgers, ensuring data quality and governance align with JP Morgan Chase's commitment to ethical AI and data privacy.

Joining JP Morgan Chase means being part of a dynamic culture that values innovation, inclusivity, and professional growth. You will mentor team members, foster agile best practices, and build solutions that address evolving challenges in the financial services landscape, such as cybersecurity threats and sustainable-investing analytics. With opportunities to influence global strategies and access to world-class resources, this role offers the chance to advance your career while contributing to the stability and growth of one of the world's leading financial institutions.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using PySpark and Databricks to process large volumes of financial transaction data
  • Collaborate with cross-functional agile teams to deliver critical tech solutions for risk analytics, fraud detection, and portfolio management
  • Leverage AWS cloud infrastructure to optimize data storage, processing, and analytics for high-stakes financial applications
  • Implement data ingestion, transformation, and loading (ETL) processes to ensure data integrity and availability for downstream systems
  • Troubleshoot and resolve complex data engineering issues, minimizing downtime in 24/7 financial operations
  • Conduct code reviews and mentor junior engineers on best practices for secure and efficient data handling
  • Integrate machine learning models into data workflows to enhance predictive analytics for market trends
  • Ensure all solutions comply with JP Morgan Chase's security protocols and financial regulations
  • Monitor and optimize performance of data systems to handle increasing volumes of real-time financial data
  • Contribute to innovation initiatives, exploring emerging technologies to drive efficiency in Chase's data ecosystem
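To make the ingestion-transformation-loading responsibilities above concrete, here is a minimal extract-transform-load sketch in plain Python. In production this logic would run as a PySpark job over S3 or Databricks data; plain Python stands in here so the flow is easy to follow, and the record schema, status values, and sample rows are all hypothetical.

```python
# Minimal ETL sketch: parse raw rows, filter out unsettled transactions,
# and aggregate amounts per account (a stand-in for a warehouse write).
from collections import defaultdict

def extract(raw_rows):
    """Extract: parse raw CSV-like rows into dicts (hypothetical schema)."""
    for row in raw_rows:
        account, amount, status = row.split(",")
        yield {"account": account, "amount": float(amount), "status": status}

def transform(records):
    """Transform: keep only settled transactions (a stand-in quality rule)."""
    return (r for r in records if r["status"] == "SETTLED")

def load(records):
    """Load: aggregate amounts per account into an in-memory 'sink'."""
    totals = defaultdict(float)
    for r in records:
        totals[r["account"]] += r["amount"]
    return dict(totals)

raw = ["A1,100.0,SETTLED", "A1,50.0,PENDING", "A2,200.0,SETTLED"]
totals = load(transform(extract(raw)))
print(totals)  # {'A1': 100.0, 'A2': 200.0}
```

The same three-stage shape maps directly onto a PySpark job: `extract` becomes a DataFrame read, `transform` a `filter`, and `load` a `groupBy().sum()` followed by a write.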

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred
  • 5+ years of professional software engineering experience, with a focus on data engineering and cloud technologies
  • Proficiency in AWS services including EC2, S3, Lambda, and EMR
  • Strong experience with PySpark for large-scale data processing and ETL pipelines
  • Hands-on expertise in Databricks for collaborative data analytics and machine learning workflows
  • Demonstrated ability to work in agile environments, delivering high-quality code in a fast-paced setting
  • Experience with financial data handling, ensuring compliance with regulatory standards like GDPR and SEC requirements

Preferred Qualifications

  • Master's degree in Data Science or related field
  • Certification in AWS (e.g., AWS Certified Solutions Architect) or Databricks
  • Prior experience in the financial services industry, particularly with risk management or trading systems
  • Knowledge of Python libraries such as Pandas, NumPy, and Scikit-learn for data manipulation
  • Familiarity with CI/CD pipelines using tools like Jenkins or GitHub Actions

Required Skills

  • PySpark for distributed data processing
  • AWS cloud services (S3, EMR, Glue)
  • Databricks platform for data engineering
  • Python programming and scripting
  • SQL for database querying and optimization
  • ETL pipeline design and implementation
  • Agile methodologies and Scrum practices
  • Data modeling and schema design
  • Version control with Git
  • Problem-solving in high-pressure environments
  • Communication and collaboration in team settings
  • Knowledge of financial regulations and data security
  • Big data technologies like Hadoop or Spark
  • Machine learning integration with data pipelines
  • Performance tuning for large-scale systems
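As a small illustration of the SQL-querying and performance-tuning items above, the following self-contained example uses Python's built-in `sqlite3` module. The table name, columns, and data are hypothetical; the point is the basic optimization move of indexing the column you filter on so the planner can avoid a full table scan.

```python
# SQL query plus an index, using only the Python standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (account TEXT, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [("A1", 100.0), ("A1", 50.0), ("A2", 200.0)])

# An index on the filter column lets lookups avoid scanning every row.
conn.execute("CREATE INDEX idx_txns_account ON txns (account)")

total = conn.execute(
    "SELECT SUM(amount) FROM txns WHERE account = ?", ("A1",)
).fetchone()[0]
print(total)  # 150.0
```

At warehouse scale the same idea shows up as partitioning and clustering keys rather than B-tree indexes, but the reasoning (organize storage around the predicate) is identical.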

Benefits

  • Competitive base salary and performance-based annual bonuses
  • Comprehensive health, dental, and vision insurance plans
  • 401(k) retirement savings plan with generous company matching
  • Paid time off including vacation, sick days, and parental leave
  • Professional development opportunities with tuition reimbursement and access to internal training programs
  • Employee stock purchase plan and financial wellness resources
  • Flexible work arrangements, including hybrid options in Columbus, OH
  • Wellness programs with gym memberships, mental health support, and onsite fitness facilities

JP Morgan Chase is an equal opportunity employer.

Locations

  • OH, US

Salary

Estimated Salary Range (high confidence)

180,000 - 250,000 USD / yearly

Source: AI estimate

* This is an estimated range based on market data and may vary based on experience and qualifications.



Tags & Categories

Software Engineering · Financial Services · Banking · JP Morgan

