
Data Engineer III - PySpark/AWS

JP Morgan Chase

Software and Technology Jobs

Full-time | Posted: Nov 25, 2025

Job Description

Location: Plano, TX, United States

Job Family: Data Engineering

About the Role

At JP Morgan Chase, we are at the forefront of financial innovation, leveraging cutting-edge technology to deliver world-class services to our clients. As a Data Engineer III - PySpark/AWS in our Plano, TX office, you will play a pivotal role in developing, testing, and maintaining critical data pipelines and architectures that power our banking, investment, and asset management operations. This position within the Data Engineering category involves working with massive datasets from global markets, ensuring seamless data flow for risk analysis, fraud detection, and customer insights. You will collaborate with diverse teams to build robust, scalable solutions that drive business decisions in a highly regulated environment.

Your day-to-day responsibilities will include designing efficient PySpark-based ETL processes on AWS infrastructure to handle terabytes of transactional and market data daily. You will optimize data storage in services like S3 and Redshift, implement real-time streaming with Kafka, and ensure compliance with financial standards such as Basel III and data privacy laws. By troubleshooting complex issues and automating workflows, you will contribute to reducing latency in our analytics platforms, enabling faster and more accurate financial reporting for stakeholders worldwide.

We value engineers who thrive in collaborative, innovative settings and are passionate about the intersection of technology and finance. This role offers opportunities for growth within JP Morgan Chase's dynamic tech ecosystem, where you can influence strategic data initiatives that impact millions of customers. If you have a strong technical foundation and a commitment to excellence, join us in shaping the future of financial services.
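
To give a concrete flavor of the work described above, here is a minimal, illustrative PySpark ETL sketch: it reads transaction records from S3, computes a simple per-account daily aggregate, and writes partitioned Parquet back to S3 for downstream loading (for example into Redshift). The bucket names, paths, and column names are hypothetical placeholders rather than JP Morgan Chase systems, and the job assumes a standard Spark runtime such as EMR or Glue.

# Illustrative sketch only: paths, bucket names, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-transaction-aggregation")
    .getOrCreate()
)

# Read raw transaction data from S3.
# ("s3://" works on EMR/Glue; open-source Spark typically uses "s3a://".)
transactions = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Basic cleansing plus a per-account daily aggregate.
daily_totals = (
    transactions
    .filter(F.col("amount").isNotNull())
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .groupBy("account_id", "txn_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Write partitioned Parquet back to S3 for downstream consumers
# (e.g., a Redshift COPY or Spectrum external table).
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-bucket/curated/daily_totals/")
)

spark.stop()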

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines using PySpark and AWS to process high-volume financial transaction data
  • Collaborate with cross-functional teams including data scientists and analysts to ensure data quality and availability for business intelligence
  • Implement and maintain data architectures that support real-time analytics and reporting for JP Morgan Chase's global operations
  • Test and deploy data solutions in a secure, compliant manner adhering to financial regulations like GDPR and SOX
  • Monitor and troubleshoot data pipelines to minimize downtime and ensure reliability in production environments
  • Integrate data from diverse sources such as market feeds, customer databases, and external APIs into centralized repositories
  • Contribute to data governance initiatives, including metadata management and lineage tracking for audit purposes
  • Mentor junior engineers and participate in code reviews to uphold best practices in data engineering
  • Stay updated on emerging technologies and recommend improvements to enhance data processing efficiency

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred
  • 5+ years of experience in data engineering, with a focus on building and maintaining data pipelines
  • Proficiency in PySpark for large-scale data processing and ETL operations
  • Hands-on experience with AWS services including S3, EMR, Glue, and Lambda
  • Strong understanding of data modeling, warehousing, and integration in financial systems
  • Experience with version control systems like Git and CI/CD pipelines
  • Ability to work in a fast-paced, regulated environment handling sensitive financial data

Preferred Qualifications

  • Experience in the financial services industry, particularly with banking or investment data
  • Familiarity with JP Morgan Chase's internal tools and data governance frameworks
  • Knowledge of machine learning pipelines and big data technologies like Hadoop or Kafka
  • Certifications such as AWS Certified Data Analytics or Databricks Certified Data Engineer

Required Skills

  • PySpark and Spark SQL for distributed data processing
  • AWS cloud services (S3, EMR, Glue, Redshift)
  • Python programming for scripting and automation
  • SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB)
  • ETL tools and data integration frameworks
  • Data modeling and schema design
  • Version control with Git and Agile methodologies
  • Problem-solving and analytical thinking
  • Communication and collaboration in team settings
  • Knowledge of financial data standards and compliance
  • Big data technologies (Hadoop, Kafka)
  • CI/CD pipeline implementation
  • Performance tuning and optimization
  • Attention to detail in handling sensitive data
  • Adaptability to evolving regulatory requirements

Benefits

  • Competitive base salary and performance-based annual bonuses
  • Comprehensive health, dental, and vision insurance plans
  • 401(k) retirement savings plan with company matching contributions
  • Generous paid time off, including vacation, sick leave, and parental leave
  • Professional development opportunities, including tuition reimbursement and access to internal training programs
  • Employee stock purchase plan and financial wellness resources
  • On-site fitness centers and wellness programs at JP Morgan Chase facilities
  • Flexible work arrangements, including hybrid options in Plano, TX

JP Morgan Chase is an equal opportunity employer.

Locations

  • Plano, TX, United States

Salary

Estimated Salary Range (high confidence)

160,000 - 220,000 USD per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.




Tags & Categories

Data Engineering, Financial Services, Banking, JP Morgan

