
Data Engineer III

JP Morgan Chase

Employment Type: Full-time

Posted: December 11, 2025

Number of Vacancies: 1

Job Description

Data Engineer III

Location: Hyderabad, Telangana, India

Job Family: Data Engineering

About the Role

At JP Morgan Chase, we are a leading global financial services firm with operations spanning investment banking, consumer and community banking, commercial banking, and asset and wealth management. Our Data Engineering team plays a pivotal role in leveraging data to drive innovation, manage risks, and deliver superior client experiences. As a Data Engineer III in our Hyderabad office, you will be at the forefront of developing robust data pipelines and architectures that support critical functions across our diverse business lines. This role requires a blend of technical expertise and domain knowledge in financial services to handle complex datasets from global markets, ensuring seamless integration and actionable insights for decision-makers.

In this position, you will design and implement scalable data solutions using technologies like Apache Spark, Kafka, and cloud platforms such as AWS. You will collaborate with data scientists, analysts, and business units to translate financial requirements, such as real-time fraud detection, portfolio analytics, or regulatory reporting, into efficient data workflows. Responsibilities include optimizing ETL processes for high-volume transaction data, ensuring compliance with stringent regulations like Basel III and data privacy laws, and proactively identifying opportunities to enhance system performance and reliability. Your work will directly impact JP Morgan's ability to navigate the dynamic financial landscape while maintaining the highest standards of data governance and security.

We seek a proactive professional who is passionate about data engineering in a high-stakes environment. With 5+ years of experience, you will thrive in our collaborative culture, contributing to innovative projects that redefine how we harness data for competitive advantage. Join JP Morgan Chase in Hyderabad, a hub for our technology and operations excellence in India, and advance your career while helping shape the future of global finance.
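
For illustration only (this sketch is not part of the posting), the kind of batch ETL step described above might look like the following minimal PySpark job; the S3 paths, column names, and aggregation logic are hypothetical placeholders.

```python
# Hypothetical sketch of a daily batch ETL step; paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-transaction-etl").getOrCreate()

# Extract: read raw transaction records (assumed location and schema).
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Transform: basic cleansing plus a daily aggregate per account.
daily_totals = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("trade_date", F.to_date("event_time"))
       .groupBy("account_id", "trade_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Load: write the aggregate back out, partitioned by date for downstream reporting.
daily_totals.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/daily_transaction_totals/"
)

spark.stop()
```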

Key Responsibilities

  • Design, develop, and optimize scalable data pipelines to support JP Morgan Chase's financial analytics and reporting needs
  • Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to understand data requirements
  • Implement and maintain data architectures using tools like Spark, Hadoop, and cloud services to handle high-volume financial transactions
  • Ensure data quality, integrity, and compliance with regulatory standards such as GDPR and SOX in the banking sector
  • Test and deploy data solutions, troubleshooting issues in production environments to minimize downtime
  • Integrate data from diverse sources including market feeds, transaction systems, and external APIs (a streaming example is sketched after this list)
  • Monitor and performance-tune data systems to support real-time decision-making in investment banking and asset management
  • Contribute to innovation in data engineering practices, such as adopting serverless architectures for cost efficiency
  • Document data pipelines and architectures to facilitate knowledge sharing within the team
  • Stay updated on emerging technologies and apply them to enhance JP Morgan's data infrastructure
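
As a purely illustrative sketch, not taken from the posting, consuming a real-time market feed of the kind referenced above could look like the following Spark Structured Streaming job; the Kafka brokers, topic name, and message schema are assumptions, and the spark-sql-kafka connector is assumed to be available on the classpath.

```python
# Hypothetical sketch: reading a real-time feed from Kafka with Spark Structured Streaming.
# Broker addresses, topic name, and the JSON message schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("market-feed-stream").getOrCreate()

# Assumed shape of each message on the (hypothetical) "market-ticks" topic.
tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Source: subscribe to the Kafka topic (requires the spark-sql-kafka connector).
ticks = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "market-ticks")
         .load()
         .select(F.from_json(F.col("value").cast("string"), tick_schema).alias("tick"))
         .select("tick.*")
)

# Transformation: 1-minute average price per symbol, with a watermark for late data.
avg_price = (
    ticks.withWatermark("event_time", "2 minutes")
         .groupBy(F.window("event_time", "1 minute"), "symbol")
         .agg(F.avg("price").alias("avg_price"))
)

# Sink: write to the console here; a real pipeline would target a warehouse or another topic.
query = avg_price.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```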

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred
  • 5+ years of experience in data engineering roles, with a focus on building and maintaining data pipelines
  • Proficiency in programming languages such as Python, Java, or Scala for data processing
  • Strong experience with big data technologies like Hadoop, Spark, and Kafka
  • Knowledge of cloud platforms such as AWS, Azure, or Google Cloud for scalable data solutions
  • Experience in ETL processes and data modeling in a financial services environment
  • Ability to work in a fast-paced, regulated industry with a focus on data security and compliance

Preferred Qualifications

  • Experience in financial services or banking sector, particularly with regulatory data handling
  • Certification in cloud technologies (e.g., AWS Certified Data Analytics) or big data tools
  • Familiarity with machine learning pipelines and integration with data engineering workflows
  • Prior work at a large-scale financial institution like JP Morgan Chase
  • Advanced degree in Data Science or related field with publications or projects in data architecture

Required Skills

  • Expertise in Python, Java, or Scala for data pipeline development
  • Proficiency with Apache Spark and Hadoop for big data processing
  • Experience with ETL tools like Apache Airflow or Talend (an orchestration sketch follows this list)
  • Knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB)
  • Cloud computing skills in AWS, Azure, or GCP
  • Data modeling and warehousing techniques (e.g., Snowflake, Redshift)
  • Familiarity with Kafka or similar for real-time data streaming
  • Understanding of financial data concepts like risk modeling and transaction processing
  • Strong problem-solving and analytical skills
  • Excellent communication and collaboration abilities
  • Agile methodologies and version control with Git
  • Data security practices including encryption and access controls
  • Performance optimization for large-scale datasets
  • Scripting and automation for DevOps integration
  • Adaptability to regulatory changes in the financial industry
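
As a minimal, hypothetical sketch (not from the posting), an Apache Airflow DAG of the sort the ETL-tooling item above points to might look like this; the DAG id, schedule, and task callables are placeholders, and Airflow 2.4+ is assumed for the `schedule` argument.

```python
# Hypothetical sketch: a minimal Apache Airflow DAG orchestrating a daily ETL run.
# The DAG id, schedule, and task functions are illustrative placeholders (Airflow 2.4+ assumed).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting raw transactions")


def transform():
    # Placeholder: cleanse and aggregate the extracted data.
    print("transforming transactions")


def load():
    # Placeholder: publish curated data to the warehouse.
    print("loading curated transactions")


with DAG(
    dag_id="daily_transaction_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence.
    extract_task >> transform_task >> load_task
```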

Benefits

  • Competitive base salary and performance-based bonuses aligned with financial services industry standards
  • Comprehensive health, dental, and vision insurance coverage for employees and dependents
  • Retirement savings plan with generous company matching contributions
  • Paid time off including vacation, sick leave, and parental leave policies
  • Professional development opportunities through JP Morgan's internal training programs and certifications
  • Employee wellness programs including gym memberships and mental health support
  • Flexible work arrangements with hybrid options in Hyderabad office
  • Stock purchase plan and other financial perks tailored to banking professionals

JP Morgan Chase is an equal opportunity employer.

Locations

  • Hyderabad, IN

Salary

Estimated Salary Range (medium confidence)

3,000,000 - 5,000,000 INR per year

Source: AI estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Tags & Categories

Data Engineering, Financial Services, Banking, JP Morgan
