Data Engineer, Data Solutions & Initiatives

Apple

Software and Technology Jobs

Full-time · Posted: Oct 17, 2025

Job Description

The people at Apple don't just create products; they create the kind of wonder that has revolutionized entire industries. It is the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it.

Data Solutions & Initiatives is a dynamic team within Apple's Worldwide Sales organization, focused on driving innovation through product design, engineering, and portfolio management. We work in a startup-like atmosphere: we move quickly, experiment boldly, and expect individuals to take full ownership of what they deliver.

We are looking for a hands-on Software Engineer to build and operate the data infrastructure that powers analytics, automation, and AI across our business. You will work on distributed data systems, cloud-native services, and internal tooling that makes data discoverable, trustworthy, and ready for intelligent applications. The role is based in our Singapore engineering hub and works closely with our US-based team to deliver reliable, scalable data solutions that fuel decision-making, modeling, and business operations. Day-to-day duties are listed under Responsibilities below.

Locations

  • Singapore 569141

Salary

Estimated Salary Range (medium confidence)

25,000,000 – 45,000,000 INR per year

Source: AI-estimated

* This range is estimated from market data and may vary with experience and qualifications.

Skills Required

  • product design (intermediate)
  • engineering (intermediate)
  • portfolio management (intermediate)
  • Software Engineering (intermediate)
  • building data infrastructure (intermediate)
  • operating data infrastructure (intermediate)
  • distributed data systems (intermediate)
  • cloud-native services (intermediate)
  • internal tooling (intermediate)
  • designing scalable data pipelines (intermediate)
  • building scalable data pipelines (intermediate)
  • operating scalable data pipelines (intermediate)
  • data mesh principles (intermediate)
  • publishing data products (intermediate)
  • discovering data products (intermediate)
  • consuming data products (intermediate)
  • Spark (intermediate)
  • Kafka (intermediate)
  • Airflow (intermediate)
  • building real-time pipelines (intermediate)
  • building batch pipelines (intermediate)
  • metadata management (intermediate)
  • lineage management (intermediate)
  • catalog integrations (intermediate)
  • defining schemas (intermediate)
  • defining contracts (intermediate)
  • defining access patterns (intermediate)
  • improving interoperability (intermediate)
  • automating testing (intermediate)
  • automating validation (intermediate)
  • automating deployment (intermediate)
  • CI/CD pipelines (intermediate)
  • monitoring data pipelines (intermediate)
  • troubleshooting data pipelines (intermediate)
  • observability (intermediate)
  • scalability (intermediate)
  • cost efficiency (intermediate)
  • collaboration (intermediate)
  • platform engineering (intermediate)
  • self-serve tooling (intermediate)
  • streamlining onboarding (intermediate)

Required Qualifications

  • 5+ years of experience building and operating data pipelines and distributed systems in cloud environments (AWS, GCP, or Azure).
  • Hands-on experience implementing data mesh concepts: data products, domain ownership, federated standards, and self-service patterns (see the sketch after this list).
  • Strong programming skills in Python, Scala, or Java for developing scalable ETL/ELT and data services.
  • Expert-level SQL and experience with modern data warehouses (e.g., Snowflake, BigQuery, Redshift).
  • Proven experience with streaming and orchestration frameworks (e.g., Kafka, Spark, Airflow, dbt).
  • Practical knowledge of Kubernetes, containerization, and CI/CD automation for data engineering workflows.
  • Experience supporting AI/ML data enablement, including feature pipelines, vector databases, and model-serving data requirements.
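
To make the data mesh vocabulary above concrete, here is a minimal sketch of what a data product contract can look like, written as a Python dataclass. This is an illustration under assumptions, not Apple's format: every name and field below is hypothetical, and real contracts are usually maintained as versioned schema files (e.g., Avro or JSON Schema) registered in a catalog.

    # Minimal illustrative data product contract under data mesh conventions.
    # All names and fields are hypothetical.
    from dataclasses import dataclass, field


    @dataclass
    class DataProductContract:
        name: str                      # catalog-unique product name
        owner_domain: str              # owning team (domain ownership)
        version: str                   # semantic version for schema evolution
        schema: dict = field(default_factory=dict)   # column name -> type
        sla_freshness_hours: int = 24  # max staleness consumers should see


    # Example: a sales-owned "orders" product that other domains can
    # discover in the catalog and consume against a stable schema.
    orders = DataProductContract(
        name="sales.orders_daily",
        owner_domain="sales",
        version="1.2.0",
        schema={"order_id": "string",
                "amount": "decimal(12,2)",
                "order_ts": "timestamp"},
    )

The point of such a contract is that producers can evolve a product (bumping the version) without silently breaking consumers, which is what "federated standards" and "domain ownership" buy in practice.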

Preferred Qualifications

  • Strong understanding of data quality, observability, and schema versioning in distributed environments.
  • Experience implementing or consuming data catalogs and governance frameworks (e.g., DataHub, Amundsen, Collibra).
  • Familiarity with open table formats (Iceberg, Delta, Hudi) and lakehouse architectures.
  • Experience building APIs or SDKs for data product publishing and consumption.
  • Exposure to self-serve analytics tools (Looker, Tableau, Streamlit) and BI use cases.
  • Passion for automation, clean code, and continuous learning in fast-moving data ecosystems.

Responsibilities

  • Design, build, and operate scalable, cloud-native data pipelines and services that deliver high-quality, domain-owned data products.
  • Implement data mesh principles by helping domains publish, discover, and consume data products using shared standards and infrastructure.
  • Build and maintain real-time and batch pipelines using tools like Spark, Kafka, and Airflow, ensuring reliability and performance at scale (a minimal pipeline sketch follows this list).
  • Develop metadata, lineage, and catalog integrations so data products are easily discoverable and trusted across domains.
  • Work directly with data producers and consumers to define schemas, contracts, and access patterns that improve interoperability.
  • Automate testing, validation, and deployment through CI/CD pipelines to ensure fast, consistent delivery of data products.
  • Monitor and troubleshoot data pipelines and systems, driving improvements in observability, scalability, and cost efficiency.
  • Collaborate closely with platform engineers to enhance self-serve tooling and streamline onboarding for new data domains.
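
For a sense of what these responsibilities look like in code, below is a minimal Airflow DAG sketch: a daily batch pipeline with a validation gate before the output is published as a data product. It assumes a recent Airflow 2.x (the schedule argument), and every DAG and task name (orders_daily, extract_orders, validate_orders, publish_orders) is hypothetical; a real pipeline would submit Spark jobs or consume Kafka topics rather than run placeholder functions.

    # Minimal illustrative Airflow DAG: extract -> validate -> publish.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_orders():
        # Placeholder: a real task might submit a Spark job or read a
        # Kafka topic. The return value is pushed to XCom automatically.
        return 42


    def validate_orders(ti):
        # Data quality gate: fail the run if extraction produced no rows,
        # so bad data never reaches downstream consumers.
        row_count = ti.xcom_pull(task_ids="extract_orders")
        if not row_count:
            raise ValueError("extract produced no rows; aborting publish")


    def publish_orders():
        # Placeholder for registering the validated output as a
        # discoverable data product (e.g., updating a catalog entry).
        print("orders data product published")


    with DAG(
        dag_id="orders_daily",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders",
                                 python_callable=extract_orders)
        validate = PythonOperator(task_id="validate_orders",
                                  python_callable=validate_orders)
        publish = PythonOperator(task_id="publish_orders",
                                 python_callable=publish_orders)

        extract >> validate >> publish

In the CI/CD setup the responsibilities describe, a DAG like this would typically be linted and import-tested in a pipeline before deployment, so broken DAGs never reach the scheduler.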

Tags & Categories

Hardware

