
ML Platform Engineer - Feature Store, LMTS

Salesforce

Software and Technology Jobs


Full-time | Posted: Oct 27, 2025

Job Description

We are seeking a highly skilled and motivated AI Platform Engineer with a focus on Feature Store development and management to join our growing AI/ML platform team. In this role, you will design, build, and scale the data and infrastructure components that power our machine learning ecosystem, enabling consistent, reliable, and real-time access to features across development, training, and production environments. You’ll collaborate closely with data scientists, ML engineers, and data platform teams to streamline feature engineering workflows and ensure seamless integration between offline and online data sources. You’ll be expected to work across multiple domains, including data architecture, distributed systems, software engineering, and MLOps. You will help define and implement best practices for feature registration, drift monitoring, governance, lineage tracking, and versioning, all while contributing to the CI/CD automation that supports feature deployment across environments.

What You’ll Do (Key Responsibilities)

  • Feature Store Design & Development: Architect, implement, and maintain a scalable feature store serving offline (batch), online (real-time), and streaming ML use cases.
  • Ecosystem Integration: Build robust integrations between the feature store and ML ecosystem components such as data pipelines, model training workflows, model registry, and model serving infrastructure.
  • Streaming & Real-Time Data Processing: Design and manage streaming pipelines using technologies like Kafka, Kinesis, or Flink to enable low-latency feature generation and real-time inference.
  • Feature Governance & Lineage: Define and enforce governance standards for feature registration, metadata management, lineage tracking, and versioning to ensure data consistency and reusability.
  • Collaboration with ML Teams: Partner with data scientists and ML engineers to streamline feature discovery, definition, and deployment workflows, ensuring reproducibility and efficient model iteration.
  • Data Pipeline Engineering: Build and optimize ingestion and transformation pipelines that handle large-scale data while maintaining accuracy, reliability, and freshness.
  • CI/CD Automation: Implement CI/CD workflows and infrastructure-as-code to automate feature store provisioning and feature promotion across environments (Dev → QA → Prod).
  • Cloud Infrastructure & Security: Collaborate with platform and DevOps teams to ensure secure, scalable, and cost-effective operation of feature store and streaming infrastructure in cloud environments.
  • Monitoring & Observability: Develop monitoring and alerting frameworks to track feature data quality, latency, and freshness across offline, online, and streaming systems.

What We’re Looking For

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering, platform engineering, or MLOps roles.
  • Strong proficiency in Python and familiarity with distributed data frameworks such as Airflow, Spark, or Flink.
  • Hands-on experience with feature store technologies (e.g., Feast, SageMaker Feature Store, Tecton, Databricks Feature Store, or custom implementations).
  • Experience with cloud data warehouses (e.g., Snowflake) and transformation frameworks (e.g., dbt) for data modeling, transformation, and feature computation in batch environments.
  • Expertise in streaming data platforms (e.g., Kafka, Kinesis, Flink) and real-time data processing architectures.
  • Experience with cloud environments (AWS preferred) and infrastructure-as-code tools (Terraform, CloudFormation).
  • Strong understanding of CI/CD automation, containerization (Docker, Kubernetes), and API-driven integration patterns.
  • Knowledge of data governance, lineage tracking, and feature lifecycle management best practices.
  • Excellent communication skills, a collaborative mindset, and a strong sense of ownership.

Preferred Qualifications (Bonus Points)

  • Experience with the Salesforce ecosystem
  • Open-source contributions or experience in feature store ecosystem development
  • Experience with unstructured databases (vector or graph databases) and RAG pipelines
  • Experience with context engineering, including structuring data, prompts, and logic for AI systems and managing memory and external knowledge
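For orientation, here is a minimal, hypothetical sketch of the kind of feature registration and versioning work the description refers to, using Feast, one of the feature store technologies the posting names. The entity, feature names, and data source below are illustrative assumptions, not details from the posting.

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# Hypothetical entity keyed on a customer identifier (illustrative only).
customer = Entity(name="customer", join_keys=["customer_id"])

# Offline (batch) source; in practice this could be a Snowflake table
# populated by dbt models rather than a local Parquet file.
order_stats_source = FileSource(
    path="data/customer_order_stats.parquet",
    timestamp_field="event_timestamp",
)

# Registering a feature view makes these columns discoverable, versioned,
# and servable both offline (training) and online (real-time inference).
customer_order_stats = FeatureView(
    name="customer_order_stats",
    entities=[customer],
    ttl=timedelta(days=1),  # bounds how far back online lookups consider events
    schema=[
        Field(name="order_count_7d", dtype=Int64),
        Field(name="avg_order_value_30d", dtype=Float32),
    ],
    online=True,  # materialize to the online store for low-latency serving
    source=order_stats_source,
)
```

In a Feast-based setup, definitions like this are applied with `feast apply` and promoted across environments through the kind of CI/CD automation the posting mentions; other feature stores (Tecton, SageMaker Feature Store, Databricks Feature Store) expose analogous registration APIs.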

Locations

  • New York, New York

Salary

Estimated Salary Range (high confidence)

180,000 - 250,000 USD per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Proficiency in Python (intermediate)
  • Familiarity with distributed data frameworks such as Airflow, Spark, or Flink (intermediate)
  • Experience with feature store technologies such as Feast, SageMaker Feature Store, Tecton, Databricks Feature Store, or custom implementations (intermediate)
  • Experience with cloud data warehouses, e.g., Snowflake (intermediate)
  • Experience with transformation frameworks, e.g., dbt (intermediate)
  • Expertise in streaming data platforms, e.g., Kafka, Kinesis, Flink (intermediate)
  • Experience with real-time data processing architectures (intermediate)
  • Experience with cloud environments, AWS preferred (intermediate)
  • Experience with infrastructure-as-code tools such as Terraform and CloudFormation (intermediate)
  • Understanding of CI/CD automation (intermediate)
  • Understanding of containerization with Docker and Kubernetes (intermediate)
  • Understanding of API-driven integration patterns (intermediate)
  • Knowledge of data governance (intermediate)
  • Knowledge of lineage tracking (intermediate)
  • Knowledge of feature lifecycle management best practices (intermediate)

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering, platform engineering, or MLOps roles.
  • Strong proficiency in Python and familiarity with distributed data frameworks such as Airflow, Spark, or Flink.
  • Hands-on experience with feature store technologies (e.g., Feast, SageMaker Feature Store, Tecton, Databricks Feature Store, or custom implementations).
  • Experience with cloud data warehouses (e.g., Snowflake) and transformation frameworks (e.g., dbt) for data modeling, transformation, and feature computation in batch environments.
  • Expertise in streaming data platforms (e.g., Kafka, Kinesis, Flink) and real-time data processing architectures.
  • Experience with cloud environments (AWS preferred) and infrastructure-as-code tools (Terraform, CloudFormation).
  • Strong understanding of CI/CD automation, containerization (Docker, Kubernetes), and API-driven integration patterns.
  • Knowledge of data governance, lineage tracking, and feature lifecycle management best practices.
  • Excellent communication skills, a collaborative mindset, and a strong sense of ownership.

Preferred Qualifications

  • Experience with the Salesforce ecosystem
  • Open-source contributions or experience in feature store ecosystem development
  • Experience with unstructured databases (vector or graph databases) and RAG pipelines
  • Experience with context engineering, including structuring data, prompts, and logic for AI systems and managing memory and external knowledge

Responsibilities

  • Feature Store Design & Development: Architect, implement, and maintain a scalable feature store serving offline (batch), online (real-time), and streaming ML use cases.
  • Ecosystem Integration: Build robust integrations between the feature store and ML ecosystem components such as data pipelines, model training workflows, model registry, and model serving infrastructure.
  • Streaming & Real-Time Data Processing: Design and manage streaming pipelines using technologies like Kafka, Kinesis, or Flink to enable low-latency feature generation and real-time inference.
  • Feature Governance & Lineage: Define and enforce governance standards for feature registration, metadata management, lineage tracking, and versioning to ensure data consistency and reusability.
  • Collaboration with ML Teams: Partner with data scientists and ML engineers to streamline feature discovery, definition, and deployment workflows, ensuring reproducibility and efficient model iteration.
  • Data Pipeline Engineering: Build and optimize ingestion and transformation pipelines that handle large-scale data while maintaining accuracy, reliability, and freshness.
  • CI/CD Automation: Implement CI/CD workflows and infrastructure-as-code to automate feature store provisioning and feature promotion across environments (Dev → QA → Prod).
  • Cloud Infrastructure & Security: Collaborate with platform and DevOps teams to ensure secure, scalable, and cost-effective operation of feature store and streaming infrastructure in cloud environments.
  • Monitoring & Observability: Develop monitoring and alerting frameworks to track feature data quality, latency, and freshness across offline, online, and streaming systems.
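As a loose illustration of the Monitoring & Observability responsibility above, the sketch below checks feature freshness against per-feature-view SLAs. The feature view names, SLA values, and alerting hook are hypothetical assumptions for the example, not part of the posting.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-feature-view freshness SLAs (assumed values).
FRESHNESS_SLA = {
    "customer_order_stats": timedelta(hours=2),
    "session_click_counts": timedelta(minutes=5),
}

def check_freshness(latest_event_ts: dict[str, datetime]) -> list[str]:
    """Return alert messages for feature views whose newest event timestamp
    exceeds its freshness SLA. `latest_event_ts` maps a feature view name to
    the max event timestamp observed in the offline/online store."""
    now = datetime.now(timezone.utc)
    alerts = []
    for view, sla in FRESHNESS_SLA.items():
        latest = latest_event_ts.get(view)
        if latest is None or now - latest > sla:
            alerts.append(f"{view}: stale (last event {latest}, SLA {sla})")
    return alerts

if __name__ == "__main__":
    observed = {
        "customer_order_stats": datetime.now(timezone.utc) - timedelta(hours=3),
        "session_click_counts": datetime.now(timezone.utc) - timedelta(minutes=1),
    }
    for alert in check_freshness(observed):
        print(alert)  # in practice this would feed an alerting/paging system
```

A production version would pull the latest event timestamps from the feature store's metadata or a warehouse query and emit metrics to the monitoring stack rather than printing to stdout.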

Tags & Categories

Software Engineering
