
Software Engineer, Data Engineering Careers at Grammarly - San Francisco, CA | Apply Now!

Grammarly

Full-time | Posted: Jan 21, 2026

Job Description

Role Overview

Grammarly, now proudly part of Superhuman's AI productivity platform, is seeking an exceptional Senior Software Engineer, Data Engineering to join our Data Platform team in San Francisco, CA (hybrid). This is a rare opportunity to build world-class data infrastructure that powers AI-driven products serving over 40 million users worldwide.

Our Data Platform team processes 60-70 billion daily events, fueling everything from real-time analytics to machine learning model training. As a Senior Data Engineer, you'll architect scalable systems, lead complex projects, and collaborate with top engineering talent across Superhuman's suite of products—including Grammarly's writing assistance, Coda workspaces, and our proactive AI assistant, Go.

This senior IC role offers strategic influence over platform architecture while providing hands-on technical leadership. You'll work hybrid from our beautiful San Francisco office (NYC candidates also welcome), enjoying the perfect balance of deep focus time and energizing in-person collaboration.

Key Responsibilities

In this high-impact role, you'll drive the evolution of our data platform through hands-on engineering and strategic leadership:

  • Architect large-scale data pipelines and data lakes handling billions of daily events with sub-second latency requirements
  • Design and implement ETL/ELT workflows using Spark, Kafka, and Airflow for both real-time streaming and batch processing (see the streaming sketch after this list)
  • Lead cloud infrastructure development on AWS, leveraging Kubernetes, Terraform, and serverless technologies
  • Collaborate with Data Science and ML teams to productionize models serving millions of daily predictions
  • Make critical architectural decisions ensuring 99.99% uptime, cost efficiency, and future scalability
  • Mentor engineers through code reviews, technical design reviews, and knowledge sharing
  • Implement comprehensive data governance, security controls, and compliance frameworks (GDPR, CCPA)
  • Optimize petabyte-scale data storage costs while maintaining query performance SLAs
  • Build observability platforms with advanced monitoring, alerting, and automated incident response
  • Drive technical strategy and roadmap planning for Superhuman's unified data platform
  • Lead system migrations from legacy infrastructure to modern cloud-native architectures
  • Partner cross-functionally with Product, Analytics, and Business Intelligence teams
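
To give a concrete, purely illustrative sense of the streaming ETL work described above, the Python sketch below reads JSON events from Kafka with Spark Structured Streaming and lands them as date-partitioned Parquet. The broker address, topic name, event schema, and storage paths are hypothetical placeholders rather than details of Grammarly's actual pipeline, and the job assumes the spark-sql-kafka connector package is on the classpath.

# Minimal sketch: consume JSON events from Kafka and land them as partitioned Parquet.
# Broker, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("event-ingest-sketch").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # placeholder broker
    .option("subscribe", "product-events")            # placeholder topic
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date("occurred_at"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-data-lake/events/")                      # placeholder path
    .option("checkpointLocation", "s3a://example-data-lake/_checkpoints/events/")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()

A production pipeline at this scale would add schema-registry integration, dead-letter handling, and file compaction, but the overall shape of the job stays the same.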

Qualifications

We're looking for battle-tested data engineers who thrive on complexity and deliver results at scale:

  • 7+ years in data engineering/software engineering with proven big data systems experience
  • Deep expertise building production ETL pipelines processing billions of events daily
  • Mastery of Python, SQL, Spark, Kafka, and modern data orchestration tools
  • Hands-on experience architecting data lakes/warehouses (Snowflake, Redshift, BigQuery)
  • Strong cloud platform experience (AWS preferred) with IaC proficiency
  • Demonstrated ability to lead without authority across engineering teams
  • Experience productionizing ML workflows and collaborating with research teams
  • Expertise in distributed systems design, fault tolerance, and performance optimization
  • Track record mentoring engineers and conducting effective code reviews
  • BS/MS in Computer Science or equivalent practical experience

Salary & Benefits

Salary Range: $220,000 - $320,000 USD base salary (San Francisco, CA), plus equity and bonus.

Our comprehensive compensation package includes:

  • Annual performance bonus (15-25% of base)
  • Significant equity in Superhuman (pre-IPO unicorn)
  • Top-tier medical, dental, vision coverage (90%+ premiums paid)
  • 401(k) with 4% match, immediate vesting
  • Unlimited PTO, with a minimum of 20 days of usage encouraged annually
  • Hybrid SF office with daily catered meals
  • $2,000 annual L&D stipend + conference budget
  • 16 weeks paid parental leave
  • Fitness stipend, mental health support, visa sponsorship

Why Join Grammarly?

Grammarly isn't just another tech company—we're building the future of AI productivity. Here's why thousands of engineers choose us:

  • Mission Impact: Your data platform powers products used by 40M+ people daily
  • Technical Excellence: Work on problems few companies face (60B+ daily events)
  • Ownership Culture: Senior ICs shape strategy without people management
  • Best Talent: Collaborate with ex-FAANG, research PhDs, and serial founders
  • Superhuman Values: Focus on craft, kindness, and ambitious goals
  • Growth Opportunities: Clear paths to principal/staff engineering roles

Read our engineering values and technical blog to see what sets us apart.

How to Apply

Ready to build the data platform powering the world's most successful AI productivity suite?

  1. Submit your resume and a brief note about your most impactful data project
  2. Complete our 45-minute technical assessment (Python/SQL data engineering)
  3. Technical deep-dive with Data Platform engineering leads
  4. Cross-functional collaboration interview
  5. Final leadership chat and offer

We hire fast—most candidates receive offers within two weeks. San Francisco- or NYC-based candidates only (hybrid required). Visa sponsorship available for exceptional candidates.

Apply Now - Join Grammarly's Data Team

Grammarly is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Locations

  • San Francisco, California, United States
  • New York City, New York, United States

Salary

220,000 - 320,000 USD / yearly

Estimated Salary Range (high confidence)

231,000 - 352,000 USD / yearly

Source: AI-estimated*

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Data Engineering (intermediate)
  • ETL Pipelines (intermediate)
  • Apache Spark (intermediate)
  • Apache Kafka (intermediate)
  • AWS Cloud Infrastructure (intermediate)
  • Data Lakes (intermediate)
  • Real-time Data Processing (intermediate)
  • Scalable System Architecture (intermediate)
  • Python Programming (intermediate)
  • SQL Optimization (intermediate)
  • Big Data Technologies (intermediate)
  • Airflow Orchestration (intermediate)
  • Snowflake Data Warehousing (intermediate)
  • Kubernetes Orchestration (intermediate)
  • Machine Learning Pipelines (intermediate)
  • Data Security & Compliance (intermediate)
  • Infrastructure as Code (intermediate)
  • Batch Processing Systems (intermediate)
  • Data Governance (intermediate)
  • Terraform IaC (intermediate)

Required Qualifications

  • 7+ years of professional experience in data engineering or software engineering roles
  • Proven track record building and scaling data pipelines handling billions of events daily
  • Deep expertise in ETL/ELT processes, real-time streaming, and batch data processing
  • Strong proficiency in Python, SQL, and at least one big data framework (Spark, Flink, Kafka)
  • Experience designing and implementing scalable data lakes and data warehouses (Snowflake, Redshift, BigQuery)
  • Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Kubernetes)
  • Demonstrated ability to architect complex distributed systems with high availability and fault tolerance
  • Experience collaborating with data science, ML, and analytics teams on production pipelines
  • Strong understanding of data security, compliance (GDPR, CCPA), and access controls
  • Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
  • Experience mentoring junior engineers and leading technical initiatives
  • Excellent problem-solving skills and ability to make high-impact architectural decisions

Responsibilities

  • Architect and lead development of large-scale data pipelines processing 60-70B daily events
  • Design scalable data lakes and warehouses supporting real-time and batch analytics workloads
  • Implement robust ETL/ELT pipelines using Apache Spark, Kafka, and Airflow (a minimal DAG sketch follows this list)
  • Optimize data infrastructure for cost-efficiency, performance, and reliability at massive scale
  • Collaborate with ML/Data Science teams to productionize machine learning models and features
  • Make strategic architectural decisions on technology stack, system design, and platform evolution
  • Build and maintain cloud infrastructure using IaC tools (Terraform, CloudFormation)
  • Ensure data security, governance, and compliance across all data platforms
  • Mentor junior engineers and conduct code reviews for data engineering best practices
  • Monitor and troubleshoot production systems handling petabyte-scale data volumes
  • Partner with product and analytics teams to define data requirements and SLAs
  • Contribute to technical strategy and roadmap for Superhuman's data platform
  • Develop observability and monitoring solutions for data pipeline health
  • Lead migrations and upgrades of legacy data systems to modern cloud-native architectures
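
As a rough illustration of the Airflow orchestration called out in the list above, the sketch below defines a daily DAG that runs a Spark aggregation job and then gates on a simple row-count check. The DAG id, task names, spark-submit command, and job script are assumptions made for illustration; they do not describe Grammarly's production setup.

# Minimal Airflow DAG sketch: daily Spark aggregation followed by a data-quality gate.
# DAG id, schedule, commands, and paths are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_row_count(**context):
    # Placeholder quality gate; a real check would query the warehouse for the day's count.
    row_count = 1_000_000  # pretend result for the sketch
    if row_count == 0:
        raise ValueError("Daily aggregate produced zero rows")


with DAG(
    dag_id="daily_event_aggregation_sketch",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    aggregate = BashOperator(
        task_id="spark_aggregate",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "jobs/aggregate_events.py --date {{ ds }}"  # hypothetical job script
        ),
    )

    quality_check = PythonOperator(
        task_id="row_count_check",
        python_callable=check_row_count,
    )

    aggregate >> quality_check

In practice the quality gate would read an actual count from the warehouse or a metrics table, and a failure would feed the alerting and incident-response tooling mentioned in the observability responsibility above.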

Benefits

  • Competitive salary range $220K-$320K based on experience and location
  • Annual performance bonus and equity in a fast-growing AI company
  • Comprehensive medical, dental, and vision insurance coverage
  • 401(k) matching program with immediate vesting
  • Unlimited PTO, with 20+ days of annual usage encouraged
  • Hybrid work model in premium San Francisco or NYC offices
  • Daily catered lunch, snacks, and beverages in office
  • $2,000 annual learning & development stipend
  • Fitness reimbursement up to $100/month
  • Parental leave: 16 weeks fully paid for primary caregivers
  • Mental health support through dedicated counseling services
  • Visa sponsorship for qualified candidates
  • Regular team offsites and company-wide retreats
  • Generous employee referral bonus program

Tags & Categories

software engineer data engineering jobs san francisco, senior data engineer grammarly careers, data platform engineer superhuman, etl engineer jobs bay area, apache spark engineer san francisco, data pipeline architect california, aws data engineer hybrid jobs, real-time data processing engineer, big data engineer grammarly, kafka spark python data jobs, senior data engineering salary sf, data lake architect careers, ml data pipeline engineer, scalable data systems engineer, cloud data infrastructure jobs, data engineering manager ic sf, petabyte scale data engineer, grammarly engineering careers, superhuman data platform jobs, senior ic data engineer bay area, high scale data engineering roles, ai data platform engineer, Engineering
