
Software Engineer, Data Engineering Careers at Grammarly - New York City, NY | Apply Now!

Grammarly

Full-time · Posted: Jan 21, 2026

Job Description

Software Engineer, Data Engineering at Grammarly - New York City (Hybrid)

Grammarly, now proudly part of Superhuman's AI productivity platform, is seeking an exceptional Senior Software Engineer, Data Engineering to join our world-class Data Platform team in New York City (hybrid). This is your chance to build scalable systems processing 60-70 billion daily events that power AI-driven products used by 40 million people worldwide.

Role Overview

In this senior individual contributor role, you'll architect the future of Grammarly's data platform, ensuring it scales effortlessly with our explosive growth. Our hybrid model in NYC or San Francisco offers the perfect balance of deep focus time and energizing in-person collaboration that sparks innovation and builds unbreakable team trust.

Superhuman's suite—including Grammarly's writing AI, Coda's workspaces, and our new Go AI assistant—demands a bulletproof data foundation. You'll design real-time ETL pipelines, data lakes, and backend services that fuel machine learning models, product analytics, and executive decision-making. This isn't just engineering; it's strategic platform leadership at the intersection of AI, big data, and cloud-native architecture.

With full ownership from strategy to production deployment, you'll influence Grammarly's technical roadmap while mentoring the next generation of data engineers. Join us to eliminate busywork for millions while mastering the world's most complex data challenges.

Key Responsibilities

  1. Lead architecture and implementation of data pipelines processing billions of daily events with sub-second latency guarantees.
  2. Design secure, scalable data lakes enabling real-time analytics and ML feature stores across petabyte-scale datasets.
  3. Drive high-level technical decisions on streaming platforms (Kafka/Flink), batch processing (Spark), and orchestration (Airflow).
  4. Collaborate cross-functionally with ML teams to productionize models requiring fresh training data pipelines.
  5. Optimize cloud costs while delivering 99.99% uptime through intelligent autoscaling and resource management.
  6. Mentor engineers through code reviews, technical design docs, and 1:1 coaching sessions.
  7. Build comprehensive observability stacks with Prometheus, Grafana, and custom alerting for proactive issue resolution.
  8. Lead migrations to event-driven architectures replacing legacy batch systems.
  9. Implement data governance frameworks ensuring GDPR/CCPA compliance at global scale.
  10. Contribute to open-source projects and Grammarly's technical blog, sharing battle-tested patterns.
  11. Partner with product managers to translate business KPIs into scalable data infrastructure requirements.
  12. Conduct post-mortems and chaos engineering to harden systems against rare-but-catastrophic failures.
  13. Represent the data platform in executive technical reviews that shape the three-year infrastructure roadmap.

Qualifications

Must-Have Technical Expertise:

  • 7+ years building production data systems at web-scale companies
  • Deep Spark/Kafka expertise with real-world streaming pipeline ownership
  • Cloud-native architecture experience (AWS preferred) with Terraform/Kubernetes
  • Advanced SQL + Python/Scala for complex data transformations

Leadership & Collaboration:

  • Proven mentoring track record elevating team velocity
  • Cross-functional partnership experience with ML/analytics stakeholders
  • Excellent communication translating technical complexity to business impact

Problem-Solving Mindset:

  • Thrives architecting under uncertainty with incomplete requirements
  • Obsessed with production excellence and operational reliability
  • Curious lifelong learner staying ahead of data engineering evolution

Salary & Benefits

Competitive Compensation: $185,000 - $245,000 base + bonus + equity (total comp $250K-$350K+)

Exceptional Benefits Package:

  • Unlimited PTO (20+ days encouraged annually)
  • Hybrid NYC/SF model with collaboration offsites
  • Top-tier medical/dental/vision + 401(k) match
  • 16 weeks parental leave + mental health support
  • $2K annual learning stipend + home office setup
  • Superhuman stock options in rocketship-growth company

Why Join Grammarly?

Grammarly isn't just a writing tool—it's the backbone of Superhuman's 40M-user AI empire. Our Data Platform team powers everything from real-time personalization to billion-parameter LLM training pipelines. You'll work with the smartest engineers solving problems no one else touches.

Culture That Delivers: We hire mission-aligned builders who ship fast and learn faster. No bureaucracy, maximum ownership. Weekly demos, quarterly hackathons, and direct CEO access keep us laser-focused on impact.

Technical Excellence: Latest cloud tech stack, generous conference budget, internal tech talks from industry luminaries. Our engineering blog showcases real war stories from scaling to 70B daily events.

Impact at Scale: Your pipelines don't just move data—they unlock AI superpowers for 50K enterprises and 3K universities worldwide.

How to Apply

Ready to architect the data platform powering AI's future? Submit your resume + GitHub/portfolio + answers to:

  1. Describe your most challenging production data pipeline project and key learnings.
  2. Walk through a system design for a 100 TB/day real-time analytics platform.
  3. Why are you passionate about data engineering at Grammarly's scale?

Our process: 30-min recruiter call → 45-min hiring manager → technical deep-dive → team pair programming → final leadership chat. Most candidates hear back within 48 hours.

Grammarly is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Locations

  • New York City, New York, United States
  • San Francisco, California, United States

Salary

Estimated Salary Range (high confidence)

194,250 - 269,500 USD / year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Skills Required

  • Data Engineering (intermediate)
  • ETL Pipelines (intermediate)
  • Apache Spark (intermediate)
  • Apache Kafka (intermediate)
  • AWS Cloud Infrastructure (intermediate)
  • Data Lakes (intermediate)
  • Real-time Data Processing (intermediate)
  • Python Programming (intermediate)
  • Scala (intermediate)
  • SQL Optimization (intermediate)
  • Big Data Architecture (intermediate)
  • Airflow Orchestration (intermediate)
  • Snowflake Data Warehousing (intermediate)
  • Kubernetes (intermediate)
  • Terraform IaC (intermediate)
  • Data Security & Compliance (intermediate)
  • Machine Learning Pipelines (intermediate)
  • Batch Processing (intermediate)
  • Stream Processing (intermediate)

Required Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • 7+ years of professional experience in data engineering or software engineering
  • Proven track record building scalable data pipelines handling billions of events daily
  • Deep expertise in ETL/ELT processes and real-time data streaming with Kafka or similar
  • Strong proficiency in Python, Scala, or Java for data processing applications
  • Hands-on experience with cloud platforms such as AWS, GCP, or Azure
  • Expertise in big data technologies such as Spark, Flink, or Hadoop ecosystems
  • Experience designing and maintaining data lakes and warehouses (Snowflake, Redshift, BigQuery)
  • Solid understanding of data modeling, schema design, and data governance
  • Demonstrated ability to architect secure, reliable, and cost-efficient data systems
  • Experience collaborating with data science, ML, and analytics teams
  • Strong mentoring skills and the ability to lead technical initiatives without direct authority

Responsibilities

  • Architect and lead development of large-scale data pipelines and data lakes handling 60-70B daily events
  • Design scalable solutions for real-time and batch data processing ensuring high availability and low latency
  • Make strategic architectural decisions on system design, technology stack, and platform evolution
  • Collaborate with cross-functional teams including backend, analytics, data science, and ML engineers
  • Implement robust data security measures, compliance standards, and access controls
  • Optimize data infrastructure for cost-efficiency while maintaining performance SLAs
  • Build and maintain cloud-native infrastructure using Terraform, Kubernetes, and managed services
  • Develop monitoring, alerting, and observability systems for data platform reliability
  • Mentor junior engineers and conduct code reviews to elevate team technical excellence
  • Drive data platform strategy and roadmap aligned with business objectives
  • Integrate data pipelines with product features and ML models for real-time insights
  • Troubleshoot and resolve complex production issues in high-scale distributed systems
  • Contribute to technical blog posts and represent engineering at industry events
  • Lead migrations from legacy systems to modern cloud-native data architectures

Benefits

  • Unlimited PTO with encouraged 20+ days annual usage
  • Hybrid work model combining focus time and in-person collaboration
  • Comprehensive medical, dental, and vision insurance coverage
  • 401(k) matching program up to 4% of base salary
  • Annual performance bonus potential up to 20%
  • Stock options in high-growth Superhuman (parent company)
  • Professional development stipend of $2,000 annually
  • Mental health support through dedicated counseling services
  • Parental leave: 16 weeks fully paid for primary caregivers
  • Home office stipend of $1,500 plus a monthly remote work allowance
  • Weekly team lunches and quarterly company offsites
  • Fitness reimbursement up to $100/month
  • Learning platform subscriptions (Pluralsight, O'Reilly, etc.)
  • Volunteer time off with 40 hours paid annually

Tags & Categories

software engineer data engineering jobs NYC, senior data engineer Grammarly careers, data platform engineer New York hybrid, ETL pipeline engineer Superhuman, big data architect jobs San Francisco, Apache Spark engineer careers, Kafka streaming engineer Grammarly, AWS data engineer New York City, scalable data pipeline jobs, real-time data processing careers, data lake architect hybrid jobs, senior IC data engineering, machine learning data pipelines NYC, cloud data infrastructure engineer, Snowflake data engineer careers, Kubernetes data platform jobs, petabyte scale data engineering, Grammarly engineering blog careers, Superhuman data team jobs, billion events daily data jobs, senior data engineer salary NYC, hybrid data engineering New York, Engineering

