
Principal Software Engineer - Azure AI Inferencing

Microsoft

Software and Technology Jobs

Full-time · Posted: Oct 9, 2025

Job Description

The Microsoft Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI inference platform to accelerate the research, development, and operations of AI-powered intelligent solutions at scale. This team owns the hosting, optimization, and scaling of the inference stack for all Azure AI Foundry models, including the latest from OpenAI, Grok, DeepSeek, and other OSS models.

Do you want to join a team entrusted with serving all internal and external ML workloads, solving real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal Gen AI models from OpenAI and other model providers? We already serve billions of inferences per day across the most cutting-edge AI scenarios in the industry. You will be joining the CoreAI Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and working on many exciting problems at the intersection of AI and cloud.

We're looking for a Principal Software Engineer - Azure AI Inferencing to drive the design, optimization, and scaling of our inference systems. In this role, you'll lead engineering efforts to ensure our largest models run with exceptional efficiency in high-throughput, low-latency environments. You will work on and influence multiple levels of the AI inference data-plane stack.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Locations

  • Redmond, Washington, United States

Salary

Estimated Salary Range (high confidence)

220,000 - 320,000 USD per year

Source: AI-estimated

* This is an estimated range based on market data and may vary based on experience and qualifications.

Required Qualifications

  • Bachelor's degree in Computer Science or related technical field AND 6+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, or Golang, OR equivalent experience.
  • 4+ years of practical experience working on high-scale, reliable online systems
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
  • Technical background and foundation in software engineering principles, distributed computing, and architecture
  • Experience with real-time online services with low latency and high throughput
  • Experience working with L7 network proxies and gateways
  • Knowledge of network architecture and concepts (HTTP and TCP protocols, authentication and sessions, etc.)
  • Knowledge of and experience with OSS, Docker, Kubernetes, C++, Golang, or equivalent programming languages
  • Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers
  • Ability to independently lead projects

Responsibilities

  • Lead the design and implementation of core inference infrastructure for serving frontier AI models in production.
  • Identify and drive improvements to end-to-end inference performance and efficiency of OpenAI and other state-of-the-art LLMs.
  • Lead the design and implementation of efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload.
  • Scale the platform to support growing inferencing demand and maintain high availability.
  • Deliver the critical capabilities required to serve the latest Gen AI models, such as GPT-5, Realtime audio, and Sora, and enable fast time-to-market for them.
  • Drive platform features that cater to the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies.
  • Collaborate with our partners, both internal and external.
  • Mentor engineers on distributed inference best practices.
  • Embody Microsoft's Culture and Values.
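Purely as an illustration of the kind of load-balancing problem described above (not part of the job description), here is a minimal least-outstanding-requests backend picker in Go; the `Backend` type and replica names are hypothetical, and a production router would layer model- and workload-aware signals (such as KV-cache reuse) on top of a baseline like this:

```go
package main

import "fmt"

// Backend tracks a model-serving replica and its in-flight request count.
type Backend struct {
	Name     string
	InFlight int
}

// PickLeastLoaded returns the backend with the fewest in-flight requests,
// a common baseline policy for routing inference traffic.
func PickLeastLoaded(backends []*Backend) *Backend {
	if len(backends) == 0 {
		return nil
	}
	best := backends[0]
	for _, b := range backends[1:] {
		if b.InFlight < best.InFlight {
			best = b
		}
	}
	return best
}

func main() {
	pool := []*Backend{
		{Name: "gpu-0", InFlight: 7},
		{Name: "gpu-1", InFlight: 2},
		{Name: "gpu-2", InFlight: 5},
	}
	chosen := PickLeastLoaded(pool)
	chosen.InFlight++ // account for the newly routed request
	fmt.Println(chosen.Name) // prints "gpu-1"
}
```

In practice, schedulers for LLM serving extend this with token-level cost estimates, since two "requests" can differ by orders of magnitude in generated length.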

Travel Requirements

3 days/week in-office

