Understanding the Microsoft governance model - AETHER + Office of Responsible AI

In keeping with our culture of integrity and trust at Microsoft, we knew it was essential to create a governance structure that would enable responsible AI innovation from the ground up. Our still-evolving governance system leverages what we have learned from our privacy and cybersecurity efforts, and relies upon centralized and decentralized functions to put our responsible AI principles into practice. From the senior leadership team to developers, field sellers and beyond, everyone is empowered to play a role in fostering responsible AI capabilities.

Learn about how we structure our responsible AI governance system and consider how you could apply these ideas in your own organization.

Introduction

While we knew that engaging with AI would require careful consideration and oversight, we didn’t have a perfect responsible AI governance system on day one. In fact, our governance system continues to evolve to this day. And that’s as it should be. A governance system should adapt to the changing nature of technology and the business.

From the beginning, we created a tight loop between engineering and our responsible AI advisory committee called Aether (AI, Ethics, and Effects in Engineering and Research). Engineering leadership serves on the committee and engineering practitioners are included in Aether working groups. We recommend forming an advisory committee as soon as you invest in AI engineering (even if it’s a year or more before products hit the market).

Initially, to govern responsible AI across the company, the Aether committee reviewed and advised on sensitive AI use cases as they arose. We have built upon this early case review model to add company-wide rules that set out practices designed to uphold our principles, education and training materials, a Responsible AI Champs program, and even tools that plug directly into sales and engineering workflows to promote responsible AI from the ground up.

Microsoft responsible AI governance structure

Our governance structure today uses a hub-and-spoke model to provide the accountability and authority to drive initiatives while also enabling responsible AI policies to be implemented at scale.

Centralized governance

The Senior Leadership Team is ultimately accountable for the company’s direction on responsible AI, setting the company’s AI principles, values, and human rights commitments. Building on our culture of integrity and trust, this group is the final decision-maker on the most sensitive, novel, and significant AI development and deployment matters.

Our commitment to responsible AI governance is administered, implemented, and maintained by the Office of Responsible AI. The office works with stakeholders across the company to develop and maintain our governance framework, define roles and responsibilities for governing bodies, implement a company-wide reporting and decision-making process, and orchestrate responsible AI training for all employees.

The Office of Responsible AI has four key functions:

  • Internal policy: Setting the company-wide rules for enacting responsible AI, and defining roles and responsibilities for teams involved in this effort.
  • Enablement: Building readiness to adopt responsible AI practices, both within our company and among our customers and partners.
  • Case management: Reviewing sensitive use cases to help ensure that our AI principles are upheld in our development and deployment work.
  • Public policy: Helping to shape the new laws, norms, and standards needed to ensure that the promise of AI technology is realized for the benefit of society at large.

The Aether Committee serves an advisory role to the senior leadership and the Office of Responsible AI on questions, challenges, and opportunities with the development and fielding of AI technologies. Aether also provides guidance to teams across the company to ensure that AI products and services align with our AI principles. The committee brings together top talent in technology, ethics, law, and policy from across Microsoft to formulate recommendations on policies, processes, and best practices.

The Aether Committee has six working groups that focus on specific topics, grounded in our AI principles. The working groups play a key role in developing tools, best practices, and tailored implementation guidance related to their respective areas of expertise. Learnings from the working groups and main committee have resulted in Microsoft developing new policies, and in some cases either declining or placing limits on specific customer engagements where AI-related risks were high.

Together, Aether and the Office of Responsible AI work closely with our engineering and sales teams to help them uphold our AI principles in their day-to-day work. An important hallmark of our approach to responsible AI is having this ecosystem to operationalize responsible AI across the company, rather than a single organization or individual leading this work. Our approach to responsible AI also leverages our process of building privacy and security into all products and services from the start.

Decentralized governance

Enacting responsible AI at scale relies on a strong network across the company to help implement organization-wide rules, drive awareness, and request support on issues that raise questions about the application of our AI principles.

Our network includes Responsible AI Champs: employees nominated by their leadership teams from within key engineering and field teams to serve as responsible AI advisors, in addition to their full-time roles. Rather than acting as a policing function, Champs serve in an advisory capacity to help inform decision-makers. The Responsible AI Champs have five key functions:

  • Raising awareness of responsible AI principles and practices within teams and workgroups
  • Helping teams and workgroups implement prescribed practices throughout the AI feature, product or service lifecycle
  • Advising leaders on the benefit of responsible AI development – and the potential impact of unintended harms
  • Identifying and escalating questions and sensitive uses of AI through available channels
  • Fostering a culture of customer-centricity and global perspective, by growing a community of Responsible AI evangelists in their organizations and beyond

To develop and deploy AI with minimal friction to engineering practices and customers, we are investing in patterns, practices, and tools. Some engineering groups have assembled teams to help them follow the company-wide rules and accelerate the development of implementation patterns, practices, and tools.

The final and perhaps most important part of our approach to responsible AI is the role that every employee plays, with support from their managers and business leaders. Responsible AI is a key part of mandatory employee training, and we have released additional educational assets that enable employees to delve deeper into areas of responsible AI. We also offer numerous responsible AI development tools to help our employees develop responsibly. With these resources, all our employees are empowered to advance the company’s important work with AI and, at the same time, are responsible for upholding our responsible AI principles and following the company-wide practices we have adopted in pursuit of that end.

Policies and procedures

To help every Microsoft employee live up to our commitment to developing and deploying AI responsibly, we have created principles, a sensitive use framework, and company-wide rules that clarify the company’s expectations for AI development and deployment.

We expect every Microsoft employee to:

  • Develop a general understanding of our AI principles
  • Report and escalate sensitive uses
  • Contact their Responsible AI Champ when they need guidance on responsible AI

These ideas will be covered in more depth in the Governance in action at Microsoft unit.

Responsible AI Standard

To implement responsible AI practices, the policy requirements, procedures, and tools need to be tightly embedded in the AI development lifecycle an organization already uses. In a recent study conducted by Microsoft, researchers found that while there are consistent stages in the AI development process, there is no consistent or common set of tools that individuals use across the discipline.1 To assist teams in implementing policies and procedures, and to further operationalize our approach to responsible AI, Microsoft developed the Responsible AI Standard, which outlines the steps teams must follow to support the responsible development and deployment of AI systems. A key part of the Standard is the Responsible AI Lifecycle (RAIL), which guides engineering teams through responsible AI considerations at each stage of the AI development lifecycle. We’re still in learning mode with the Responsible AI Standard as we pilot it in different engineering and sales teams across the company, and we look forward to sharing our learnings with you in time.

Up next, see our governance model in action in flagging and addressing sensitive use cases of AI.