Summary and resources

As AI becomes more prevalent, it's imperative that organizations have processes in place to ensure that it's used responsibly. While we recognize that we don't have all of the answers, we hope that our experience and perspective will prove valuable to others as they embark on their own AI journey.

Now that you have reviewed this module, you should be able to:

  • Explain how Microsoft provides accountability for responsible AI through a governance structure.
  • Describe how Microsoft risk management processes are used to identify, assess, and mitigate risks.
  • Establish responsible design principles within your own organization.

Use these resources to discover more

Tip

To open a hyperlink, right-click and choose "Open in new tab or window." That way you can check out the resource and easily return to the module.

To learn more about our perspective on responsible AI as well as the impact of AI on our future, read our book The Future Computed.

  • Download PDF of Understanding AI governance at Microsoft.
  • Download PDF of Governance in action at Microsoft.
  • Download PDF of Establishing responsible design principles in AI engineering to share with others.
  • Download PDF of Engaging externally: AI for Good to share with others.
  • Download PDF of Putting principles into practice: how we approach responsible AI at Microsoft.

Below are some additional resources your organization can leverage when developing your own governance model for responsible AI.

Skill up resources

  • The 2018 WEF Future of Jobs Report notes that many companies have focused their upskilling and retraining efforts on people who already have higher skills and value to the company.
  • Developer-focused AI School, which provides online videos and other assets that help build professional AI skills.
  • The Skillful Initiative, a partnership with the Markle Foundation in the US, helps match people with employers and fill high-demand jobs.

Microsoft programs

AI for Good includes four programs: AI for Accessibility, AI for Earth, AI for Humanitarian Action, and AI for Cultural Heritage, which together support nearly 250 projects across the globe. To learn how to protect against new AI-specific security threats, read our paper Securing the Future of Artificial Intelligence and Machine Learning at Microsoft, and follow along with the news. For example, last year we publicly called for regulation of facial recognition technology and outlined our recommendations for the public and private sectors alike.

Principles and guidelines

Microsoft's responsible AI journey began when we established six key principles to guide our development and use of AI, which are outlined in The Future Computed: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

With these foundational principles in place, we began developing more scenario-specific guidelines. For example, in May 2019 we published a paper called Guidelines for Human-AI Interaction, which includes 18 generally applicable design guidelines to help developers design responsible and human-centered AI systems. In addition to this key resource, we have published a number of other guidelines and principles, including the ones below:

Engineering tools for responsible AI

Security and privacy

Open-source code
  • Homomorphic encryption is a special type of encryption technique that allows users to compute on encrypted data without decrypting it. The results of the computations are encrypted and can be revealed only by the owner of the decryption key. To further the use of this important encryption technique, we developed Microsoft SEAL and made it open-source.
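To make "compute on encrypted data" concrete, the sketch below implements a toy additively homomorphic scheme (Paillier) in plain Python. It is deliberately tiny and insecure, and it is not the Microsoft SEAL API (SEAL implements lattice-based schemes); it only demonstrates the property that ciphertexts can be combined and the result revealed by the key holder alone.

```python
# Toy Paillier cryptosystem: multiplying ciphertexts adds the plaintexts.
# Insecure demo (tiny primes) and NOT the Microsoft SEAL API.
import secrets
from math import gcd

p, q = 293, 433                  # real deployments use ~2048-bit primes
n = p * q
n_sq = n * n
g = n + 1                        # standard simple generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1): secret key
mu = pow(lam, -1, n)             # with g = n+1, L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:        # r must be invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (pow(c, lam, n_sq) - 1) // n * mu % n

a, b = 17, 25
c_sum = encrypt(a) * encrypt(b) % n_sq   # addition performed on ciphertexts
print(decrypt(c_sum))                    # 42, revealed only by the key holder
```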

Technologies
  • Secure execution environments such as Azure confidential computing help users secure data while it's "in use" on public cloud platforms (a state required for efficient processing). The data is protected inside a Trusted Execution Environment (TEE), also known as an enclave, such that code and data are protected against viewing and modification from outside of the TEE. This has a number of benefits, including the ability to train AI models using data sources from different organizations without sacrificing data confidentiality.
    • The Azure team has worked with Microsoft Research, Intel, Windows, and our Developer Tools group to develop our confidential computing solution, which enables developers to take advantage of different TEEs without having to change their code.
    • The Open Enclave SDK project provides a consistent API surface for developing apps using enclave-based computing.
  • Multi-party computation (MPC) allows a set of parties to share encrypted data and algorithms with each other while preserving input privacy and ensuring that no party sees information about other members. For example, with MPC we can build a system that analyzes data from three different hospitals without any of them gaining access to each other's health data (a toy sketch of the secret sharing behind MPC follows this list).
  • Differential privacy is a key technology for training machine learning models using private data. A differentially private algorithm uses random noise to ensure that the model output doesn't noticeably change when one individual in the dataset changes. This prevents attackers from inferring an individual's private information from the model's output.
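As a toy illustration of the idea behind MPC, the sketch below uses additive secret sharing, a building block of many MPC protocols. The hospital values are hypothetical, and real MPC frameworks add communication, authentication, and protection against malicious parties; this only shows why no single party learns another's input.

```python
# Toy additive secret sharing: three hospitals jointly compute a sum of
# private counts without revealing any individual value. Illustration only.
import secrets

MOD = 2**61 - 1                      # all arithmetic over a large modulus

def share(value, n_parties=3):
    """Split value into n_parties random shares that sum to it mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

hospital_values = [120, 87, 240]     # each hospital's private input
all_shares = [share(v) for v in hospital_values]

# Party i receives only the i-th share of each input; any proper subset
# of a value's shares is statistically independent of that value.
partial_sums = [sum(s[i] for s in all_shares) % MOD for i in range(3)]
print(sum(partial_sums) % MOD)       # 447, with no raw value ever disclosed
```

Differential privacy can be sketched just as compactly. The Laplace mechanism below answers a counting query; the dataset, predicate, and epsilon are illustrative, and production systems involve much more careful privacy accounting.

```python
# Minimal Laplace mechanism: a noisy count that satisfies
# epsilon-differential privacy. Values below are illustrative.
import numpy as np

def private_count(data, predicate, epsilon):
    # A count has sensitivity 1: adding or removing one person changes it
    # by at most 1, so Laplace noise with scale 1/epsilon suffices.
    true_count = sum(1 for x in data if predicate(x))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 45, 29, 61, 52, 38, 47]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```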

Fairness

Guidance
  • Read this paper from the ACM Conference on Fairness, Accountability, and Transparency, Fairness and Abstraction in Sociotechnical Systems, which explains five key "traps" of fair-ML work and how to avoid them.
  • Read this paper from Cornell University, Counterfactual Fairness, for an example of a framework for modeling fairness using tools from causal inference, and how it applies to the fair prediction of student success in law school.

Open-source code
  • Fairness in Machine Learning (ML) Systems (Fairlearn) is an approach created by Microsoft Research and co-developed with product teams. Fairlearn can be used to assess the potential unfairness of ML systems that make decisions about allocating resources, opportunities, or information. Fairness is a fundamentally sociotechnical challenge, so "fair" classification tools are not be-all-and-end-all solutions, and they are appropriate only in particular, limited circumstances. A Python package that implements this approach is available on GitHub (a hedged usage sketch follows this list).
    • For example, consider an ML system tasked with choosing applicants to interview for a job. FairLearn can turn a classifier that predicts who should be interviewed based on previous hiring decisions into a classifier that predicts who should be interviewed while also respecting demographic parity (or another fairness definition).
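To make the interview example concrete, here is a hedged sketch of Fairlearn's reductions approach on synthetic data. The class names (ExponentiatedGradient, DemographicParity) follow Fairlearn's published interface, but treat the details as assumptions and consult the GitHub repository for the current API.

```python
# Illustrative Fairlearn usage: wrap a standard classifier so its
# predictions approximately satisfy demographic parity. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # applicant features
sensitive = rng.integers(0, 2, size=500)       # a demographic attribute
# Past hiring decisions, deliberately correlated with the attribute:
y = (X[:, 0] + 0.5 * sensitive > 0).astype(int)

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),           # or another fairness definition
)
mitigator.fit(X, y, sensitive_features=sensitive)
interview = mitigator.predict(X)               # parity-respecting decisions
```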

Further training
  • To understand the unique challenges regarding fairness in ML, watch our free webinar on Machine Learning and Fairness. In this webinar, you'll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems.
  • For more on how organizations should approach assessing the fairness of their AI models, watch this NIPS keynote address from Kate Crawford, Principal Researcher at Microsoft and co-founder of the AI Now Institute at NYU.

Inclusiveness

Guidance
  • Reference the Inclusive Design toolkit and inclusive design practices to learn how to understand and address potential barriers in a product environment that could unintentionally exclude people.
  • The Microsoft Research paper Algorithmic greenlining describes an approach that app developers or decision-makers can use to develop selection criteria that yield high-quality and diverse results in contexts such as college admissions, hiring, and image search.
    Take, for example, choosing job candidate search criteria. There's typically limited information about any candidate's "true quality." An employer's intuition might suggest searching for "computer programmer," which yields high-quality candidates but might return few female candidates. The greenlining algorithm suggests alternative queries that are similar but more gender-diverse, such as "software developer" or "web developer."

Reliability and safety

Technologies
  • The Data Drift Monitoring feature in Azure Machine Learning detects changes in the distribution of data that may cause degraded prediction performance, enabling developers to maintain accuracy by adapting the model to reflect changing data.
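The Azure feature is configured through Azure Machine Learning itself, but the core idea is easy to sketch: compare the distribution a model was trained on with the distribution it currently sees. The toy check below uses a two-sample Kolmogorov-Smirnov test on synthetic data; it illustrates drift detection in general, not the Azure API.

```python
# Toy drift check: has a feature's live distribution shifted away from
# its training distribution? (Illustration only, not the Azure ML feature.)
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, size=5000)
live_feature = rng.normal(loc=0.4, size=5000)      # mean has drifted

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}); consider retraining.")
```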

Tools
  • Pandora is a debugging framework designed by Microsoft Research to identify reliability and bias problems within machine learning models. It uses interpretable machine learning techniques, such as decision trees, to discover patterns and identify potential issues.
  • Microsoft AirSim is a valuable open-source tool for improving simulated training environments.

Transparency

Open-source code
  • InterpretML is an open-source package created by Microsoft Research for training interpretable models and explaining black box systems. It implements a number of intelligible models, including the Explainable Boosting Machine (EBM), an improvement over generalized additive models that has both high accuracy and intelligibility. It also supports several methods for generating explanations of black box model behavior or predictions, including SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME).
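A brief, hedged sketch of the package in use: the class and method names below follow InterpretML's documented interface, with a scikit-learn dataset standing in for real data.

```python
# Train an Explainable Boosting Machine and produce a global explanation
# (assumes the interpret and scikit-learn packages are installed).
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Per-feature shape functions showing how each feature moves predictions;
# in a notebook, interpret.show(...) renders them interactively.
global_explanation = ebm.explain_global()
```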

Technologies
  • Azure Machine Learning has a variety of tools that support model transparency. The Model Interpretability feature enables model designers and evaluators to explain why a model makes the predictions it does, which can be used to debug the model, validate that its behavior matches objectives, and check for bias.

Accountability

Guidance
  • Datasheets for datasets is a paper that encourages people assembling training datasets to generate a datasheet with key information such as the motivation, composition, collection process, and recommended uses. Datasheets for datasets have the potential to increase transparency and accountability within the machine learning community, mitigate unwanted biases in machine learning systems, facilitate greater reproducibility of machine learning results, and help researchers and practitioners select more appropriate datasets for their chosen tasks.
  • The Partnership on AI (PAI) is leading a multi-stakeholder initiative called ABOUT ML to develop, test, and promulgate best practices for machine learning documentation. These best practices may include documenting how AI systems were designed and for what purposes, where their data came from and why that data was chosen, how they were trained, tested, and corrected, and what purposes they're not suitable for.

Technologies
  • The DevOps feature in Azure Machine Learning (called MLOps) makes it easier to track, reproduce, and share models and their version histories. It offers centralized management throughout the entire model development process, and helps teams monitor model performance by collecting application and model telemetry.
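Azure Machine Learning supports MLflow as a tracking interface, so the "track and reproduce" part can be sketched as below. The model, parameter, and metric names are placeholders, and the workspace wiring (setting the tracking URI) is assumed rather than shown; consult the Azure ML documentation for the exact setup.

```python
# Minimal experiment-tracking sketch with MLflow, which Azure Machine
# Learning accepts as a tracking backend. Names and values are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():                        # one versioned, reproducible run
    model = LogisticRegression(max_iter=200, C=0.5).fit(X, y)
    mlflow.log_param("C", 0.5)                  # record configuration
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")    # store the model artifact
```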

References

  1. Microsoft, "Software Engineering for Machine Learning: A Case Study." Saleema Amershi, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, Thomas Zimmermann, 2019.

  2. Reuters, "Microsoft turned down facial-recognition sales on human rights concerns." Joseph Menn, 17 April 2019.

  3. Microsoft, "The Future Computed: Artificial Intelligence and its role in society." Brad Smith and Harry Shum, 17 Jan 2018.

  4. Microsoft, "Guidelines for Human-AI Interaction." Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, Eric Horvitz, May 2019.

  5. Microsoft, "Responsible bots: 10 guidelines for developers of conversational AI." November 2018.

  6. Microsoft Azure, "Transparency Note: Azure Cognitive Services Face API." 6 May 2019.

  7. Microsoft, "Six principles to guide Microsoft's facial recognition work." Rich Sauer, 17 December 2018.

  8. Microsoft, "Facial recognition technology: The need for public regulation and corporate responsibility." Brad Smith, 13 July 2018.

  9. Microsoft, "Project Laplace."

  10. Microsoft, "Securing the future of AI and machine learning at Microsoft." Andrew Marshall, 7 February 2019.

  11. Microsoft, "Microsoft SEAL."

  12. National Science Foundation, "A Private Data Sharing Interface."

  13. University of Chicago, "Aequitas."

  14. Microsoft, "A Reductions Approach to Fair Classification." Association for Computing Machinery, March 2018.

  15. NIPS, "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings." Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai, 21 July 2016.

  16. SSRN, "Fairness and Abstraction in Sociotechnical Systems." Andrew Selbst, Danah Boyd, Sorelle Friedler, Suresh Venkatasubramanian, Janet Vertesi, 23 August 2018.

  17. Cornell University, "Counterfactual Fairness." Matt J. Kusner, Joshua R. Loftus, Chris Russell, Ricardo Silva, 8 March 2018.

  18. GitHub, FairLearn.

  19. NIPS, "The Trouble with Bias – NIPS 2017 Keynote." Kate Crawford, 5 December 2017.

  20. Microsoft Research Webinar Series, "Machine Learning and Fairness." Jenn Wortman Vaughan, Hanna Wallach. 2018.

  21. Microsoft, "Inclusive Design."

  22. Microsoft, "Inclusive Microsoft Design Toolkit." 2016.

  23. Microsoft, "Algorithmic Greenlining: An Approach to Increase Diversity." Christian Borgs, Jennifer Chayes, Nika Haghtalab, Adam Tauman Kalai, Ellen Vitercik, January 2019.

  24. Microsoft Research, "Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure." Besmira Nushi, Ece Kamar, Eric Horvitz.

  25. GitHub, AirSim.

  26. GitHub, InterpretML.

  27. Microsoft, "Creating AI glass boxes – Open sourcing a library to enable intelligibility in machine learning." 10 May 2019.

  28. Cornell University, "Datasheets for Datasets." Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford, 14 April 2019.

  29. Partnership on AI, ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles).