Research Collection: Research Supporting Responsible AI


Editor’s Note: In the diverse and multifaceted world of research, individual contributions can add up to significant results over time. In this new series of posts, we’re connecting the dots to provide an overview of how researchers at Microsoft and their collaborators are working towards significant customer and societal outcomes that are broader than any single discipline. Here, we’ve curated a selection of the work Microsoft researchers are doing to advance responsible AI. Researchers Saleema Amershi, Ece Kamar, Kristin Lauter, Jenn Wortman Vaughan, and Hanna Wallach contributed to this post.

Microsoft is committed to the advancement and use of AI grounded in principles that put people first and benefit society. We are putting these principles into practice throughout the company by embracing diverse perspectives, fostering continuous learning, and proactively responding as AI technology evolves.

Researchers at Microsoft are making significant contributions to the advancement of responsible AI practices, techniques, and technologies – spanning human-AI interaction and collaboration, fairness, intelligibility and transparency, privacy, reliability and safety, and other areas of research.

Multiple research efforts on responsible AI at Microsoft have been supported and coordinated by the company’s Aether Committee and its set of expert working groups. Aether is a cross-company board that plays a key role in the company’s work to operationalize responsible AI at scale, formulating recommendations on issues and processes and hosting deep dives on technical challenges and tools around responsible AI.

Aether working groups focus on important opportunity areas, including human-AI interaction and collaboration, bias and fairness, intelligibility and transparency, reliability and safety, engineering practices, and sensitive uses of AI. Microsoft researchers actively lead and participate in the work of Aether, conducting research across disciplines and engaging with organizations and experts inside and outside of the company.

We embrace open collaboration across disciplines to strengthen and accelerate responsible AI, spanning fields from software engineering and development to social sciences, user research, law, and policy. To further this collaboration, we open-source many tools and datasets that others can use to contribute and build upon.

This work builds on Microsoft’s long history of innovation to make computing more accessible and dependable for people around the world – including the creation of the Microsoft Security Development Lifecycle, the Trustworthy Computing initiative, and pioneering work in accessibility and localization.

“Responsible AI is really all about the how: how do we design, develop and deploy these systems that are fair, reliable, safe and trustworthy. And to do this, we need to think of Responsible AI as a set of socio-technical problems. We need to go beyond just improving the data and models. We also have to think about the people who are ultimately going to be interacting with these systems.”

Dr. Saleema Amershi, Principal Researcher at Microsoft Research and Co-chair of the Aether Human-AI Interaction & Collaboration Working Group

This page provides an overview of some key areas where Microsoft researchers are contributing to more responsible, secure and trustworthy AI systems. For more perspective on responsible AI and other technology and policy issues, check out our podcast with Microsoft President and Chief Legal Officer Brad Smith. For background on the Aether Committee, listen to this podcast with Microsoft’s Chief Scientist and Aether chair Eric Horvitz.

This is by no means an exhaustive list of efforts; read our blog, listen to our podcast, and subscribe to our newsletter to stay up to date on all things research at Microsoft.

Learn more about Microsoft’s commitment to responsible AI. For more guidelines and tools to help responsibly use AI at every stage of innovation, visit the Responsible AI resource center.

Fairness

The fairness of AI systems is crucially important now that AI plays an increasing role in our daily lives. That’s why Microsoft researchers are advancing the frontiers of research on this topic, focusing on many different aspects of fairness.

For those who wish to prioritize fairness in their own AI systems, Microsoft researchers, in collaboration with Azure ML, have released Fairlearn, an open-source Python package that enables developers of AI systems to assess their systems’ fairness and mitigate any negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status. Fairlearn, which focuses specifically on harms of allocation or quality of service, draws on two papers by Microsoft researchers on incorporating quantitative fairness metrics into classification settings and regression settings, respectively. Of course, even with precise, targeted software tools like Fairlearn, it’s still easy for teams to overlook fairness considerations, especially when they are up against tight deadlines. This is especially true because fairness in AI sits at the intersection of technology and society and can’t be addressed with purely technical approaches. Microsoft researchers have therefore co-designed a fairness checklist to help teams reflect on their decisions at every stage of the AI lifecycle, in turn helping them anticipate fairness issues well before deployment.
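For teams that want to see what this looks like in practice, here is a minimal sketch of assessing and then mitigating a group fairness gap with Fairlearn. The data, sensitive feature, and model below are illustrative placeholders, and exact APIs can differ across Fairlearn versions.

```python
# A minimal Fairlearn-style assess-and-mitigate sketch; data and feature names
# are hypothetical placeholders, not a real benchmark.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Toy data: features X, labels y, and a binary sensitive feature (group id).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Assess: compare accuracy across the groups defined by the sensitive feature.
mf = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                 sensitive_features=group)
print(mf.by_group)        # per-group accuracy
print(mf.difference())    # gap between best- and worst-served groups

# Mitigate: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
pred_mitigated = mitigator.predict(X)
```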

Transparency and Intelligibility

Intelligibility can uncover potential sources of unfairness, help users decide how much trust to place in a system, and generally lead to more usable products. It can also improve the robustness of machine learning systems by making it easier for data scientists and developers to identify and fix bugs. Because intelligibility is a fundamentally human concept, it’s crucial to take a human-centered approach to designing and evaluating methods for achieving intelligibility. That’s why Microsoft researchers are questioning common assumptions about what makes a model “interpretable,” studying data scientists’ understanding and use of existing intelligibility tools and how to make these tools more usable, and exploring the intelligibility of common metrics like accuracy.

For those eager to incorporate intelligibility into their own pipeline, Microsoft researchers have released InterpretML, an open-source Python package that exposes common model intelligibility techniques to practitioners and researchers. InterpretML includes implementations of both “glassbox” models (like Explainable Boosting Machines, which build on Generalized Additive Models) and techniques for generating explanations of blackbox models (like the popular LIME and SHAP, both developed by current Microsoft researchers).
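As a quick illustration of the glassbox workflow, the following sketch trains an Explainable Boosting Machine and renders its explanations with InterpretML. The toy dataset is a placeholder, and APIs may vary across package versions.

```python
# A minimal InterpretML sketch: train a glassbox model and view its explanations.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Placeholder data; in practice this would be a real tabular dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())

# Local explanation: why the model predicted what it did for the first few rows.
show(ebm.explain_local(X[:5], y[:5]))
```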

Beyond model intelligibility, a thorough understanding of the characteristics and origins of the data used to train a machine learning model can be fundamental to building more responsible AI. The Datasheets for Datasets project proposes that every dataset be accompanied by a datasheet that documents relevant information about its creation, key characteristics, and limitations. Datasheets can help dataset creators uncover possible sources of bias in their data or unintentional assumptions they’ve made, help dataset consumers figure out whether a dataset is right for their needs, and help end users gain trust. In collaboration with the Partnership on AI, Microsoft researchers are developing best practices for documenting all components of machine learning systems to build more responsible AI.
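To make the idea concrete, here is a minimal, hypothetical sketch of recording datasheet-style documentation alongside a dataset, using the top-level question areas proposed in the Datasheets for Datasets work; the field values are placeholders, not a prescribed schema.

```python
# A hypothetical sketch of shipping datasheet-style documentation with a dataset.
import json

datasheet = {
    "motivation": "Why was the dataset created, and by whom?",
    "composition": "What do the instances represent, and how many are there?",
    "collection_process": "How was the data acquired, and over what timeframe?",
    "preprocessing": "What cleaning or labeling was applied to the raw data?",
    "uses": "What tasks is the dataset suited (and not suited) for?",
    "distribution": "How is the dataset shared, and under what license?",
    "maintenance": "Who maintains the dataset, and how are errata handled?",
}

# Store the documentation next to the data so consumers can assess fit for purpose.
with open("dataset_datasheet.json", "w") as f:
    json.dump(datasheet, f, indent=2)
```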

Reliability and Safety

Reliability is a principle that applies to every AI system that functions in the world and is required for creating trustworthy systems. A reliable system functions consistently and as intended, not only under the lab conditions in which it was trained, but also in the open world and when under attack from adversaries. When systems operate in the physical world, or when their shortcomings can pose risks to human lives, problems with reliability translate into safety risks.

To understand the way reliability and safety problems occur in AI systems, our researchers have been investigating how blind spots in data sets, mismatches between training environments and execution environments, distributional shifts, and problems in model specification can lead to shortcomings in AI systems. Given these varied sources of failure, the key to ensuring system reliability is rigorous evaluation during system development and deployment, so that unexpected performance failures are minimized and system developers are guided toward continuous improvement. That is why Microsoft researchers have been developing new techniques for model debugging and error analysis that can reveal patterns correlated with regions of disproportionately high error in evaluation data. Current efforts in this space include turning research ideas into tools for developers to use.
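The sketch below illustrates the basic pattern behind this kind of error analysis: disaggregate evaluation results into cohorts and look for regions with disproportionately high error rates. The cohort definitions and column names are hypothetical.

```python
# A minimal sketch of disaggregated error analysis over hypothetical cohorts.
import pandas as pd

eval_df = pd.DataFrame({
    "age_band":   ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "image_blur": ["low",   "high",  "low",   "high",  "low", "high"],
    "correct":    [1,       0,       1,       1,       1,     0],  # 1 = correct prediction
})

# Error rate per cohort, defined by combinations of input characteristics.
error_by_cohort = (
    eval_df.assign(error=lambda d: 1 - d["correct"])
           .groupby(["age_band", "image_blur"])["error"]
           .mean()
           .sort_values(ascending=False)
)
print(error_by_cohort)  # cohorts at the top are candidates for targeted fixes
```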

We recognize that when AI systems are used in applications that are critical for society, often to support human work, aggregate accuracy is not sufficient to quantify machine performance. Researchers have shown that model updates can introduce backward-compatibility issues (i.e., new errors that appear as a result of an update) even when overall model accuracy improves, which highlights that model performance should be seen as a multi-faceted concept with human-centered considerations.
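One way to quantify this, sketched below with illustrative data, is to measure how many of the examples the previous model handled correctly are still handled correctly after an update, alongside the usual aggregate accuracy.

```python
# A minimal sketch of one backward-compatibility measure: of the examples the old
# model got right, what fraction does the updated model also get right?
# The labels and predictions are illustrative.
import numpy as np

y_true   = np.array([1, 0, 1, 1, 0, 1, 0, 1])
old_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])   # old model: 6/8 correct
new_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])   # new model: 7/8 correct overall

old_correct = old_pred == y_true
new_correct = new_pred == y_true

backward_compat = new_correct[old_correct].mean()  # P(new correct | old correct)

print(f"accuracy: {old_correct.mean():.2f} -> {new_correct.mean():.2f}")
print(f"backward compatibility: {backward_compat:.2f}")  # < 1.0 means new errors
```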

Human-AI Interaction and Collaboration

Advances in AI have the potential to enhance human capabilities and improve our lives. At the same time, the complexities and probabilistic nature of AI-based technologies present unique challenges for safe, fair, and responsible human-AI interaction. That’s why Microsoft researchers are taking a human-centered approach to ensure that what we build benefits people and society, and that how we build it begins and ends with people in mind.

A human-centered approach to AI starts with identifying a human or societal need and then tailoring AI technologies to support that need. Taking this approach, Microsoft researchers are creating new AI-based technologies to promote human and societal well-being, including technologies to augment human capabilities, to support mental health and focus and attention, and to understand the circulation patterns of fake news.

A human-centered approach to technology development also emphasizes the need for people to effectively understand and control those technologies in order to achieve their goals. This is inherently difficult for AI technologies that behave in probabilistic ways, may change over time, and may be based on multiple complex and entangled models. Microsoft researchers are therefore developing guidance and exploring new ways to support intuitive, fluid, and responsible human interaction with AI, including how to help people decide when to trust an AI system and when to question it, how to set appropriate expectations about an AI system’s capabilities and performance, how to support safe hand-offs between people and AI-based systems, and how to enable people and AI systems to interact and collaborate in physical space.

Finally, a human-centered approach to responsible AI requires understanding the unique challenges practitioners face in building AI systems and then working to address those challenges. Microsoft researchers are therefore studying data scientists, machine learning software engineers, and interdisciplinary AI-UX teams, and creating new tools and platforms to support data analysis, characterize and debug AI failures, and develop human-centered AI technologies such as emotion-aware and physically situated AI systems.

Private AI

Private AI is a Microsoft Research project to enable Privacy Preserving Machine Learning (PPML). The CryptoNets paper from ICML 2016 demonstrated that deep learning on encrypted data is feasible using a new technology called Homomorphic Encryption. Practical solutions and approaches for Homomorphic Encryption were pioneered by the Cryptography Group at Microsoft Research in this 2011 paper, which showed a wide range of applications for providing security and privacy in the cloud for healthcare, genomics, and finance.

Homomorphic Encryption allows for computation to be done on encrypted data, without requiring access to a secret decryption key. The results of the computation are encrypted and can be revealed only by the owner of the key. Among other things, this technique can help to preserve individual privacy and control over personal data.
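To make the idea concrete, here is a toy, Paillier-style additively homomorphic scheme in a few lines of Python. It is only an illustration of computing on ciphertexts: the parameters are insecurely small, and Microsoft SEAL implements different, lattice-based schemes (BFV and CKKS) rather than this one.

```python
# Toy Paillier-style additively homomorphic encryption, for illustration only.
# The primes are tiny and insecure; real deployments use very large keys.
from math import gcd

p, q = 61, 53
n, n_sq, g = p * q, (p * q) ** 2, p * q + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)          # part of the secret key

def encrypt(m, r):
    # r must be coprime with n; in practice it is chosen at random per message
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

c1, c2 = encrypt(20, 17), encrypt(22, 23)
c_sum = (c1 * c2) % n_sq       # adding plaintexts by multiplying ciphertexts
assert decrypt(c_sum) == 42    # only the holder of the secret key sees the result
```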

Microsoft researchers have been working to make Homomorphic Encryption simpler and more widely available, particularly through the open-source SEAL library. To learn more, listen to this podcast, watch this webinar on Private AI from the National Academies, or see this webinar from Microsoft Research on SEAL.

Partnerships and Support for Student Work

Working with Academic and Commercial Partners

Microsoft Research supports and works closely with Data & Society, which is committed to identifying thorny issues at the intersection of technology and society, providing and encouraging research that can ground informed, evidence-based public debates, and building a network of researchers and practitioners who can anticipate issues and offer insight and direction. Microsoft Research also supports the AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.

Microsoft is also a member of the Partnership on AI (PAI), a multi-stakeholder organization that brings together academics, researchers, civil society organizations, companies building and utilizing AI technology, and other groups working to better understand AI’s impacts. Microsoft researchers are contributing to a number of PAI projects, including the ABOUT ML work referenced above.

Supporting Student Work on Responsible AI

The Ada Lovelace Fellowship and PhD Fellowship continue a Microsoft Research tradition of providing promising doctoral students in North America with funding to support their studies and research. Many of these fellowships’ 2020 recipients are doing work to advance the responsible and beneficial use of technology, including enhancing fairness in natural language processing, reducing bias, promoting social equality, and improving the mental, emotional, and social health of people with dementia.

