Responsible bots: 10 guidelines for developers of conversational AI

More people are using bots in their everyday lives, whether to get a quick answer to a customer service question or to handle tasks like managing a calendar, checking the weather or ordering pizza. Bots, or more generally conversational AI, can help people achieve more, and we are only starting to see their potential to augment what we can do.

For people and society to realize the full potential of bots, they need to be designed so that they earn people's trust. These guidelines are aimed at helping you design a bot that builds trust in the company and service it represents. They are not intended as legal advice, and you should separately ensure that your bot complies with the law in this area, which is developing rapidly. In designing your bot, you should also consider the broad set of responsibilities you have when developing any data-centric AI system, including ethics, privacy, security, safety, inclusion, transparency and accountability. See, for example, Microsoft’s six principles for the responsible development of AI published in the January 2018 book, The Future Computed.

Learn more about our approach to AI

Microsoft introduces guidelines for developing responsible conversational AI
Official Microsoft Blog | November 14, 2018