Decoding AI: Part 5, Navigating trust in the age of large language models and generative AI

Miri Rodriguez

T'Neil Walea

Keith Bauer

Greetings and welcome to Part 5 of our Decoding AI: A government perspective learning series! Previously, we explored the expansive capabilities of multimodal AI, spanning sectors from agriculture to urban living. In this module, we pivot to a matter of paramount importance: trust in AI, with a focus on large language models and generative AI. — Keith Bauer, T’Neil Walea, Miri Rodriguez, Siddhartha Chaturvedi

The central role of trust in AI

Trust is not a single feeling, but a complex and dynamic phenomenon. It shapes how we perceive and interact with the world around us, including the AI technologies that are becoming more prevalent in our lives.

The growing sophistication of Large Language Models (LLMs) and Generative AI (GAI) amplifies the need for these technologies to be transparent, ethical, and beneficial.

Building Generative AI responsibly requires a fine balance across multiple facets.

A structured approach: Microsoft’s responsible AI framework

As we step into the complex and, at times, uncharted terrain of trust in AI, it becomes increasingly important to align with responsible guidelines and principles, such as Microsoft’s responsible AI framework.

This framework emphasizes six core pillars: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Far from being industry jargon, these pillars serve as actionable directives that guide the development and ethical deployment of AI innovations.

Extending trustworthiness to generative AI

At the crossroads of technology and societal impact, the concept of trust turns from theoretical to practical: how can we trust technologies that can generate text or converse like humans? The answer lies in multi-stakeholder collaboration, regular audits for bias and fairness, and real-time feedback mechanisms that adapt the technology as it interacts with its environment.
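To make the idea of a real-time feedback mechanism concrete, here is a minimal sketch in Python. The FeedbackLoop class, the rating scale, and the review threshold are illustrative stand-ins, not part of any specific Microsoft product.

from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: int  # 1 (poor) to 5 (excellent), supplied by the user

@dataclass
class FeedbackLoop:
    """Collects user ratings and flags low-rated outputs for human review."""
    review_threshold: int = 2
    review_queue: List[FeedbackRecord] = field(default_factory=list)

    def record(self, prompt: str, response: str, rating: int) -> None:
        # Route poorly rated responses to a human review queue so the
        # system can be audited and adjusted over time.
        if rating <= self.review_threshold:
            self.review_queue.append(FeedbackRecord(prompt, response, rating))

loop = FeedbackLoop()
loop.record("Summarize the new permit policy.", "<model output>", rating=1)
print(f"{len(loop.review_queue)} response(s) queued for human review")

In practice, the review queue would feed into the governance, audit, and remediation processes described in the next section.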

Practical next steps: Putting trust into action

To navigate this complex landscape, here are some actionable steps, guided by Microsoft’s responsible AI framework, that can assist in your efforts to put “trust into action” as you build sandboxed environments or pilot new generative AI technologies:

  • Accountability: Assign clear roles and responsibilities for the development, deployment, and use of AI systems. Establish governance mechanisms to oversee and review the AI systems and their outcomes. Ensure that there are processes for reporting, auditing, and remediation of any issues or harm caused by the AI systems.
  • Inclusiveness: Engage with diverse stakeholders, including customers, users, employees, partners, regulators, and civil society groups, to understand their needs, expectations, and concerns regarding the AI systems. Design and test the AI systems with diverse data and feedback to ensure they are accessible, usable, and beneficial for everyone. Monitor and evaluate the AI systems for any potential exclusion or discrimination of any groups or individuals.
  • Reliability and safety: Define and adhere to quality standards and best practices for the development and deployment of AI systems. Conduct rigorous testing and evaluation of the AI systems before, during, and after deployment to ensure they perform reliably and safely under all conditions. Implement mechanisms for error detection, correction, and recovery in case of any failures or malfunctions of the AI systems.
  • Fairness: Identify and mitigate any sources of bias or unfairness in the data, algorithms, or outcomes of the AI systems. Use appropriate methods and tools to measure and monitor the fairness of the AI systems (see the first sketch after this list). Provide explanations and justifications for the decisions or actions of the AI systems. Ensure that there are processes for challenging or appealing the decisions or actions of the AI systems.
  • Transparency: Document and communicate the purpose, features, limitations, and intended uses of the AI systems. Provide clear and understandable information about how the AI systems work and how they affect people and society. Enable users to access, understand, and control their data and interactions with the AI systems. Disclose any relevant information about the data sources, methods, assumptions, or uncertainties of the AI systems.
  • Privacy and security: Respect and protect the privacy and security of the data and people involved in or affected by the AI systems. Follow applicable laws and regulations regarding data collection, storage, processing, sharing, and deletion. Use appropriate methods and tools to safeguard the data and the AI systems from unauthorized access, use, modification, or disclosure (see the second sketch after this list).
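As one concrete illustration of the fairness guidance above, the open-source Fairlearn library can measure how a model’s outcomes vary across groups. The sketch below uses synthetic labels, predictions, and a made-up sensitive feature; it is a minimal example, not a complete fairness audit.

# Requires: pip install fairlearn
from fairlearn.metrics import MetricFrame, selection_rate

# Synthetic example data: true labels, model predictions, and a
# sensitive feature (e.g., a demographic group) for each record.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# MetricFrame computes the metric overall and per group.
frame = MetricFrame(
    metrics={"selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=groups,
)

print(frame.by_group)      # selection rate for each group
print(frame.difference())  # largest gap between groups

For the privacy and security guidance, the open-source Microsoft Presidio libraries can detect and redact personally identifiable information before text is stored or sent to a model. This sketch follows the documented quickstart pattern and assumes the required spaCy language model is installed; it is not a complete data-protection solution.

# Requires: pip install presidio-analyzer presidio-anonymizer
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "Contact Jane Doe at 212-555-0101 before the hearing."

# Detect PII entities (names, phone numbers, etc.) in the text.
analyzer = AnalyzerEngine()
findings = analyzer.analyze(text=text, language="en")

# Replace each detected entity with a placeholder before logging
# or forwarding the text to a model.
anonymizer = AnonymizerEngine()
redacted = anonymizer.anonymize(text=text, analyzer_results=findings)
print(redacted.text)  # e.g., "Contact <PERSON> at <PHONE_NUMBER> before the hearing."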

Get started today with Microsoft’s responsible AI resources to implement the six pillars of responsible AI.

The road ahead

The trust equation in AI is a multifaceted and dynamic challenge that evolves as the technology advances. It is a space where computation not only becomes more powerful but also acquires a layer of indeterminacy that could reshape our understanding of what is possible in the AI realm.

Stay up to date 

Get news, updates, and announcements on AI for US government delivered to your inbox: sign up here.
