Decoding AI: Part 6, Creating boundary conditions in generative AI

Siddhartha Chaturvedi

Miri Rodriguez

Welcome to Part 6 of our Decoding AI: A government perspective learning series! In our previous blog, we discussed the importance of trust in AI, especially for large language models and generative AI. In this module, we dive deeper into the topic on everybody's mind as we explore the boundaries (pardon the pun) of generative AI: the constraints or rules that guide the outputs of these technologies, how to create them, and why they are essential for the US government. — Siddhartha Chaturvedi, Miri Rodriguez

What are boundary conditions and why do they matter?

Boundary conditions are the limits or specifications applied to generative AI systems, such as large language models (LLMs), which are neural networks that can process and generate natural language at scale. Boundary conditions can shape various aspects of generative AI outputs, such as length, format, style, tone, or base content.

Boundary conditions matter because they can help ensure that generative AI produces outputs that are relevant, accurate, and appropriate for the intended domain, task, and audience. Boundary conditions can also help prevent or mitigate potential risks or harms that generative AI can cause, such as errors, inconsistencies, inaccuracies, bias, noise, distortion, or misuse.

As we deal with complex and sensitive issues, such as national security, public health, or social justice, it’s critical that we test our AI systems for any divergences from the existing state of the art. Generative AI can be a powerful tool for the US Federal Government, but it also requires careful and responsible use, keeping in mind the quality, reliability, and trustworthiness of the information and services that the government provides to the public.


As we see the implications of generative AI across multiple industries, we need to bring technology patterns and industry standards together to create robust boundary conditions.

How do you create boundary conditions in generative AI?

Creating boundary conditions in generative AI is not a one-size-fits-all exercise, but rather a context-dependent and iterative process that involves various techniques and tools. Some of the most common and widely used ones are:

Prompt engineering: This is the most widely used technique, and perhaps the most useful one to get started with while experimenting. It involves providing a short natural language input, or prompt, to an LLM and letting it generate a longer, coherent output. Prompt engineering can help create boundary conditions by specifying the desired outcome, such as the type, length, or format of the output.

For example, a prompt could be "Write a tweet of no more than 280 characters, summarizing the main point of the following article:" followed by the URL or text of the article. The LLM would then generate a tweet based on the prompt and the article. {My personal favorite prompt is "Format this for <insert the persona>"; watch it reformat an article or a body of text.}
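To make this concrete, here is a minimal sketch of the tweet example using the Azure OpenAI chat completions API in Python. The endpoint, key, deployment name, and article text are placeholders for your own environment, and the post-hoc length check is one illustrative way to verify the constraint, not the only one.

```python
from openai import AzureOpenAI

# Placeholder credentials: substitute your own Azure OpenAI resource values.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

article_text = "<paste the article text here>"

response = client.chat.completions.create(
    model="<your-deployment-name>",  # your LLM deployment
    messages=[
        # The system message states the boundary condition up front.
        {"role": "system", "content": "You summarize articles as tweets. Never exceed 280 characters."},
        {"role": "user", "content": "Write a tweet of no more than 280 characters, "
                                    "summarizing the main point of the following article:\n\n" + article_text},
    ],
)

tweet = response.choices[0].message.content
# Verify the boundary condition held; models can overshoot, so check rather than trust.
if len(tweet) > 280:
    tweet = tweet[:277] + "..."
print(tweet)
```

Note the two layers here: the prompt states the constraint, and a small piece of ordinary code enforces it, a useful pattern whenever the boundary condition is machine-checkable.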

Grounding: This is a technique that involves providing additional information or context to an LLM, such as facts, data, or references, to help it generate more accurate and relevant outputs. Grounding can help create boundary conditions by specifying the source, scope, or quality of the output.

For example, a grounding statement could be "According to the US Census Bureau, the population of the United States was 331.4 million as of April 1, 2020." The LLM would then use this information to generate outputs that are consistent with it.
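Continuing the sketch above (and reusing the same client), one simple way to ground a model is to place the trusted facts in the system message and instruct the model to answer only from them. The facts, question, and instruction wording are illustrative assumptions; production systems typically retrieve grounding data at query time, for example from a search index.

```python
# Grounding data; in practice this is often retrieved from a trusted store at query time.
grounding_facts = (
    "According to the US Census Bureau, the population of the United States "
    "was 331.4 million as of April 1, 2020."
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {
            "role": "system",
            # Constrain the model to the provided facts and give it an explicit way out.
            "content": "Answer using ONLY the facts below. If the facts do not "
                       "contain the answer, say you do not know.\n\nFacts:\n" + grounding_facts,
        },
        {"role": "user", "content": "What was the US population as of April 1, 2020?"},
    ],
)
print(response.choices[0].message.content)
```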

Fine-tuning: For specific use cases, we might need to retrain or adapt an LLM on a specific domain or task, using a smaller and more focused dataset, to improve its performance and quality. Fine-tuning can help create boundary conditions by specifying the domain, task, or style of the output.

For example, an LLM could be fine-tuned on a dataset of government documents to make it more suitable for generating or analyzing government texts.
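As a sketch of what that preparation could look like, the snippet below writes training examples in the chat-style JSONL format used by OpenAI-compatible fine-tuning endpoints (Azure OpenAI fine-tuning accepts the same shape) and submits a job. The documents, summaries, and base model name are hypothetical placeholders.

```python
import json

# Hypothetical training pairs: agency documents alongside the desired output style.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Summarize government documents in plain language."},
            {"role": "user", "content": "<full text of a policy memo>"},
            {"role": "assistant", "content": "<plain-language summary approved by a reviewer>"},
        ]
    },
    # ...hundreds more examples like this...
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the dataset and start a fine-tuning job (base model name is a placeholder).
uploaded = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-35-turbo")
print(job.id)
```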

We are still seeing some early experimentation here, where the conversation of grounding versus fine-tuning seems to be taking the world by storm, and the answer is the quintessential "it depends." {We are happy to discuss this in our next office hours; hope to see you there!}

How can I get started with my own Copilot?

Depending on your use case, your deployment methodology, and the tools you like to use, you could start creating these boundary conditions by using tested and hardened AI services, grounding in your own data, building new copilots on existing architectures and infrastructure, or even training your own models.

  • Microsoft Azure Cognitive Services: Some of the more well-defined tasks, such as entity recognition and translation, have been tested over the years, and for those specific tasks we should reuse well-documented services where necessary, using AI orchestration tools like Azure Machine Learning Studio to stitch them together (see the sketch after this list).
  • Microsoft Power Platform: This is a low-code or no-code platform that enables users to create apps, workflows, chatbots, and dashboards, using AI capabilities. Some of the components that are relevant for generative AI are Power Apps, which can create custom applications with AI features; Power Automate, which can automate tasks and processes with AI triggers and actions; and Power Virtual Agents, which can create conversational agents with natural language capabilities.
  • Microsoft Copilot Studio: Announced at Microsoft Ignite, this tool primarily extends Microsoft 365 Copilot, allowing you to customize the Copilot in Microsoft 365 with your own datasets, automation processes, and even your own copilots. It's an excellent way to pair existing tools with new intelligence and new capabilities.
  • Azure Machine Learning Studio: This is a cloud-based platform that enables developers and data scientists to build, train, and deploy machine learning models, including LLMs and generative AI systems. Azure Machine Learning provides tools and frameworks for data preparation, model development, experimentation, automation, and deployment, as well as security and governance features.
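For the first item above, here is a minimal sketch of calling a pre-built entity-recognition service through the Azure AI Text Analytics Python SDK (azure-ai-textanalytics). The endpoint and key are placeholders; the point is that a hardened, well-documented service can handle a well-defined task alongside your generative components.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key for your Language resource.
text_client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-api-key>"),
)

documents = [
    "The US Census Bureau reported a population of 331.4 million on April 1, 2020."
]

# Recognize named entities (organizations, dates, quantities, and so on).
for doc in text_client.recognize_entities(documents):
    if not doc.is_error:
        for entity in doc.entities:
            print(entity.text, entity.category, round(entity.confidence_score, 2))
```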

The road ahead

Creating boundary conditions in generative AI is a challenging but rewarding endeavor that can help you leverage the power and potential of these technologies while ensuring their quality, reliability, and trustworthiness.

In our next discussion, we will look at the future of bringing the entire ecosystem together and hear from our partners who have been on this journey with us. It's where computation not only becomes powerful but also acquires a layer of indeterminacy that could reshape our understanding of what's possible in the AI realm.

Stay up to date 

Get news, updates, and announcements on AI for US government delivered to your inbox. Sign up here.
