How agencies can create secure gen AI testing environments
Nov 6, 2023
5 MIN. READ

How do I take advantage of generative AI without putting my agency's data at risk? Federal leaders are concerned about data security and privacy in all things, and emerging technology is top of mind. Safety, security, and trust are the watchwords of the day, with last week's Executive Order on AI detailing requirements for responsible AI use.

And for good reason.

The large language models (LLMs) behind generative AI (gen AI) are powerful, but in most cases they're hosted on external servers, and data is passed back and forth for analysis. Any time information goes to a third party, there are security and privacy concerns. Government agencies in particular need to treat an LLM like any other third party and ensure the data is handled with integrity, encrypted in transmission, and not retained by the third party to retrain its models.

The best way agencies and corporations can begin using gen AI and LLMs safely is by establishing a secure practice, or sandbox, environment.

What is a secure sandbox environment for gen AI, and how do you get one?

Simply put, it's a limited AI deployment that introduces a given system's capabilities to a handful of users, who test it without sensitive information to see whether it's the right choice.
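To make that concrete, the sketch below shows the two rules a sandbox enforces in their simplest possible form: access limited to a handful of enrolled testers, and a refusal to accept anything that looks sensitive. It's written in Python, and the pilot roster and blocked markers are hypothetical placeholders, not a recommendation.

```python
# Minimal sketch of a gen AI sandbox gate. The pilot roster and the
# blocked markers below are hypothetical placeholders for illustration.

PILOT_USERS = {"alice@agency.gov", "bob@agency.gov"}   # a handful of testers
BLOCKED_MARKERS = ("ssn", "social security", "passport", "patient")

def sandbox_gate(user: str, prompt: str) -> str:
    """Admit only enrolled pilot users; refuse prompts that look sensitive."""
    if user not in PILOT_USERS:
        raise PermissionError(f"{user} is not enrolled in the sandbox pilot")
    lowered = prompt.lower()
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        raise ValueError("Prompt appears to contain sensitive data; blocked")
    return prompt  # safe to forward to the model under test

# Usage: sandbox_gate("alice@agency.gov", "Summarize this public press release.")
```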

There are two ways to set up a gen AI sandbox environment. The simplest and most robust is to use a third-party provider. Providers are well aware of security concerns: Their terms of service explain where the data goes, how it's protected in transit, and what it will and will not be used for, and some offer fenced-off products. The benefit of using a third-party provider is that they have the expertise and resources to get a secure solution up and running quickly. And with the number of options available now, there's a good chance you can find one that meets your requirements securely rather than going the route of a public tool such as ChatGPT.
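For a rough sense of what that looks like in practice, here's a hedged sketch of posting a prompt to a provider-hosted model over HTTPS. The endpoint URL, payload fields, and retention flag are hypothetical placeholders; the real names come from your provider's API documentation and terms of service.

```python
# Illustrative sketch of querying a third-party-hosted LLM over HTTPS.
# The endpoint, payload fields, and retention flag are hypothetical;
# consult your provider's documentation and terms of service.
import requests

API_URL = "https://llm.example-provider.com/v1/generate"  # hypothetical endpoint
API_KEY = "..."  # issued by the provider; keep it in a secrets manager, not in code

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},  # encrypted in transit via TLS
    json={
        "prompt": "Summarize this publicly available guidance document.",
        "store_data": False,  # hypothetical flag: ask the provider not to retain inputs
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["text"])  # the response field name also varies by provider
```

Before any real data flows through a call like this, confirm in writing how the provider handles, stores, and trains on what you send.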

The other option is more technical: deploying your own model in your own infrastructure, so nothing touches the cloud at all. But hosting your own model is complicated: It requires the expertise not only to deploy it, but to manage it, maintain it when it breaks, and train people to use it. There's also the matter of cost: By hosting your own model, you're incurring the costs of the server itself, which can be prohibitively expensive.
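For a sense of what self-hosting involves, here's a minimal sketch using the open-source Hugging Face Transformers library to run a model entirely on your own infrastructure, so prompts never leave it. The model name is just an example; in practice, a model of this size needs a capable GPU server, which is where the costs mentioned above come in.

```python
# Minimal sketch of self-hosting an open-source LLM with Hugging Face
# Transformers. The model name is an example; choose one whose license,
# size, and hardware requirements fit your environment.
from transformers import pipeline

# Weights are downloaded once; after that, generation runs locally
# with no external API calls.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

result = generator(
    "Draft a plain-language summary of the following public notice: ...",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```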

Ensure security through a partner

An experienced partner can help you choose the right sandbox environment and make sure it's secure. For example, we're helping CDC experiment with LLMs. We started by deploying an open-source model to CDC's infrastructure to test it and demonstrate its benefits, knowing that it's safely contained within that environment. After experimenting with a self-hosted model, we helped CDC test and evaluate the usability of a third-party model hosted by a cloud provider—providing the agency with hands-on experience with both secure options.

Third parties can supply the hardware, but a partner brings the experience and expertise to explain how the solution works and to help you choose the one that best fits your scenario. A good partner should help demystify the black box that is gen AI—a trusted partner does not simply say, “We fixed it, it works”; instead, they say, “Here's how this works and why.” If it stays a black box, then when that partner goes away, you won't know what you built or how to make good decisions on your own.

At ICF we’re experimenting with the options ourselves. We’re reading through the terms and conditions for our own scenarios, coming across issues in our own tests, and working with the third parties to troubleshoot so we can be confident in the systems and advise our clients better.

Progressing from a sandbox environment to an enterprise solution

Once you're ready to scale your sandbox environment, experience with digital modernization matters. This is something a good partner must bring to the table, as building stable enterprise solutions on rapidly changing technology is a unique challenge. A trusted partner in this space combines technical expertise in this emerging domain with proven experience taking small, tested solutions to larger, widely available ones.

Scalability is another benefit of building your sandbox environment with a third-party provider rather than building your own solution. Third-party gen AI can be scaled to your projects quickly and easily, whereas an internally built solution would require more hardware investment, on-call experts to keep it running, and a custom training program for staff.

On the other hand, the knowledge that comes from building your own sandbox environment is excellent for expanding to a larger, enterprise-level program: Through the build, deployment, and experimentation, you learn every facet of how the solution works, what it can and can't do, and why.

Keeping people at the center

When you're rolling out a gen AI solution, especially in government, employee education is essential. Keeping the solution secure means teaching employees how it works, which kinds of information they should submit, which they should never submit, and whether the environment is safe for personally identifiable information. But there's a roadmap for this too: Agency leaders can build on their existing training for secure digital communications, like email and chat.
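One way to back up that training is to screen prompts automatically before they leave the building. The sketch below is deliberately simple; the regex patterns are illustrative rather than exhaustive, and a real deployment would need far more thorough screening. It redacts a few obvious PII patterns before a prompt reaches the model.

```python
# Illustrative sketch: redact obvious PII patterns before a prompt is sent
# to a gen AI tool. These patterns are examples only; production screening
# must be far more thorough.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("Reach John at 555-867-5309 or john.doe@agency.gov"))
# -> "Reach John at [PHONE REDACTED] or [EMAIL REDACTED]"
```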

Generative AI is new, but it’s doable. It’s only uncomfortable—for now—because we are learning how to use it well. Thankfully the roadmap is already established and includes safe ways to harness the benefits without the risks.

Meet the author
Eddie Kirkland, Principal Data Scientist

Eddie is a statistics and data science expert with more than 20 years of experience in data and software engineering. He specializes in guiding data-rich projects from concept to delivery, working directly with clients to identify areas of need, developing custom solutions in an agile framework, and delivering clear and meaningful results.
