Insight by Microsoft

AI experimentation unlocks enhanced mission capabilities


This content is provided by Microsoft.

Agencies across the government are using generative AI (GenAI) tools to accelerate operational tasks with promising results. From automating content creation to finding insights hidden in an ocean of data, GenAI frees agency employees at every level to focus on higher-level work and decision-making.

As missions evolve, the use cases for these tools also need to adapt. AI models, such as those behind OpenAI’s ChatGPT, are becoming more capable, and GenAI is poised to be infused into an expanding array of mission-centric processes.

Experimentation is the key to uncovering game-changing AI-powered capabilities. By testing where GenAI does—and doesn’t—add value, agencies can determine how these systems can transform operations. But achieving this also requires understanding the persistent myths about GenAI’s risks and accuracy.

Experiment to take advantage of GenAI’s interactivity

Many mission use cases currently focus on automating tedious tasks, such as summarizing vast amounts of data or creating documents. Condensing a 400-page document—or a warehouse full of 400-page documents—to its high points can quickly give human decision-makers the insight they need. Similarly, creating an RFP or technical document in minutes instead of days can enable users to stay ahead of changing conditions.

The near future will bring extensive multi-modal searches that can include images, video, sensor and satellite input, and more, providing more complete results. This capability may soon lead to functions such as mission planning, a perfect opportunity for experimentation.

For example, if the imagery in a mapping application is damaged or not high-resolution enough, a user could ask the system, “Create a mission plan to generate imagery from specific locations to provide these details.” Users can quickly finalize the output and operationalize the plan.

“Can we do this?”

The essential question to ask is, “What if?” Different missions may need specific AI capabilities, such as geolocation and time-frame awareness, that are only now becoming available. Trying new use cases in limited pilot programs or sandboxes can quickly prove whether an idea is worth pursuing.

A secure cloud platform makes it straightforward to stand up a test environment, enabling teams to mimic real-world functions using actual data and systems without impacting current operations.

Roadblocks to experimentation

Misgivings about GenAI persist, whether valid or not, and these issues can impede new use cases. Overall, there are three main areas of concern:

  1. Accuracy — The media frequently focuses on how results from GenAI tools can be off-base. The culprit is often a lack of mission-specific data. A technique called retrieval-augmented generation (RAG) supplements the model at query time with data the user provides, both internal information and specific third-party sources. Bringing in information from outside the model’s initial training provides context and details that make results more accurate and relevant (see the sketch after this list). Contrary to common belief, this does not create new security risks.
  2. Security and privacy — A persistent myth is that user inputs and prompts become part of the system’s training. In reality, “your data is your data.” It stays within the agency’s secure environment and never reaches the model’s developer or cloud providers. For example, Microsoft’s Azure OpenAI Service provides a platform to use the model’s capabilities, but all of the platform’s protections still apply. Private and personal information remains within the agency’s domain, and security protocols such as authentication and access controls remain in force. Choosing a cloud platform and tools that are secure by design is crucial, but using GenAI in this way does not add an agency’s proprietary data to the AI model’s training.
  3. Usefulness — For the foreseeable future, the path forward for GenAI is to augment personnel, like an intelligent assistant or copilot. Integrating GenAI into mission applications will allow humans to focus on decision-making instead of tedious, repetitive tasks. GenAI simply adds a layer of automation that responds to natural-language prompts. In every case, people decide whether to accept the results and apply them to mission tasks.
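
To make the retrieval-augmented pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative: the sample documents, the keyword-overlap retriever (production systems typically use vector embeddings and a search index), and the call_model placeholder, which stands in for whatever GenAI endpoint an agency has deployed, such as a secured Azure OpenAI Service deployment.

    # Minimal retrieval-augmented generation (RAG) sketch.
    # Agency-provided documents; the model was never trained on these.
    DOCUMENTS = [
        "Site Alpha imagery was last refreshed in March and is low resolution.",
        "The RFP template requires a security appendix and a cost schedule.",
        "Sensor feed 7 covers the northern corridor at 10-minute intervals.",
    ]

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        """Rank documents by words shared with the query (naive retrieval)."""
        words = set(query.lower().split())
        ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query: str, context: list[str]) -> str:
        """Inject retrieved agency data into the prompt; the model itself is unchanged."""
        joined = "\n".join(f"- {c}" for c in context)
        return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

    def call_model(prompt: str) -> str:
        # Placeholder: send the prompt to the agency's GenAI endpoint
        # and return its reply.
        return f"[model response to: {prompt[:60]}...]"

    query = "How current is the Site Alpha imagery?"
    print(call_model(build_prompt(query, retrieve(query, DOCUMENTS))))

The point the sketch illustrates is that the agency’s data enters only as prompt context at query time; nothing is added to the model’s training, which is why the approach does not create the security exposure described above.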

Empowering the human in the loop

Much as avionics reduce the cognitive load on a pilot, the essential purpose of GenAI is to give the user comprehensive, timely information that simplifies decision-making. When integrated into mission applications and workflows, GenAI will provide decision-support automation, freeing mission operators to focus on achieving the objective.

The key to experimentation is finding a workflow, process, or task that could be accelerated or improved through automation — from training simulations to predictive analytics — and testing it using real-world data in a secure environment.

Working with a trusted partner to create that testbed in the cloud can save time while providing confidence in the outcomes. By embracing experimentation with GenAI, agencies can unlock its potential to drive mission innovation, evolution, and enhancement.

Read more: Classified Cloud for the US Federal Government (microsoft.com)

