AI in government: What should it look like and how do we get there?
Adilson Jardim, the area vice president for public sector engineering at Splunk, explains why data is the key to making AI work in agencies.
With today’s widespread focus on artificial intelligence, it is easy to forget that this is not the first time AI has held a prominent place in the zeitgeist. Back in the mid-1980s, during the first computer revolution, AI was gaining ground as a significant field of research that could revolutionize the world. Then it stalled, and the so-called “AI winter” began. The ideas were right, but computing technology was simply not there yet.
The 1980s might not seem that long ago to those of us who were in high school or college then (wasn’t that just yesterday?), but in technological terms, it was an epoch ago. Fast forward to today: AI has come out of hibernation with a vengeance, propelled by enormous advances in computing capacity and carrying great promise for society.
In few places is AI poised to make as profound an impact as in the missions of federal agencies. Yet while AI is gaining traction in the minds of federal managers, agencies still face several challenges — both technical and in mindset — that must be overcome first.
Federal momentum
If 2018 was the year AI entered the federal collective consciousness, 2019 is the year government starts seriously considering where and how it can help. In February 2019, President Donald Trump signed an executive order calling on the government to invest in, support and accelerate the use of AI initiatives in federal applications.
A month later, fiscal 2020 budget proposals released by the White House showed the federal government preparing to allocate some $4.9 billion to unclassified AI and machine learning research. A week after that, the administration launched AI.gov to be “the hub of all the AI projects being done across the agencies.” The federal AI/ML agenda is beginning to coalesce, but it is still without defined form.
Defining artificial intelligence
One of the challenges that came with AI’s rapid re-entrance into the federal consciousness is a muddying of terminology. The term “AI” is now used to describe solutions across a long continuum of automation capabilities. It is easy to see why: the Defense Advanced Research Projects Agency’s (DARPA) development and use of AI should look very different from that of an agency elsewhere on the machine learning continuum, such as the Agriculture Department’s National Institute of Food and Agriculture. The first step agencies should take, then, is to clearly identify how they define AI and which types of AI fit their various mission objectives.
For many government missions, the most useful AI is not what researchers call artificial general intelligence — that is, AI designed to think and reason for itself. Instead, the government should look for solutions that apply AI to decisions humans currently make in a single high-volume but low-involvement function. In other words, the mundane tasks that, if automated, would free up resources for more challenging or creative work.
A textbook example is IT operations and cybersecurity, where AI can examine machine data from the surrounding environment — log files from IT systems, Internet of Things (IoT) data, user behavior activity and so on — and use it to derive insights and automate the decisions that provide the most value.
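To make that concrete, here is a minimal sketch of the pattern in Python. It is not any particular product’s pipeline: the event fields, the thresholds and the simple statistical baseline are all stand-ins for what a real deployment would get from a trained model.

```python
from collections import Counter
from statistics import mean, stdev

def failed_logins_per_minute(events):
    """Bucket failed-login events by minute of occurrence."""
    counts = Counter()
    for event in events:
        if event.get("action") == "login" and event.get("status") == "failure":
            counts[event["timestamp"][:16]] += 1  # "YYYY-MM-DDTHH:MM"
    return counts

def anomalous_minutes(counts, threshold=3.0):
    """Flag minutes more than `threshold` standard deviations above the mean."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [(minute, n) for minute, n in counts.items() if (n - mu) / sigma > threshold]

# Synthetic example: a normal trickle of failures, then a burst at 09:30.
events = [{"timestamp": f"2019-03-01T09:{m:02d}:00", "action": "login",
           "status": "failure"} for m in range(30)]           # 1 failure per minute
events += [{"timestamp": "2019-03-01T09:30:00", "action": "login",
            "status": "failure"}] * 40                        # sudden spike
print(anomalous_minutes(failed_logins_per_minute(events)))
# -> [('2019-03-01T09:30', 40)]
```

The value is in the loop it automates: no analyst has to eyeball thousands of log lines per minute to spot the handful of intervals worth investigating.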
Applied to the growing push to improve citizen experience and engagement, such AI might be deployed to automate certain approvals, validate claims or accelerate permit processing. In the near future, I would expect to see AI participating in everything from education loans to tax returns, drastically improving government’s ability to respond to citizen demands and reduce inefficiencies. The applications of AI are far-reaching and wide open, but first comes a little homework.
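As a toy illustration of what that kind of automation might look like, consider the hypothetical triage sketch below. The scoring function stands in for a trained classifier, and every field name and threshold is invented; the point is the routing pattern, in which clear-cut cases are handled automatically and uncertain ones go to a person.

```python
def risk_score(application):
    """Stand-in for a trained model's probability that a case needs scrutiny."""
    score = 0.0
    if application["prior_violations"] > 0:
        score += 0.5
    if application["missing_documents"]:
        score += 0.3
    if application["requested_amount"] > 100_000:
        score += 0.2
    return min(score, 1.0)

def triage(application, auto_approve_below=0.2, escalate_above=0.8):
    """Route clear-cut cases automatically; send the rest to a human."""
    score = risk_score(application)
    if score < auto_approve_below:
        return "auto-approve"
    if score > escalate_above:
        return "escalate"
    return "human review"

application = {"prior_violations": 0, "missing_documents": False,
               "requested_amount": 12_000}
print(triage(application))  # -> auto-approve
```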
Find your dark data
As tempting as it is to deploy a solution that includes AI in its feature set and call it a day, deriving real value from AI is going to rely on quality data to train and support it. However, that is one area still challenging agencies — and the private sector as well.
A recent survey showed that public sector technologists and leaders estimated 56% of their data was still “dark” or “grey,” meaning it was unknown or unusable within the organization. In other words, agencies are missing more than half the picture of their operations. Magnify that by the force multiplier of data insights and AI, and it is hard to put a number on the value agencies are missing.
Getting a better hold on this dark data is a crucial first step agencies must take before adopting AI. Which brings me to my next point…
Don’t over-design to specific use cases
It is tempting, particularly in government, to follow this approach: understand what data you have, think through the use cases and how that data can be deployed, then solicit requests for proposals (RFPs) for a broad solution to achieve the mission. This approach has served government well for decades, but it is simply not agile enough for the anticipated developments in machine learning and applications of AI, especially given the complexity and dynamism of the data involved.
Data is growing too fast and becoming too complex for agencies to build and rely on a rigid architecture designed to ingest, analyze and process only the specific data a mission anticipates. That approach limits agility, flexibility and creativity because it forces data into formats contrary to its nature.
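One way to picture the alternative, often called “schema on read,” is the toy example below: events are kept in their native formats, and fields are extracted only when a question is asked. The formats, fields and queries here are invented for illustration.

```python
import json
import re

# Raw events kept in their native formats; nothing is forced into one schema.
raw_events = [
    '{"src": "10.1.2.3", "bytes": 5120, "uri": "/permits/submit"}',  # JSON app log
    "Mar  1 09:02:11 gw01 DENY tcp 10.1.2.3 -> 172.16.0.9:443",      # syslog-style line
]

def bytes_by_source(events):
    """One question, answered at read time from the structured events."""
    totals = {}
    for line in events:
        try:
            rec = json.loads(line)
        except ValueError:
            continue  # unstructured lines are simply not relevant to this query
        totals[rec["src"]] = totals.get(rec["src"], 0) + rec.get("bytes", 0)
    return totals

def denied_connections(events):
    """A different question over the same raw store; no re-ingestion needed."""
    pattern = re.compile(r"DENY \w+ (\S+) -> (\S+)")
    hits = []
    for line in events:
        match = pattern.search(line)
        if match:
            hits.append(match.groups())
    return hits

print(bytes_by_source(raw_events))     # {'10.1.2.3': 5120}
print(denied_connections(raw_events))  # [('10.1.2.3', '172.16.0.9:443')]
```

Because nothing was forced into a single ingest schema, a question nobody anticipated at collection time can still be answered later.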
A more robust approach is to understand what data you have access to, collect as much of it as is feasible and only then consider what uses there might be for it. Chances are that collecting data you didn’t know you had, or didn’t know you needed, will reveal new uses for that data, new insights into the mission and, ultimately, better mission outcomes. Indeed, numerous organizations that have taken this approach have proven that the effort is not wasted.
For example, the National Ignition Facility at Lawrence Livermore National Laboratory takes this approach to enable a smooth experience for the world-renowned scientists and engineers conducting experiments that ensure the country’s continued competitive advantage in scientific research. While collecting vast amounts of data from a wide range of sensors, including cameras, thermometers and motors, NIF applies algorithms in real time to identify anomalies before they become problems. As a result, NIF engineers can detect when these sensors begin to decay and perform predictive maintenance, avoiding unscheduled downtime for groundbreaking scientific experiments. A machine learning toolkit comes in handy when you don’t know the value of your data yet!
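As a hedged sketch of that pattern, not NIF’s actual pipeline, drift detection can be as simple as comparing a recent rolling average against a calibration baseline and flagging gradual divergence long before the sensor fails outright. The window sizes, tolerance and simulated thermometer below are all invented for illustration.

```python
from collections import deque

def detect_drift(readings, calibrate_n=200, recent_n=50, tolerance=0.02):
    """Average the first `calibrate_n` samples as a healthy baseline, then
    report the first index where the rolling average of the last `recent_n`
    samples drifts from that baseline by more than `tolerance` (fractional)."""
    baseline = sum(readings[:calibrate_n]) / calibrate_n
    recent = deque(readings[:recent_n], maxlen=recent_n)
    for i in range(calibrate_n, len(readings)):
        recent.append(readings[i])
        recent_avg = sum(recent) / recent_n
        if abs(recent_avg - baseline) / abs(baseline) > tolerance:
            return i, baseline, recent_avg
    return None

# Simulated thermometer: steady at 20.0, then a slow upward drift after sample 400.
stream = [20.0 + 0.005 * max(0, i - 400) for i in range(1000)]
hit = detect_drift(stream)
if hit:
    i, base, now = hit
    print(f"drift detected at sample {i}: baseline {base:.2f}, recent avg {now:.2f}")
```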
Just as federal agencies were driven to the cloud a decade ago without being sure how to proceed or how to leverage it, so too are they now being driven to artificial intelligence, with the same uncertainty.
But as with cloud adoption, by all accounts a mainstay of government technology policy today, the best way to begin the data-driven journey toward AI is to dig into the data and start using and experimenting with it. Collect more than you need, and do not worry up front about whether it will be useful. The uses will make themselves clear, and the value will come.
Adilson Jardim is the area vice president for public sector engineering at Splunk.