Unleashing the power of ChatGPT in government
Since the first chatbot, ELIZA, was deployed over five decades ago, artificial intelligence has evolved from a basic tool into one of the most powerful resources available.
Government agencies acknowledge AI’s potential and are striving to apply it to their missions: the Defense Department created a chief digital and artificial intelligence office, the Defense Information Systems Agency plans to add generative AI to its 2023 tech watchlist, and the White House released an AI Bill of Rights blueprint.
While these efforts have set the stage, the rapid development of the large language models (LLMs) and transformer architectures behind tools like ChatGPT requires more specialized initiatives. Agencies responsible for the well-being and security of U.S. citizens and their data must tread carefully before any widespread implementation.
Proceed with caution
In today’s increasingly interconnected world, agency leaders have begun investing in solutions to protect their virtual assets. ChatGPT is one of many tools that can bolster security, and its generative capabilities can offer benefits that standard solutions do not.
Agencies can use most AI programs for threat detection, but ChatGPT can take security further by providing recommendations to address vulnerabilities. Additionally, it can enable security teams to respond proactively to data breaches and rapidly address cyberattacks.
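As a concrete illustration, consider how a security team might ask a GPT model for remediation guidance on a scanner finding. The snippet below is a minimal sketch using OpenAI’s Python SDK; the model name, prompt and sample finding are illustrative assumptions rather than an approved agency workflow, and no sensitive data should ever be placed in such a prompt.

```python
# Minimal sketch: asking an OpenAI model to suggest remediation steps for
# a vulnerability finding. Illustrative only, not an approved federal
# workflow. Requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder finding; never paste classified or sensitive scan data.
finding = "OpenSSH on an internet-facing host still allows password login."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Suggest prioritized, "
                    "concrete remediation steps for the reported finding."},
        {"role": "user", "content": finding},
    ],
)

print(response.choices[0].message.content)
```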
Beyond threat monitoring, ChatGPT can help draft and improve security policies and support cybersecurity education. By producing realistic simulations of cyber threats and giving feedback on user responses, it can help government leaders better understand the risks that employee behavior poses.
In addition to internal processes, institutions like the Central Intelligence Agency are considering ChatGPT as a tool to expand their knowledge of adversaries’ AI and machine learning capabilities.
On the flip side, ChatGPT can also present security risks: threat actors have already attempted jailbreaks in hopes of using it to create malware and facilitate cyberattacks.
There is also the issue of data privacy. ChatGPT, by design, continually learns from the information provided to it. Any data entered, whether personally identifiable information (PII) or the location of overseas troops, can become part of ChatGPT’s reference pool. For agencies handling highly classified information or citizen PII, using ChatGPT risks letting malicious actors access sensitive data or identify and track citizen activity, raising serious privacy concerns and potentially making citizens hesitant to interact with government agencies.
ChatGPT offers undeniable cybersecurity benefits, but adopting the technology without understanding potential dangers can cause widespread harm. Agencies must learn the technology and understand its risks before implementation to effectively protect against threats.
Accounting for ChatGPT’s limitations
In recent years, the government has emphasized the importance of improving the citizen experience (CX) to increase public trust. Achieving this goal is often challenging due to the complex and bureaucratic nature of federal operations, but ChatGPT offers a relatively accessible way to provide personalized support to citizens.
For instance, citizens who want to learn whether they are eligible for a government program can use ChatGPT to find accurate, customized information, saving time and improving their interaction with the agency.
Another valuable service is using ChatGPT to build chatbots that answer common questions about agency programs, guide citizens through enrolling in or changing federal plans, and help with claims and issues, freeing the agency’s human workforce to assist citizens with more in-depth questions or problems.
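To make this concrete, here is a minimal sketch of such a chatbot built on OpenAI’s Python SDK. The model name, system prompt and escalation rule are illustrative assumptions, not a production agency design; a real deployment would add authentication, logging and human-handoff infrastructure.

```python
# Minimal sketch of a citizen-facing FAQ chatbot on the OpenAI Python SDK.
# Illustrative only: the model name, prompt and escalation rule are
# assumptions, not any agency's actual design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You answer common questions about this agency's programs and "
    "enrollment steps. If you are unsure, or the question involves a "
    "specific case or personal data, say so and refer the citizen to a "
    "human representative instead of guessing."
)

# Keep the running conversation so follow-up questions have context.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(question: str) -> str:
    """Send one citizen question and return the model's answer."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("How do I check whether I qualify for this benefit program?"))
```

Note how the system prompt tells the model to hand uncertain or case-specific questions to a person; that escalation rule is what frees staff for the in-depth work described above.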
But while these processes can enhance CX efforts, agencies must account for several risks.
Asking ChatGPT a question with unusual wording may produce a response that reads as awkward or angry or, worse, is incorrect. If this is a citizen’s only interaction with the agency, they may walk away annoyed or unsatisfied with the information, decreasing their trust in the government.
Furthermore, ChatGPT cannot understand bias. It generates the most probable answer based on the information in its dataset, so if that dataset is biased or incorrect, its responses may perpetuate discrimination, with severe consequences if deployed on a large scale.
Integrating new technology into the delicate CX process requires the utmost care. Any agency looking to benefit from this powerful tool must strategically account for the limitations and risk factors specific to its organization.
What’s next?
The most important factor when dealing with any new technology is educating ourselves on its capabilities and limitations.
Taking a careful, measured approach when implementing ChatGPT will allow developers and lawmakers the time needed to examine the potential impacts and prepare for integration. Moreover, it gives agencies time to practice working with the technology.
While ChatGPT isn’t a tool that can be deployed overnight, with the right time and effort it will undoubtedly help the government create secure, citizen-centric processes for all.
Brad Mascho is an AI Leader at Empower AI, as well as co-chair of the WashingtonExec AI Council and a member of both the AFCEA Technology Committee and the Executive Mosaic AI Group.