The growing momentum of disinformation within cyber warfare
Deploying coordinated disinformation and influence campaigns as a means of cyber warfare is becoming an increasingly popular tactic for nefarious agents looking to target the public sector.
Today, federal defense agencies and information technology professionals have many tools at their disposal to combat cyberattacks from foreign and domestic threats. Deploying coordinated disinformation and influence campaigns as a means of cyber warfare is becoming an increasingly popular tactic for nefarious agents looking to target the public sector. Widely accessible text-generating AI models like GPT-3 are growing in popularity and can quickly produce convincing false narratives. When deployed by bad actors, these tools can expand the reach and effectiveness of disinformation campaigns. Now is the time for defense agencies to prioritize programs that can spot and react to these threats just as they do with more tangible cyberattacks.
Taking action
More than half (53%) of U.S. adults report getting their news from social media ‘often’ or ‘sometimes,’ according to a September 2020 Pew Research Center study. Online authors — bloggers, media outlets, social media users — are now go-to, trusted news sources for many Americans and need to be treated as such by defense professionals analyzing online sentiment and tracking the progression of false narratives. Growing trust in and reliance on social media as a news source is on a collision course with the broadening availability of tools that enable advanced, intelligent disinformation campaigns. As the public becomes increasingly desensitized to nefarious efforts online, federal defense agencies need to play a more active role in monitoring for and acting against these campaigns, particularly when they target government, defense, and public sector entities.
The defense community must integrate technology platforms that use AI and machine learning to combat bad actors through real-time analysis and reporting, covering both threats to systems and infrastructure and nefarious influence on the online communities citizens turn to for trusted information. Using semi-supervised machine learning, these monitoring platforms can detect disinformation and flag unusual activity. Over time, the algorithms learn from the communities they analyze, tracking new trends and behavior patterns and identifying the early suspicious conversations that may precede cyberattacks or influence campaigns. These platforms may also offer cluster analysis, which allows agencies to examine similarities among online communities spreading disinformation and to analyze the methods and effects of these attacks to better discern their origin.
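As a rough illustration of the cluster analysis described above, the sketch below groups posts that push near-identical wording, one common signal of coordinated amplification. It uses scikit-learn's TF-IDF vectorizer and DBSCAN clustering; the sample posts, the cosine-distance threshold, and the library choice are illustrative assumptions, not a description of any particular vendor's platform.

```python
# Minimal sketch (assumed approach): cluster near-duplicate posts,
# which can indicate coordinated pushing of a single narrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

# Hypothetical sample posts; a real platform would ingest live social data.
posts = [
    "Officials admit the water supply was compromised last night",
    "BREAKING: officials admit water supply compromised last night!!",
    "The water supply was compromised, officials finally admit it",
    "Great weather for the marathon this weekend",
    "Looking forward to the marathon, the weather looks great",
]

# Represent each post as a TF-IDF vector so textual similarity is measurable.
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)

# DBSCAN with cosine distance groups posts with near-identical wording;
# eps is an assumed threshold that would need tuning against real data.
labels = DBSCAN(eps=0.7, min_samples=2, metric="cosine").fit_predict(vectors)

for label, text in zip(labels, posts):
    tag = f"cluster {label}" if label >= 0 else "unclustered"
    print(f"[{tag}] {text}")
```

A production system would layer behavioral signals, such as posting cadence, account age and network overlap, on top of this kind of content clustering before flagging activity as coordinated.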
Threat to public opinion
As this type of disinformation spreads more broadly, the public becomes more susceptible to it. We’ve seen the effects of vaccine disinformation throughout the pandemic, along with other national disinformation campaigns, such as those casting doubt on election security. Sadly, U.S. citizens are all too familiar with hacks, leaks and disinformation campaigns targeting prominent figures, elections and government entities. The 2016 WikiLeaks release of hacked Hillary Clinton campaign emails, or the efforts to spread conspiracies and disinformation ahead of the January 6, 2021 attack on the Capitol, are just two prominent examples that may come to consumers’ minds when they think of disinformation. While campaigns such as these do not have a tangible target, like a utility grid or financial database, they should be considered just as dangerous. Their threat lies in their use of intelligent algorithms to influence not just the person or entity targeted but potentially thousands of social media users engaging with their content.
At best, everyday social media users who come into contact with orchestrated disinformation efforts retain false or misleading narratives that could skew their perception of an issue or event. At worst, users engage with, comment on and share these pieces of content, expanding the footprint of disinformation and making others vulnerable to false claims and inflammatory sentiment. In either case, with advanced AI-powered tools in the hands of nefarious agents, it can become nearly impossible for consumers to know whether they are interacting with bots and sock puppets spreading false information. Over time, the lasting impact of disinformation campaigns will be revealed through shifts in consumer sentiment, influencing how people engage with everything from political opinions to financial market outlooks, both online and in daily life. These efforts are not new by any means, but they are cause for concern and should be prioritized within national cyber defense and online analysis efforts.
Dan Brahmy is the co-founder and CEO of Cyabra, a GSA-certified SaaS platform that uses AI to measure impact and authenticity within online conversations.