How can we evaluate every morsel of so-called “information” we encounter online, on an Internet overrun by AI-powered information manipulation?
In recent years, AI-driven misinformation and disinformation campaigns have intensified amid geopolitical tensions.
The cost of running an AI-powered disinformation campaign is low compared to the massive societal impact it can have. A typical disinformation campaign follows four key stages: reconnaissance, content creation, amplification, and actualization.
Unlike traditional cyberattacks, which are driven by financial motives, these campaigns aim not to profit but to influence how people think and act (including how they vote).
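To make this lifecycle concrete, here is a minimal sketch (in Python; the names and tracking record are hypothetical, not from any established framework) of how a defender might model the four stages while tracking a suspected campaign:

```python
from dataclasses import dataclass, field
from enum import Enum

class CampaignStage(Enum):
    """The four key stages of a typical disinformation campaign."""
    RECONNAISSANCE = "reconnaissance"      # profiling target audiences and their fault lines
    CONTENT_CREATION = "content_creation"  # generating (often AI-assisted) misleading content
    AMPLIFICATION = "amplification"        # spreading content via bots, influencers, paid reach
    ACTUALIZATION = "actualization"        # the intended real-world effect (e.g., changed votes)

@dataclass
class SuspectedCampaign:
    """Hypothetical tracking record for a suspected influence campaign."""
    name: str
    observed_stages: list[CampaignStage] = field(default_factory=list)

    def record(self, stage: CampaignStage) -> None:
        if stage not in self.observed_stages:
            self.observed_stages.append(stage)

    def is_amplifying(self) -> bool:
        return CampaignStage.AMPLIFICATION in self.observed_stages

# Usage: note when a campaign moves from content creation to amplification.
campaign = SuspectedCampaign("example-election-narrative")
campaign.record(CampaignStage.CONTENT_CREATION)
campaign.record(CampaignStage.AMPLIFICATION)
print(campaign.is_amplifying())  # True
```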
Countering the information war
In the past, the main response to misinformation and disinformation campaigns was traditional “debunking”. However, debunking often made the problem worse by unintentionally amplifying the very claims being challenged, because it requires the debunker to repeat the false information.
Now, we are seeing the rise of prebunking, a proactive approach that involves informing the public (or specific target groups) about upcoming disinformation/misinformation efforts.
An example of effective prebunking occurred in Ukraine, where the government released credible evidence of an impending Russian attack. By making this evidence public ahead of time, the government helped thwart the planned attack.
To prebunk information manipulation campaigns effectively, it is essential to lay the groundwork with six key pieces of information (modeled in the sketch after this list):
- The current situation: explain the context
- The types of false information likely to be spread: identify the specific claims to watch for
- The prebunking strategy: detail the steps being taken to counteract the questionable claims
- The types of content to expect: educate the public on the forms disinformation may take (e.g., deepfakes, manipulated images)
- The motive(s) behind the campaign: explain why the misinformation/disinformation is being spread
- The reaction the bad actors hope to get: anticipate the emotional and psychological triggers that the campaigners seek to exploit
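As a minimal sketch, assuming a hypothetical structure and field names, the six elements could be captured as a “prebunking brief” that a communications team must complete before publishing:

```python
from dataclasses import dataclass

@dataclass
class PrebunkingBrief:
    """Hypothetical structure covering the six key pieces of prebunking information."""
    current_situation: str              # explain the context
    likely_false_claims: list[str]      # false information likely to be spread
    strategy: str                       # steps taken to counteract the questionable claims
    expected_content_types: list[str]   # e.g., deepfakes, manipulated images
    motives: list[str]                  # why the mis/disinformation is being spread
    desired_reactions: list[str]        # emotional/psychological triggers being exploited

    def is_complete(self) -> bool:
        """A brief is publishable only when every element is filled in."""
        return all([
            self.current_situation,
            self.likely_false_claims,
            self.strategy,
            self.expected_content_types,
            self.motives,
            self.desired_reactions,
        ])

# Usage with illustrative (entirely invented) content:
brief = PrebunkingBrief(
    current_situation="Election season amid heightened geopolitical tension",
    likely_false_claims=["Polling stations will be closed on election day"],
    strategy="Publish official polling-station hours through verified channels",
    expected_content_types=["deepfakes", "manipulated screenshots"],
    motives=["suppress voter turnout"],
    desired_reactions=["confusion", "distrust in the electoral process"],
)
print(brief.is_complete())  # True
```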
Preparing the public with this information can help individuals recognize and resist psychological manipulation. People should ask themselves: Why am I sharing this? Who wants me to share this? What message is this sending? Educating even a small segment of the population can significantly impact the outcome of such influence campaigns.
Business leaders who suspect their business may be targeted by misinformation or disinformation campaigns should consider moving beyond the current paradigm of checking for indicators of compromise and expanding their scope to cover mis/disinformation.
Cross-functional teams can cover the most ground within a business. Additionally, since such campaigns often target an entire industry, organizations within the same industry can pool resources to identify and prebunk the campaigns, or align in defense against an ongoing one.
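As an illustrative sketch only (the narrative patterns, names, and example post are hypothetical assumptions), a monitoring team could extend its existing indicator-matching pipeline with simple “indicators of narrative” checks:

```python
import re

# Hypothetical "indicators of narrative": patterns tied to known false claims
# about the business, shared across the industry (analogous to shared IoCs).
NARRATIVE_INDICATORS = {
    "fake-recall": re.compile(r"recall(ed|ing)? all (products|units)", re.I),
    "fake-breach": re.compile(r"(leaked|stolen) customer (data|records)", re.I),
}

def scan_post(text: str) -> list[str]:
    """Return the narrative IDs a social-media post appears to match."""
    return [name for name, pattern in NARRATIVE_INDICATORS.items()
            if pattern.search(text)]

# Usage: flag posts for the cross-functional team to review and prebunk.
post = "BREAKING: Acme is recalling all products after stolen customer records surfaced"
hits = scan_post(post)
if hits:
    print(f"Possible disinformation narratives detected: {hits}")
```

Sharing a table like NARRATIVE_INDICATORS across an industry group would mirror how threat-intelligence feeds share indicators of compromise today.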
The need for a post-trust approach to truth
In today’s world, where trust in information sources is increasingly fractured, we need a post-trust approach to truth.
Many mainstream media outlets are already trying this by providing enhanced fact-checking and transparent references. This model mirrors the Zero Trust approach to cybersecurity: never trust, always verify.
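Applying that principle to information, a reader (or a tool) could treat every claim as untrusted until independently corroborated. A minimal sketch, with a hypothetical corroboration threshold:

```python
def is_trustworthy(claim_sources: set[str], min_independent_sources: int = 2) -> bool:
    """Zero-Trust-style rule of thumb: never trust a claim by default;
    treat it as credible only once enough independent sources corroborate it."""
    return len(claim_sources) >= min_independent_sources

# Usage: a claim seen only on one anonymous account stays untrusted.
print(is_trustworthy({"anonymous-account"}))                # False
print(is_trustworthy({"news-wire", "official-statement"}))  # True
```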