In a world of generative AI tools such as GPT-4, organizations in Asia Pacific should be gearing up to combat the growing potential of disinformation threats.
Blackbird.AI, a narrative and risk intelligence solution used by Fortune 500 organizations and governments, has recently launched its regional HQ in Singapore.
Considered the fastest, highest-resolution early warning system for actionable insights that help organizations make informed decisions against harmful online threats in real time, Blackbird.AI's entry into the region means APAC organizations across the public and private sectors will be able to access technology that empowers them to tackle disinformation threats more effectively.
This is timely: public skepticism and fear are on the rise, as reflected by the rapid increase in search interest related to misinformation and fake news across APAC markets, as well as rising concerns that technologies such as ChatGPT could be used to proliferate fake news.
CybersecAsia had the privilege of gathering more insights into the rising tide of AI, disinformation and misinformation from Wasim Khaled, CEO and co-founder, Blackbird.AI.
Wasim has consulted and advised government agencies and Fortune 500 companies on the risks and mitigation of escalating information warfare. A computer scientist by trade, he has studied information operations, computational propaganda, behavioral science, AI and the applications of these combined disciplines to defense, cyber and risk intelligence.
Why are disinformation and misinformation growing cyberthreats that leaders of organizations in APAC need to pay attention to?
Wasim Khaled: Misinformation and disinformation are among the biggest threats facing organizations today. In fact, 87% of business leaders agree that the spread of disinformation is one of the greatest reputational risks to businesses today, costing the global economy billions of dollars every year.
Disinformation for hire is becoming a booming business and the problem is only going to grow. In the past decade, organizations have focused on building their brand’s reputation – and now, they’ll need to work hard to maintain it. This includes implementing strategies to fight misinformation, disinformation and fake news as well as gaining a clear understanding of the drivers behind certain disinformation campaigns. It will not only help brands steer clear of this digital minefield, but will also help safeguard decades of goodwill.
What do these threat actors want and what is in it for them?
Wasim Khaled: Disinformation is often used to polarize and manipulate messaging on hot-button topics such as climate change or vaccination, with the intent to influence social perception and consumer behavior.
The goals here are diverse, but broadly aimed at increasing public scrutiny, influencing perception and eroding confidence at scale. The motivations behind them are typically financial, political or even personal. An attack may target a company, an entire sector, or even something as specific as a single ingredient in a supply chain. Sometimes it comes from fringe networks and websites such as 4chan and 8chan; at other times it could be a nation-state trying to shift public perception against a particular government and its policies.
What advice do you have for companies wanting to detect and prevent disinformation?
Wasim Khaled: Risk and communications teams need to be able to identify toxic narratives before they surface, and know what to do with that information as part of their mitigation strategy. Speed to insight is key, especially when a potential crisis is looming. Often, teams of analysts would spend 800 hours a week reading 200,000 words a day without ever seeing invisible threats such as bot-driven and other anomalous activity.
Comparatively, tapping advanced technology such as AI can help reduce tedious manual tasks and rapidly improve response times. For instance, Blackbird's Constellation Dashboard uses AI at scale to process billions of posts across social media, news and the dark web, extracting five categories of signals that act as a ranking mechanism for harmful information. These five risk signals are:
- Narratives, which are evolving conversations or storylines
- Networks, which are relationships between users and the concepts they share
- Cohorts, which are communities of like-minded individuals or tribes
- Manipulation, which distinguishes between authentic and inauthentic behavior
- Deception, which covers active hoaxes that can impact user perceptions
We process all five signals in tandem and in real time to generate a composite risk index that helps predict harmful narratives, and protect against them, before they can damage an organization or manipulate public perception. This access to real-time insights enables organizations to take immediate action against potential online threats.
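To make the idea of a composite risk index concrete, here is a minimal sketch of combining per-signal scores into one ranking value. The five signal names follow the interview; the 0–1 scoring scale, the weights and the weighted-average aggregation are illustrative assumptions, not Blackbird.AI's actual algorithm.

```python
# Hypothetical composite risk index over the five narrative-risk signals.
# Weights and scale are illustrative assumptions, not Blackbird.AI's method.

SIGNALS = ("narratives", "networks", "cohorts", "manipulation", "deception")

# Illustrative weights summing to 1.0; a real system would tune these.
WEIGHTS = {
    "narratives": 0.25,
    "networks": 0.20,
    "cohorts": 0.15,
    "manipulation": 0.25,
    "deception": 0.15,
}

def composite_risk(scores: dict) -> float:
    """Weighted average of per-signal scores, each expected in [0, 1]."""
    for name in SIGNALS:
        s = scores.get(name, 0.0)
        if not 0.0 <= s <= 1.0:
            raise ValueError(f"score for {name!r} must be in [0, 1]")
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in SIGNALS)

# Example: a post with strong manipulation markers and a spreading storyline.
post_scores = {
    "narratives": 0.8,    # storyline gaining traction
    "networks": 0.6,      # tightly linked sharing accounts
    "cohorts": 0.4,       # moderate community clustering
    "manipulation": 0.9,  # strong inauthentic-behavior markers
    "deception": 0.7,     # resembles a known hoax
}
print(round(composite_risk(post_scores), 3))  # → 0.71
```

A weighted average is just one way to fuse signals; in practice such a ranker might instead be a learned model, but the effect is the same: content scoring high across several signals is surfaced first for analysts.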
Once a brand understands the source of its problem, it can identify how online threats are being spread, both on the surface and under the radar. By understanding the pattern, the company can swiftly identify the channels used to target specific audiences and where to focus its efforts, so it can get in front of the misinformation more quickly and effectively. Instead of adopting a wait-and-see approach, brands can now take a more proactive stance against reputational and financial risk.
Why was Blackbird.AI founded and how does it work?
Wasim Khaled: Current media monitoring tools that were built to support marketing and customer support teams are really for social listening and are insufficient to identify the root sources and drivers of disinformation-based threats. In the words of a client, current methods are like bringing a knife to a gunfight. Organizations need technology that is purpose-built to understand new risks in an evolving digital media ecosystem.
Blackbird.AI is the world’s first commercially available AI-driven risk intelligence solution for effectively managing and resolving harmful online threats and the reputational risks organizations face. It offers new, scalable technologies, empowering public and private sector organizations in APAC to quickly and effectively tackle regional disinformation. Essentially, our solution transforms reactive crisis management into a proactive resilience practice and serves as an insurance policy against reputational and financial risk.
In the cybersecurity world of “AI versus AI”, what are some things we need to be aware of and concerned about with regards to AI?
Wasim Khaled: With the rise of generative AI technologies like GPT-3, DALL-E and deepfakes, it is becoming increasingly difficult to tell what is genuine and what is manufactured. The recent ChatGPT craze has generated much buzz on the internet – and like any other technological tool, it can be a catalyst for both good and evil.
For example, AI systems like ChatGPT could potentially be used to manipulate public opinion and erode trust in institutions and organizations by quickly generating and disseminating large amounts of false but convincing-sounding information.
On the other hand, we can also tap into generative AI capabilities to better combat disinformation. For instance, Blackbird.AI has just announced RAV3N Copilot, a generative AI-powered solution for Narrative Intelligence and Rapid Risk Reporting that enables unparalleled workflow automation during mission-critical crisis scenarios.
RAV3N Copilot, powered by Blackbird’s Constellation Risk Engine and the company’s generative AI large language model, will become a transformative must-have for corporate and threat intelligence professionals, force-multiplying their talents and enabling them to get more done in critical, time-sensitive scenarios than ever before. With its introduction, the insights surfaced by Blackbird’s Constellation Platform can be directly used to auto-generate executive briefings, key findings and even mitigation steps, freeing up teams to focus on applying their subject matter expertise.
To navigate this new landscape, brands need to identify emerging risks within narratives, such as toxic language, hate speech and bot behavior, as well as build heat maps of the communities of like-minded actors driving adversarial engagement across a topic or organization.