Could blockchain-based timestamping provide the authentication urgently needed to verify the credibility of content disseminated via social media and messaging platforms?
Fake news, disinformation, and misinformation: we have been dealing with them more and more over the past few years, as social media and messaging platforms have become the fastest and largest channels for sharing information.
In 2019, Twitter bought a UK-based AI startup to help it fight fake news proliferating on its platform using graph deep learning technology. But, given the viral nature of such content, few social media and messaging platforms have been able to contain the spread of misinformation and disinformation over the years, lending weight to Gartner’s 2017 prediction that, by 2022, most people in mature economies would consume more false information than true information, mainly via social media platforms.
Distributed ledger technologies – such as blockchain – enable privacy, security, and trust via a decentralized peer-to-peer (P2P) network without any central managing authority. How would that help with ascertaining the credibility of content, creators, sources, date and time?
CybersecAsia sought some clarifications and insights from Frank van Dalen, Partner, WordProof.
What are the most susceptible channels for the dissemination of fake news and misinformation? What are some examples of how this happens?
Frank van Dalen (FvD): The most susceptible channels for the dissemination of fake news and misinformation are often messaging platforms like WhatsApp, Telegram, and Facebook Messenger, as well as social media platforms like Facebook and Twitter. Information is easily recirculated by users on these platforms, often without first being verified or authenticated. It is an endless cycle in which users unknowingly spread misinformation to others who do the same, giving fake news enormous reach.
In Singapore, the older generations have been found to be more susceptible to fake news online, especially through social media platforms where algorithms recommend content based on users’ interests. The spread of misinformation over messaging platforms is concerning as well, given that it has already led to serious consequences. For instance, in Singapore, a 65-year-old retiree fell violently ill and was hospitalized after taking Ivermectin in the belief that it would protect her against COVID-19. A recent check by The Sunday Times also found at least 17 Telegram groups spreading COVID-19 misinformation, with group sizes ranging from 1,000 to 14,000 users. These numbers show the considerable reach of these messaging platforms and the ease with which misinformation spreads.
Within Asia, where misinformation runs rampant as people turn to social media as their main source of news, we have seen fake news incite violence, as in Myanmar and Sri Lanka. In Myanmar it fuelled violence against the Rohingya minority, while in Sri Lanka it triggered communal violence between the Buddhist majority and the Muslim minority. Misinformation has also inflamed major political events in the region, such as the elections in Indonesia.
To combat the propagation of misinformation and fake news, what are some technologies that social media and news platforms can leverage?
FvD: Social media and news platforms can leverage technologies such as timestamping to combat fake news and misinformation on their platforms. Timestamping is built on blockchain technology, and its decentralized nature ensures accountability and transparency on all sides. Through timestamping technologies such as the WordProof Timestamp Ecosystem, publishers add an additional layer of trust to their content. It not only allows publishers to claim ownership of their content but also grants transparency over alterations, building a safer and more trustworthy internet.
Within timestamping, there are also tier levels, which play a crucial role in the technology. When the reputation of content creators is tied to a timestamp, publishers are able to put only the most trusted content at the forefront of their websites. This helps control what information is presented to readers, thus influencing the viral impact of the information.
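To illustrate the principle behind content timestamping, the minimal sketch below hashes an article together with its author and publication date, standing in for the hash a service like WordProof would anchor on a blockchain. The function names and record structure are illustrative assumptions, not WordProof's actual implementation; the point is simply that any later alteration of the text changes the hash and is immediately detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_timestamp_record(content: str, author: str, published_at: str) -> dict:
    """Bundle content with its metadata and compute a hash over the whole bundle.

    In a real timestamping service this hash (not the content itself) would be
    anchored on a blockchain; here we simply return the record. Illustrative only.
    """
    bundle = {"content": content, "author": author, "published_at": published_at}
    digest = hashlib.sha256(json.dumps(bundle, sort_keys=True).encode("utf-8")).hexdigest()
    return {"bundle": bundle, "hash": digest}

def verify(record: dict, current_content: str) -> bool:
    """Re-hash the stored metadata with the content as it appears now.

    Any edit to the text changes the hash, flagging the alteration.
    """
    bundle = dict(record["bundle"], content=current_content)
    digest = hashlib.sha256(json.dumps(bundle, sort_keys=True).encode("utf-8")).hexdigest()
    return digest == record["hash"]

# Example: timestamp an article, then detect a later edit.
record = make_timestamp_record(
    "Ivermectin is not an approved COVID-19 treatment.",
    author="Example News Desk",
    published_at=datetime.now(timezone.utc).isoformat(),
)
print(verify(record, "Ivermectin is not an approved COVID-19 treatment."))  # True
print(verify(record, "Ivermectin protects against COVID-19."))              # False
```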
How should governments, social media networks and content providers work together to prevent the spread of fake news and misinformation?
FvD: Governments can take action by making conscious efforts to debunk misinformation and increase awareness of fake news. A great example is what the Singapore government has done about the spread of pandemic-related misinformation. On its official website, it compiled fake news and clarifications from the affected organizations, debunking the falsehoods for the general public. This gave Singaporeans a trusted source of truth amid the influx of noise.
But what might prove to be most useful in the fight against misinformation and fake news is encouraging the adoption of the right technologies. Plugins and timestamps, for instance, ensure the credibility of content and its creators, allowing consumers to easily differentiate credible and trustworthy sources from fake news sources.
The next steps for publishers and social media networks would then be to integrate plugins and timestamps into their search engine algorithms and platforms. With timestamps implemented across the board, any information that is shared and has the potential to go viral is accounted for and has transparent sources. Ultimately, this limits the spread of misinformation and adds a layer of trust to the Internet.
On top of this, social media networks can follow in the footsteps of Twitter and spotlight credible information, allowing it to be easily found on their platforms. Enhancing misinformation policies can also make all the difference. For example, clearly stating a ban on information with no credible source, or information that contradicts well-known, established research and sources, can drastically reduce the spread.
For publishers, building more third-party fact-checking websites across the region can help ensure the authenticity of news received. Putting an end to misinformation is not a fight meant for one party alone. Collective action is needed from publishers, social media networks and governments to prevent the continuous spread of fake news and misinformation.
With the integration of tier levels on social media and news platforms, consumers would be able to refuse to view content below a certain tier level, moving the onus onto these platforms to ensure all content reaches the tier level accepted by consumers. Through the integration of tier levels and timestamps, the damaging viral effect of fake news and tampered or fraudulent information can be avoided.
What can users of social media and consumers of online content do to avoid becoming unwitting propagators of misinformation?
FvD: To avoid becoming unwitting propagators of misinformation, users and consumers should ensure that their sources are credible and that they make fact-checking across multiple sources a habit.
They should also prioritize platforms, search engines and e-commerce environments that use technologies like blockchain-based timestamps when fact-checking. Timestamps help enforce accountability and transparency on all sides, including that of users and news consumers.
What if a usually reliable source (governments or state-owned media or paid/disguised editorials in large news media) issues fake news (for political or economic purposes)? How does timestamping help?
FvD: Timestamps should not be used to limit freedom of speech. However, freedom of speech does not mean freedom of reach. Timestamps help to hold sources of publications accountable.
In the foreseeable future, reputation will become part of the mechanisms for evaluating content and actors on the internet. When reliable sources begin losing their reputation because they push fake news or other forms of misinformation, this will affect their tier level. To obtain reach via social media platforms and search engines, higher tier levels are needed. The tier-level score will also be made visible in the timestamp certificate, so that it is easy to read and available to the general audience directly via the publication.
Currently, reputation mechanisms are being developed so that public opinion plays a part in establishing reputation levels. To secure validity, this mechanism is also backed by timestamps, which reinforce accountability and transparency for review mechanisms.
Cybercriminals are also manipulating timestamps in so-called Chronos attacks, and research has found that 41% of financial institutions have observed the manipulation of timestamps to alter the value of capital or trades. Does this apply to fake news too?
FvD: There are several forms of fraud:
- Adjusting timestamps in a centralized database: since blockchain is decentralized, this kind of fraud is not feasible.
- Adjusting the timestamps (publication dates) of web content that has already been published: WordProof uses real-time, automated timestamp mechanisms in which the actual time is included in the timestamp and the publication date is part of the hashed data. If someone were to alter the metadata in the website’s database, the publication date shown on screen would no longer match the timestamped record, raising red flags as to its authenticity (see the sketch after this list).
- Altering a timestamped document whose date is part of its metadata will change the hash, raising a red flag as to the authenticity of the timestamp.
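Building on the earlier sketch, the snippet below illustrates the second and third forms of fraud: because the publication date is part of the hashed data, back-dating an article in a website's database (or editing a timestamped document) produces a hash that no longer matches the anchored record. The record structure and field names are again illustrative assumptions rather than WordProof's actual format.

```python
import hashlib
import json

def bundle_hash(bundle: dict) -> str:
    """Hash an article bundle (content plus metadata) deterministically."""
    return hashlib.sha256(json.dumps(bundle, sort_keys=True).encode("utf-8")).hexdigest()

# The bundle as it was originally published and (hypothetically) anchored on-chain.
original = {
    "content": "Vaccines remain effective against severe illness.",
    "published_at": "2021-10-03T08:14:12+00:00",
}
anchored_hash = bundle_hash(original)

def check(page: dict) -> str:
    """Recompute the hash from the page as it is served now and compare it with
    the anchored hash; any change to the text or the publication date breaks
    the match and exposes the manipulation."""
    return "authentic" if bundle_hash(page) == anchored_hash else "content or publication date altered"

# Back-dating the article in the CMS, without touching the text, is still caught:
backdated = dict(original, published_at="2020-01-15T09:00:00+00:00")
print(check(original))   # authentic
print(check(backdated))  # content or publication date altered
```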
While timestamp manipulation could occur with fake news, the decentralized nature of blockchain, coupled with the real-time automated timestamp mechanisms WordProof employs, would make such manipulation attempts easily identifiable. Timestamp manipulation within fake news may not see much use either, given that people already recirculate fake news without first verifying or authenticating its sources. The rush of being the first to share interesting information, and having one’s beliefs reinforced by fake news, is enough for people to hit the share button regardless of the authenticity of such news.
However, with the implementation of tier levels in the foreseeable future, particularly if the European Union (EU) follows up GDPR with trusted-web rules that set global standards, search engines and social media platforms will be able to filter the content that is shown to users. Web users could also use browser filters that block such content, or at the very least warn them, when the publications or websites they are visiting fall below a certain tier-level threshold.