For all its nascent benefits, AI has the potential to be abused and misused through human folly and hubris in the long run

Tech billionaire Elon Musk has famously said: “The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea—which is fundamentally flawed.”

Musk noted how an AI gaming system, AlphaGo Zero, had learned to beat humans at the game of Go purely through self-play, without studying a single archived human game. The quirky billionaire insisted that he is “not normally an advocate of regulation and oversight—but this is a case where you have a very serious danger to the public… And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight?”

While the benefits of AI currently outweigh the risks, if complacency and hubris prevent the world from regulating intentional and unintentional abuse and misuse, could Musk’s dire predictions play out for real?

Let us review unintentional AI mishaps that have already occurred.

10 scary incidents and trends of AI dangers

Remember the Flash Crash of 2010? Here is a recap: a London-based trader used a ‘spoofing’ algorithm to place orders to sell thousands of stock index futures contracts, betting that the market would fall. He never intended the sales to go through; the plan was to cancel the orders at the last second and buy the contracts back at the lower prices his phantom sell-side pressure had created. The stunt triggered high-frequency trading (HFT) algorithms to launch one of the biggest stock sell-offs in history, briefly wiping more than US$1tn off markets worldwide.
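
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the kind of heuristic a market-surveillance system might use to flag spoofing: an account that places sizeable orders and cancels nearly all of them before execution. The Order structure, the threshold values and the flag_possible_spoofing function are illustrative assumptions, not any real exchange’s surveillance logic.

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader_id: str
    side: str        # "buy" or "sell"
    quantity: int
    status: str      # "filled" or "cancelled"

def flag_possible_spoofing(orders, min_orders=20, cancel_ratio=0.95):
    """Flag traders whose orders are almost always cancelled.

    A very high cancel-to-placement ratio is one classic (if crude)
    spoofing signal: the orders create fake supply or demand that
    the trader never intends to execute.
    """
    placed, cancelled = {}, {}
    for o in orders:
        placed[o.trader_id] = placed.get(o.trader_id, 0) + 1
        if o.status == "cancelled":
            cancelled[o.trader_id] = cancelled.get(o.trader_id, 0) + 1

    return [
        trader
        for trader, n in placed.items()
        if n >= min_orders and cancelled.get(trader, 0) / n >= cancel_ratio
    ]

# Toy usage: trader T1 cancels 49 of 50 large sell orders.
orders = [Order("T1", "sell", 2000, "cancelled") for _ in range(49)]
orders.append(Order("T1", "sell", 2000, "filled"))
orders += [Order("T2", "buy", 100, "filled") for _ in range(30)]
print(flag_possible_spoofing(orders))   # ['T1']
```

Real surveillance systems are far more sophisticated, but the point stands: the pattern is simple enough that both the abuse and its detection can be automated, which is exactly how the 2010 cascade became possible.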

Other trends of note include:

  1. HFT algorithms at one trading firm mistakenly executed 4m trades in 397m shares in just 45 minutes. The volatility created by this computer error left the firm with a US$460m loss, virtually overnight.
  2. Autonomous military technology allows soldiers to engage the enemy without being physically present in the battle zone. If and when hackers penetrate such AI systems and turn the weapons against their owners, the results would be a nightmare. Furthermore, since every major power has military AI research and development capabilities, an arms race to build, and to poison, “super-intelligent” military technology could already be under way. And would official regulations dictating ethical AI development even apply at the level of national defence?
  3. Even Pope Francis has warned that AI has been used to “circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.” If mankind’s so-called technological progress were to become an enemy of the common good, he cautioned, “this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”
  4. AI can never be completely objective, according to top data scientists. One of them, Timnit Gebru, has noted that the root of AI bias is social rather than technological, and considers AI developers “some of the most dangerous people in the world, because we have this illusion of objectivity.”
  5. Deepfakes and AI-driven bot attacks are already well known and well feared. At the extreme end of cybercriminal leverage of AI technology, major critical infrastructure and economic systems could be made to collapse, creating a domino effect that ripples across the globe.
  6. With ubiquitous surveillance becoming commonplace, especially in government-sanctioned use cases, there is the Murphy’s Law possibility that the data will be abused for human-rights and privacy violations, and much more.
  7. Humans, perennially faced with intractable problems, can become addicted to the lure of AI automation and fall prey to the delusion of techno-solutionism. Various experts and environmental groups have voiced concerns over approaches that reach for techno-fixes as solutions, warning that these are “misguided, unjust, profoundly arrogant and endlessly dangerous.”
  8. In the book “Heartificial Intelligence: Embracing Our Humanity to Maximize Machines”, author John C. Havens provides anecdotal evidence that AI’s role in job creation, disruption and losses is less rosy than proponents claim. The fields of law and accounting are ripe for AI automation that could make human auditors, accountants and paralegals unnecessary.
  9. The AI100 report by Stanford University’s Institute for Human-Centered Artificial Intelligence states that “AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage. Disinformation poses serious threats to society, as it effectively changes and manipulates evidence to create social feedback loops that undermine any sense of objective truth. The debates about what is real quickly evolve into debates about who gets to decide what is real, resulting in renegotiations of power structures that often serve entrenched interests.”
  10. Similarly, the growing use of AI in healthcare and medical research has proven benefits, but the AI100 report cautions that because the “exact long-term effects of algorithms in healthcare are unknown, their potential for bias replication means any advancement they produce for the population in aggregate—from diagnosis to resource distribution—may come at the expense of the most vulnerable.” New AI algorithms shown to outperform doctors at diagnosing disease have turned out to be biased by the data sets used for deep learning; the sketch after this list illustrates how such bias can arise.

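As flagged in item 10, here is a minimal, hypothetical Python sketch of how data-set bias replicates itself: a classifier trained on data in which one patient group is heavily under-represented makes more errors on that group, even though nothing in the code mentions the group at all. The synthetic make_patients data, the group shift and all numbers are assumptions for illustration only, not results from any real medical system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Synthetic 'patients': two features; the true decision
    boundary differs between groups (controlled by `shift`)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: 95% group A, only 5% group B (under-represented).
XA, yA = make_patients(950, shift=0.0)
XB, yB = make_patients(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([XA, XB]),
                                 np.concatenate([yA, yB]))

# Evaluate on balanced held-out sets, one per group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_patients(2000, shift)
    print(f"{name}: accuracy {model.score(Xt, yt):.2f}")
# Typical output: group A scores markedly higher than group B,
# because the decision boundary is fitted almost entirely to A.
```

The model is never told which group a patient belongs to; the disparity emerges purely from who was, and was not, well represented in the training data, which is exactly the “bias replication” mechanism the AI100 report warns about.
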
The 100-year study on AI concludes that preconceived human notions of AI can feed “a dangerous illusion of technological determinism”. This is what Rensselaer Polytechnic Institute professor Langdon Winner calls “technological somnambulism”: the notion that people approach technology passively and apathetically (as if sleepwalking) and are unable to recognize the larger implications of technology, which can ultimately lead to ‘unmindful’ decisions and unintended consequences.