AI has the potential to transform our daily lives, but appropriate regulations must be in place to ensure its ethical, responsible development and use.
We have all witnessed the wonders of generative AI: it can automate tedious tasks and deliver answers faster than a Google search. For good or for ill, there is no denying how AI has transformed our lives. However, only with appropriate restrictions and safeguards against malicious use will this tool be fit for widespread use.
“AI has already shown its potential and has the possibility to revolutionize many areas such as healthcare, finance, transportation and more. It can automate tedious tasks, increase efficiency and provide information that was previously not possible. AI could also help us solve complex problems, make better decisions, reduce human error or tackle dangerous tasks such as defusing a bomb, flying into space or exploring the oceans. But at the same time, we see massive use of AI technologies to develop cyber threats as well,” said Rebecca Law, Country Manager, Singapore at Check Point Software Technologies.
Such misuse of AI has been widely reported in the media, with several reports of cybercriminals leveraging ChatGPT to help create malware.
AI to surpass humans?
The development of AI is not just another passing craze, though it remains to be seen how positive or negative its impact on society will be. And although AI has been around for a long time, 2023 will be remembered by the public as the “Year of AI”. However, there is still a great deal of hype around the technology, and some companies may be overreacting to it. We need to have realistic expectations and not see AI as an automatic panacea for all the world’s problems.
We often hear concerns about whether AI will approach or even surpass human capabilities. Predicting how advanced AI will become is difficult, but researchers already distinguish several categories. Current AI is referred to as narrow or “weak” AI (ANI – Artificial Narrow Intelligence). General AI (AGI – Artificial General Intelligence) would function like the human brain, thinking, learning and solving tasks the way a human does. The last category, Artificial Super Intelligence (ASI), describes machines that are smarter than us altogether.
If artificial intelligence reaches the level of AGI, there is a risk that it could act on its own and potentially become a threat to humanity. Therefore, we need to work on aligning the goals and values of AI with those of humans.
Rebecca added: “To mitigate the risks associated with advanced AI, it is important that governments, companies and regulators work together to develop robust safety mechanisms, establish ethical principles and promote transparency and accountability in AI development. Currently, there are minimal rules and regulations in place. There are proposals such as the AI Act, but none of these have been passed, and essentially everything so far is governed by the ethical compasses of users and developers. Depending on the type of AI, companies that develop and release AI systems should guarantee at least minimum standards for privacy, fairness, explainability and accessibility.”
The good and the bad
Unfortunately, AI can also be used by cybercriminals to refine their attacks: automatically identifying vulnerabilities, creating targeted phishing campaigns, conducting social engineering, or building advanced malware that can change its code to better evade detection. AI can also be used to generate convincing audio and video deepfakes, which can serve political manipulation, provide false evidence in criminal trials, or trick users into handing over money.
But AI is also a crucial aid in defending against cyberattacks. For example, Check Point uses more than 70 different tools to analyse threats and protect against attacks, more than 40 of which are AI-based. These technologies help with behavioural analysis and with analysing large amounts of threat data from a variety of sources, including the darknet, making it easier to detect zero-day vulnerabilities or automate the patching of security flaws.
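To make “behavioural analysis” concrete, here is a minimal sketch of the underlying idea: train an unsupervised anomaly detector on known-benign activity, then flag sessions that deviate from that baseline. This is an illustrative example only, not Check Point’s actual tooling; the feature set, the sample data and the choice of scikit-learn’s IsolationForest are all assumptions.

```python
# Minimal sketch of AI-based behavioural analysis for threat detection.
# Illustrative only: the features and data below are hypothetical, and
# this is not how any particular vendor's tooling works.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session network features:
# [bytes_sent, bytes_received, duration_seconds, distinct_ports_contacted]
baseline_sessions = np.array([
    [1_200, 8_500, 12.0, 2],
    [900,   7_800, 10.5, 1],
    [1_500, 9_200, 14.0, 2],
    [1_100, 8_100, 11.0, 2],
    [1_300, 8_900, 13.5, 1],
])

# Train an unsupervised anomaly detector on known-benign behaviour.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_sessions)

# Score new sessions: a prediction of -1 flags behaviour that deviates
# from the baseline, e.g. a session uploading far more data than usual.
new_sessions = np.array([
    [1_250,  8_400,  12.5,  2],   # looks like normal traffic
    [95_000, 1_200, 300.0, 45],   # huge upload, many ports contacted
])
for session, label in zip(new_sessions, detector.predict(new_sessions)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{session.tolist()} -> {status}")
```

In a real deployment the baseline would be learned continuously from millions of events rather than a handful of hand-written rows, but the principle is the same: the model learns what “normal” looks like so that novel, previously unseen attack behaviour can still stand out.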
“Various bans and restrictions on AI have also been discussed recently. In the case of ChatGPT, the concerns relate mainly to privacy, as we have already seen data leaks, and the question of user age limits remains unaddressed. However, blocking such services has only a limited effect, as any slightly savvy user can get around the ban, for example by using a VPN, and there is also a brisk trade in stolen premium accounts. The problem is that most users do not realise that the sensitive information entered into ChatGPT would be very valuable if leaked and could be used for targeted marketing purposes. We are talking about potential social manipulation on a scale never seen before,” points out Rebecca.
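One practical mitigation for the leak risk Rebecca describes is to scrub obviously sensitive data from prompts before they ever reach an external AI service. The sketch below is an assumed, minimal approach using regular expressions; the patterns and the redact helper are hypothetical and nowhere near exhaustive, but they illustrate the idea.

```python
# Illustrative sketch: scrub obvious sensitive data from a prompt before
# sending it to an external AI service. The patterns are assumptions and
# far from exhaustive; real deployments would use dedicated PII tooling.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d{8,15}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for tag, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))
# -> Contact [EMAIL], card [CREDIT_CARD].
```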
The impact of AI on society will depend on how we choose to develop and use this technology. It will be important to weigh the potential benefits against the risks while striving to ensure that AI is developed in a way that is responsible, ethical and beneficial for society.