Governments and privacy activists are racing to catch up with the emerging threats posed by generative AI and virtual reality technologies.
Due to the emergence of new technologies and evolving regulatory practices, the privacy landscape is undergoing transformation.
According to researchers at Kaspersky, major events in 2023 in the technological, social, economic and political spheres are shaping the privacy scene for 2024.
Anna Larkina, a security and privacy expert at the firm, has said: “In the era of evolving technologies, the notion of private data must extend beyond traditional boundaries. The advent of AI-enabled wearables, augmented reality/virtual reality developments, and the rise of AI bots necessitate a broader understanding of privacy. As these innovations become integral to daily life, our concept of personal data must encompass not only what we willingly share, but also the nuanced interactions and insights these technologies inherently possess.”
As such, here are four main factors that the firm predicts will affect the privacy landscape in 2024.
AI-enabled wearables may spark a fresh debate on privacy
While people have embraced devices like smartphones and smart assistants in their homes, wearables, especially those with cameras such as smart glasses or AI pins, tend to arouse more suspicion. Should they gain sufficient popularity, the overt recording capability of these devices could genuinely worry privacy-conscious individuals.
Privacy issues with Augmented Reality and Virtual Reality
As these two technologies are integrated into increasingly advanced applications, discussions around privacy will intensify, especially for technologies that have not yet been properly regulated, such as Mixed Reality (MR), where 3D digital content is made to appear spatially aware and responsive. User privacy is at risk because such technologies can “see” what users are doing and collect far more information about their behavior than, for example, social media networks or other forms of technology. If hackers gain access to an AR/VR/MR device, the potential loss of privacy is huge. There are also open questions: how do virtual reality content firms use and secure the information they gather from users? Do they store the data locally on the device or in the cloud? If the information is sent to a cloud, is it encrypted? Do they share this data with third parties, and if so, what disclosures must they make to users?
Smart bots may be doing more than help us
Generative AI bots and “virtual assistants” built on natural language processing (NLP) are becoming so advanced, and earning so much consumer trust, that users may overlook how the information these tools collect can be breached or abused. Conversely, a sophisticated bot assistant could also be used to seamlessly handle user calls, ensuring sensitive information such as the user’s voice is protected. Regardless of the pros and cons of smart bots, cybercriminals and fraudsters have already started to employ “passive listening” in smartphone applications and other tools to spy on users and steal personal information. More regulation will need to be imposed on privacy violations and protection in the technology world in 2024 and beyond.
Passkeys and advanced authentication systems will make leaked passwords less monetizable
The primary reason leaked passwords and other login credentials are feared is that they can be abused to break into accounts and harvest contact lists, stealing financial assets and spreading scams. In 2023, adoption of passkeys, zero-trust systems and multi-stage biometric challenge schemes was already picking up momentum. This year, expect password-based authentication to lose further ground.
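The core reason stolen credentials lose value under these schemes is that login becomes a challenge-response exchange rather than the replay of a static secret. The toy sketch below illustrates that idea; it is a simplified stand-in, not the actual FIDO2/WebAuthn protocol — real passkeys use asymmetric signatures, so the server stores only a public key, whereas here an HMAC key plays the role of the device-held credential. The `Device` and `Server` classes are hypothetical names for illustration.

```python
import hashlib
import hmac
import secrets

# Toy challenge-response login, sketching why intercepted credentials
# are hard to monetize. NOTE: real passkeys (FIDO2/WebAuthn) use
# public-key signatures; this HMAC version is only an illustration.

class Device:
    """Holds the secret credential; it is never sent during login."""
    def __init__(self):
        self.key = secrets.token_bytes(32)  # enrolled with the server once

    def respond(self, challenge):
        # Prove possession of the key by authenticating the fresh challenge.
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, key):
        self.key = key
        self.challenge = None

    def new_challenge(self):
        self.challenge = secrets.token_bytes(32)  # fresh nonce per attempt
        return self.challenge

    def verify(self, response):
        expected = hmac.new(self.key, self.challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

device = Device()
server = Server(device.key)

# Legitimate login: answer the current challenge.
c1 = server.new_challenge()
r1 = device.respond(c1)
assert server.verify(r1)

# An eavesdropper who captured r1 gains nothing: the next login uses
# a different challenge, so the old response no longer verifies.
server.new_challenge()
assert not server.verify(r1)
```

Unlike a password, nothing typed or transmitted during login can be replayed later, which is what makes phished or leaked credentials far less monetizable; genuine passkeys go further still, since even a full server-side breach exposes only public keys.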