ChatGPT’s latest image generator has captured the interest of users worldwide with its ability to turn personal photos into Studio Ghibli-style images. However, users may overlook its potential cybersecurity risks.
Style transfer – applying a stylistic filter to a picture – is a decades-old technique, but multimodal vision-language models such as OpenAI’s GPT-4o have brought it back into the spotlight. The privacy debate around it is just as old: it has been running ever since style transfer apps such as Prisma or Vinci became popular on mobile, with processing handled in the cloud by large neural networks.
Because of their chat format, conversational assistants such as ChatGPT may give a false sense of confidentiality, the kind we expect from private correspondence. However, using them for work or for recreation, such as creating stylized portraits, is no different from using any other online service. How they process data, and what their operators may do with the inputs users provide, is usually described in their privacy policies.
While most established companies work to secure the data they collect and store, that does not mean the protection is bulletproof. Due to technical issues or malicious activity, data can leak and become public or appear for sale on specialized underground websites. Moreover, the account used to access the service can be breached if the user’s credentials or device are compromised.
According to Kaspersky Digital Footprint Intelligence experts, there are numerous posts on the dark web and hacker forums offering stolen AI service accounts for sale, which may contain the history of a user’s private conversations with the chatbot.
Photos, especially portraits, are sensitive data: they reveal information about the user that cybercriminals can abuse, for instance to impersonate them on social media. However, photos alone are rarely enough to commit fraud – most fraudulent schemes require far more information about the victim, such as personal details, documents and so on.
Using chatbots to discuss personal matters, such as finances or health, can give cybercriminals more leverage for targeted schemes such as spear phishing.
What you can do to stay safe
To protect yourself, combine standard security practices with a bit of common sense. Protect AI service accounts with strong, unique passwords and, where possible, two-factor authentication. Use a comprehensive security solution, including a password manager, to protect your devices and safeguard your accounts. Prefer established services over assorted proxy offerings to reduce the number of parties that process your data.
Always treat a chatbot as you would a random stranger on the internet: never discuss personal matters or share confidential details, whether your own or those of your friends and relatives, especially without their consent.
Be wary of phishing websites that harvest credentials and spread malware: our findings show that cybercriminals capitalize on the hype around AI. More tech-savvy users can opt for local (on-device) large language and multimodal models to process sensitive data, as in the sketch below.
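As a minimal sketch of that local-model approach, the example below runs a small open chat model entirely on your own machine using the Hugging Face transformers library, so prompts are not sent to a third-party service. The specific model name and prompt are assumptions for illustration; any locally runnable model that fits your hardware can be substituted.

```python
# Minimal sketch: processing a prompt with a local (on-device) language model,
# so the text never leaves the machine once the model files are downloaded.
# Assumes the "transformers" and "torch" packages are installed; the model
# choice below is only an example and can be swapped for another local model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model
)

# A hypothetical sensitive prompt that stays on the device
prompt = "Summarize the key points of these private notes: ..."
result = generator(prompt, max_new_tokens=128)

print(result[0]["generated_text"])
```

The trade-off is that local models are typically smaller and less capable than cloud-hosted ones, but for sensitive inputs the privacy gain may outweigh the loss in quality.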