Industry voices are urging users to adopt strong cyber hygiene as AI assistants are granted broad data permissions by default.
In July 2025, Google rolled out an update for its Gemini AI assistant on Android devices, enabling it by default to interact with data from popular third-party apps — including WhatsApp.
This new capability has sparked privacy and security concerns, drawing online commentary from experts across the cybersecurity industry.
According to a spokesperson from Huntress, “this default ‘App Content’ permission fundamentally weakens Android’s security model. Granting Gemini broad access to third-party app data without explicit user consent creates a high-risk attack surface. If compromised in any way, malicious actors could exploit this pathway to harvest sensitive information, ranging from banking details to private messages.” Broad permissions and a lack of clear controls could let attackers use the AI assistant as a gateway to other apps and data, underscoring the need for tighter app-based consent boundaries.
A Check Point Software spokesperson commented: “Attackers are likely to target Gemini first, as compromising it could serve as a gateway to access other applications and services on a mobile device. The potential for access to sensitive data is a significant concern; mobile devices are rich sources of personal information that users highly value. Additionally, app metadata may be collected and fed into the model without clear transparency.” Over-permissioned functions could allow the assistant to take actions in any installed app that users never explicitly approved.
Marc Rivero, Lead Security Researcher at Kaspersky, has also spoken out against the move: “Private messaging apps are among the most sensitive digital spaces for users as they contain intimate conversations, personal data, and potentially confidential information. Granting an AI tool automatic access to these messages without clear, explicit consent fundamentally undermines user trust. While Google says this feature is designed to make interactions more seamless, the lack of transparency and the opt-out rather than opt-in approach could potentially leave many users unaware that their private chats are being processed by an AI system.”
Until such concerns prompt a change of course, users are advised to adopt the interim safeguards available to them, such as reviewing and revoking the assistant's app permissions, alongside foundational best practices in cyber and privacy hygiene.