
Google Introduces On-Device AI Photo Scanning in Messages App, Sparking Privacy Concerns



Google Rolls Out AI-Based Sensitive Content Detection in Messages


Google has taken a controversial new step in its AI content moderation strategy by enabling photo scanning within the Google Messages app, starting with the blurring of nude images and warnings about explicit content. The change follows earlier uproar over the silent installation of the SafetyCore framework on Android devices, which many users interpreted as intrusive surveillance.


Back when SafetyCore was first discovered, Google emphasized it was merely a framework—an “on-device infrastructure” designed to support content classification without scanning anything by default. According to Google, SafetyCore only functions when a specific app feature is enabled by the user and ensures content processing remains on-device, not transmitted externally.
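
SafetyCore itself has no public API, so its exact interface is unknown. The Kotlin sketch below is a rough illustration of the pattern Google describes: a classifier that runs only when a user-enabled feature invokes it, entirely on-device, with nothing sent off the phone. Every name here (OnDeviceNudityClassifier, SensitiveContentGate, the 0.8 threshold) is invented for this example.

```kotlin
import android.graphics.Bitmap

// Purely hypothetical sketch: SafetyCore has no public API, and these names
// are invented to illustrate the on-device pattern Google describes.
interface OnDeviceNudityClassifier {
    // Runs entirely on the device; the bitmap never leaves local memory.
    fun explicitScore(image: Bitmap): Float
}

class SensitiveContentGate(
    private val classifier: OnDeviceNudityClassifier,
    private val featureEnabled: () -> Boolean,  // the user's opt-in toggle
    private val threshold: Float = 0.8f         // assumed cutoff, not Google's
) {
    // True if the image should be hidden behind a blur-and-warn overlay.
    fun shouldBlur(image: Bitmap): Boolean {
        if (!featureEnabled()) return false     // nothing runs when disabled
        return classifier.explicitScore(image) >= threshold
    }
}
```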


AI Monitoring Is On—But Still Local (For Now)


The new rollout marks the first active use of this scanning capability. While Google reassures users that images are analyzed locally, with no data transmitted back to the company, that assurance hasn't dispelled privacy concerns. Independent projects such as GrapheneOS have affirmed that SafetyCore does not perform client-side scanning for reporting purposes and only provides private, local content classification.


However, GrapheneOS also raised a transparency issue: “It’s unfortunate that it’s not open source... We’d have no problem with local neural network features, but they need to be open.”


Settings and Controls: What Users Can Expect


  • The feature is currently enabled by default for children, but disabled for adults (a sketch of these age-based defaults follows this list).

  • Adults can manually enable it via Messages Settings > Protection & Safety > Manage Sensitive Content Warnings.

  • Children's settings are governed through Family Link or account-level parental controls.
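
For illustration, the hypothetical Kotlin sketch below models the defaults described in this list. The UserProfile type, its fields, and both functions are invented, since Google's actual account logic is not public.

```kotlin
// Hypothetical sketch of the age-based defaults listed above; UserProfile
// and its fields are invented, since Google's account logic is not public.
data class UserProfile(val ageYears: Int, val supervisedByFamilyLink: Boolean)

// On by default for users under 18, off by default for adults.
fun sensitiveContentWarningsDefault(profile: UserProfile): Boolean =
    profile.ageYears < 18

// Supervised children are managed through Family Link or account-level
// parental controls; everyone else toggles the setting in Messages.
fun canToggleInMessages(profile: UserProfile): Boolean =
    !profile.supervisedByFamilyLink
```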


As highlighted by Phone Arena, the scanning also applies when sending content: users are alerted if they attempt to share flagged imagery, adding an extra layer of caution.
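
A send-side gate like the one Phone Arena describes could be sketched as follows. This is an assumption about the shape of the flow, not Google's implementation, and all names are hypothetical.

```kotlin
import android.graphics.Bitmap

// Hypothetical send-side gate (names invented): if a local check flags an
// outgoing image, the user must confirm before the message actually goes out.
class OutgoingImageGuard(private val looksSensitive: (Bitmap) -> Boolean) {
    fun send(image: Bitmap, confirm: () -> Boolean, deliver: (Bitmap) -> Unit) {
        if (looksSensitive(image) && !confirm()) return  // user backed out
        deliver(image)  // clean image, or the user confirmed anyway
    }
}
```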


The Bigger Picture: Privacy, Legislation, and Control


While this update focuses on Google Messages, privacy experts warn that this is just the beginning. As governments worldwide apply pressure to weaken encrypted systems, these new tools could become footholds for broader surveillance capabilities—despite their on-device nature.


Google’s vast user base—over 3 billion users—now faces a critical question: How much AI monitoring are they willing to accept in exchange for security and content moderation?


With AI content scanning becoming the norm across Gmail, Photos, and Messages, users must tread carefully in this evolving digital landscape—balancing convenience, protection, and the right to privacy.




Google has introduced a new feature in its Messages app called "Sensitive Content Warnings," designed to detect and blur images containing nudity. This feature aims to protect users from unsolicited explicit content and prevent accidental sharing. It operates entirely on-device through a system service known as SafetyCore, ensuring that no image data is transmitted to Google's servers. For adult users, the feature is optional and can be enabled in the app's settings, while it is enabled by default for users under 18.


SafetyCore, the underlying service enabling this functionality, has been silently installed on Android devices running version 9 or higher since October 2024. Google states that SafetyCore provides on-device infrastructure for securely and privately performing content classification to help users detect unwanted content. However, the silent installation of SafetyCore without explicit user consent has raised privacy concerns among users and experts.


While Google assures that all processing occurs locally on the device, the lack of transparency regarding SafetyCore's installation and operation has drawn criticism. Privacy advocates argue that without open-source access or clear documentation, it's challenging to verify the scope and limitations of such features, raising fears of potential misuse or expansion beyond their original intent.

The Sensitive Content Warnings feature functions by blurring images that may contain nudity and providing users with options to view the content, learn about potential harms, or block the sender. Additionally, when a user attempts to send or forward an image that might be considered sensitive, the app issues a warning, prompting the user to confirm before proceeding.
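
Those receive-side options are simple enough to model directly. The Kotlin sketch below is purely illustrative; every type and function name in it is invented, since Google has not published this code.

```kotlin
// Hypothetical model of the receive-side choices described above; every
// name here is invented, not Google's implementation.
sealed interface WarningAction {
    data object ViewAnyway : WarningAction   // remove the blur
    data object LearnMore : WarningAction    // explain why this can be harmful
    data object BlockSender : WarningAction  // stop messages from this sender
}

fun handle(action: WarningAction) = when (action) {
    WarningAction.ViewAnyway -> unblurImage()
    WarningAction.LearnMore -> openHarmExplainer()
    WarningAction.BlockSender -> blockConversation()
}

// Stubs standing in for the real UI and messaging operations.
fun unblurImage() { /* reveal the image */ }
fun openHarmExplainer() { /* show the "potential harms" info sheet */ }
fun blockConversation() { /* block the sender */ }
```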

In conclusion, while Google's introduction of on-device AI scanning for sensitive content in its Messages app aims to enhance user safety, the silent deployment of the underlying SafetyCore service without explicit user consent has sparked privacy concerns. As AI-driven content moderation tools become more prevalent, balancing user safety with transparency and privacy rights remains a critical challenge.
