Meta, the parent company of Facebook, Instagram, and WhatsApp, has made headlines again with a controversial move that raises serious privacy and data-ethics concerns. According to a recent report by Heise, Meta is updating its privacy policy to explicitly allow user data, including public posts, comments, and interactions, to be used to train its generative AI models.
This news comes in the wake of broader privacy concerns tied to Meta's messaging platforms, particularly WhatsApp. A deeper dive into how WhatsApp handles private data can be found in this post, which outlines how vague language and shifting terms make it difficult for users to understand what data is collected and how it is used.
The implications of this new AI training policy are troubling. Meta's ability to vacuum up user data and feed it into AI systems doesn't just raise privacy issues; it points to a deeper risk we rarely acknowledge: the illusion of safety when talking to, or through, AI-powered systems.