Does anyone care about GenAI data privacy?
SCWorld.com reported that “Meta AI was ranked worst for data privacy among nine AI platforms assessed by Incogni, according to a report published Tuesday. Mistral AI’s Le Chat was deemed the most privacy-friendly generative AI (GenAI) platform, followed closely by OpenAI’s ChatGPT.” The June 24, 2025 report entitled “Meta scores worst on GenAI data privacy ranking” (https://tinyurl.com/5d9yc539) included these comments under the heading “OpenAI ranked No. 1 for transparency”:
OpenAI ranked best in terms of making it clear whether prompts are used for training, making it easy to find information on how models are trained and providing a readable privacy policy. Inflection AI scored worst in this category.
Researchers noted that information on whether prompts were used for training was easily accessible through a search or clearly presented in the privacy policies for OpenAI, Mistral AI, Anthropic and xAI, which were ranked top one through four in the transparency category, respectively.
By contrast, researchers had to “dig” through the Microsoft and Meta websites to find this information and found it even more difficult to discover this information within the privacy policies of Google, DeepSeek and Pi AI, the report stated. The information provided by these latter three companies was often “ambiguous or otherwise convoluted,” according to Incogni.
The readability of each company’s privacy policy was assessed using the Dale-Chall readability formula, with researchers determining that all of the privacy policies required a college-graduate reading level to understand.
While OpenAI, Anthropic and xAI were noted to make heavy use of support articles to present more convenient and “digestible” information outside of their privacy policies, Inflection AI and DeepSeek were criticized for having “barebones” privacy policies, and Meta, Microsoft and Google failed to provide dedicated AI privacy policies outside of their general policies across all products.
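For readers curious about the Dale-Chall assessment mentioned above, here is a minimal Python sketch of how that readability score is conventionally computed. The formula itself (0.1579 times the percentage of “difficult” words plus 0.0496 times the average sentence length, with a 3.6365 adjustment when difficult words exceed 5%) is standard, but the real method depends on the full Dale-Chall list of roughly 3,000 familiar words; the tiny EASY_WORDS set below is a placeholder assumption, so actual scores on real privacy policies would differ.

import re

# Placeholder for the full Dale-Chall familiar-word list (~3,000 words).
EASY_WORDS = {
    "the", "a", "an", "and", "or", "we", "you", "your", "use", "may",
    "data", "can", "it", "is", "are", "to", "of", "in", "for", "with",
}

def dale_chall_score(text: str) -> float:
    """Return the (adjusted) Dale-Chall readability score for the given text."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0

    difficult = [w for w in words if w not in EASY_WORDS]
    pct_difficult = 100.0 * len(difficult) / len(words)
    avg_sentence_len = len(words) / len(sentences)

    # Core formula: 0.1579 * (% difficult words) + 0.0496 * (average sentence length)
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_len
    # Adjustment applied when more than 5% of the words are "difficult"
    if pct_difficult > 5.0:
        score += 3.6365
    return score

def grade_band(score: float) -> str:
    """Map a score to the conventional Dale-Chall grade bands."""
    if score < 5.0:
        return "grade 4 or below"
    if score < 6.0:
        return "grades 5-6"
    if score < 7.0:
        return "grades 7-8"
    if score < 8.0:
        return "grades 9-10"
    if score < 9.0:
        return "grades 11-12"
    if score < 10.0:
        return "grades 13-15 (college)"
    return "grade 16+ (college graduate)"

if __name__ == "__main__":
    # Hypothetical privacy-policy excerpt used only to exercise the sketch.
    policy_excerpt = (
        "We may process the prompts you submit to improve our models. "
        "Aggregated telemetry is retained in accordance with applicable law."
    )
    s = dale_chall_score(policy_excerpt)
    print(f"Dale-Chall score: {s:.2f} ({grade_band(s)})")

A score of 10.0 or higher falls in the “college graduate” band, which is the reading level the researchers said all nine privacy policies required.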
What do you think?