Meta AI Chatbot Exposed: Critical Flaw Leaked Private Prompts and Responses
Meta has patched a security vulnerability in its Meta AI chatbot that allowed users to access private prompts and AI-generated responses belonging to other users. The issue was responsibly disclosed by security researcher Sandeep Hodkasia, whom Meta awarded $10,000 for finding and confidentially reporting the flaw.
The flaw was identified on December 26, 2024, and patched on January 24, 2025. Meta said it found no evidence of malicious exploitation and thanked Hodkasia for his responsible disclosure.
The vulnerability stemmed from the request-editing mechanism in Meta AI. When a user modified a prompt, Meta’s servers assigned it a unique identifier. Hodkasia discovered that by manually altering this identifier within the browser’s network traffic, one could retrieve the prompts and responses of other users. The system failed to validate whether the requester was authorized to access the data. Moreover, the identifiers followed a predictable pattern, potentially enabling automated scripts to harvest private information on a large scale.
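Meta has not published the affected code, but the behavior described matches a classic insecure direct object reference: the server looks up a record by a client-supplied identifier without checking who owns it. The sketch below is a minimal illustration of that pattern and its fix, using a hypothetical Flask endpoint and in-memory store; the route names, data, and framework are assumptions for demonstration and do not reflect Meta's actual implementation.

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-demo-key"  # placeholder; required for session use in this sketch

# Hypothetical in-memory store mapping prompt IDs to records (illustrative data only).
PROMPTS = {
    1001: {"owner": "alice", "prompt": "Draft my resume", "response": "..."},
    1002: {"owner": "bob", "prompt": "Summarize my notes", "response": "..."},
}

@app.route("/v1/prompts/<int:prompt_id>")
def get_prompt_insecure(prompt_id):
    # VULNERABLE: trusts the client-supplied ID and never checks ownership.
    # Because the IDs are sequential, an attacker can simply iterate 1001, 1002, ...
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    return jsonify(record)

@app.route("/v2/prompts/<int:prompt_id>")
def get_prompt_secure(prompt_id):
    # FIXED: confirm the authenticated user actually owns the requested record.
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    if record["owner"] != session.get("user"):
        abort(403)  # authenticated, but not authorized for this object
    return jsonify(record)
```

The ownership check is the primary fix; replacing sequential numbers with random identifiers (for example, UUIDs) would additionally blunt the kind of large-scale enumeration that the predictable numbering made possible.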
Meta AI launched as a standalone app in early 2025 to compete with platforms such as ChatGPT, but the service drew privacy concerns from the outset. Some users inadvertently shared conversations they believed were private, further heightening worries about data confidentiality.
As tech giants race to dominate the AI landscape, incidents like this underscore the importance of safeguarding user data. Although Meta responded swiftly and reported no abuse, the Meta AI case shows that even industry leaders remain vulnerable to access control lapses that can jeopardize the privacy of millions of users.