Meta Fixes AI Privacy Bug That Exposed User Chats, Awards ₹8.5 Lakh to Ethical Hacker
by Kahekashan · The Hans India

Highlights:
Meta patched a major privacy flaw in its AI chatbot that exposed user chats, rewarding the ethical hacker who discovered it.
Meta has resolved a critical privacy flaw in its AI chatbot platform that could have exposed users’ private conversations to malicious actors. The vulnerability, flagged late last year, was responsibly disclosed by ethical hacker Sandeep Hodkasia, who was awarded a bug bounty of $10,000 (roughly ₹8.5 lakh) for his discovery.
According to a report by TechCrunch, Hodkasia—founder of the cybersecurity firm AppSecure—reported the issue to Meta on December 26, 2024. The flaw, linked to the prompt editing feature in Meta’s AI assistant, had the potential to allow unauthorized access to personal prompts and responses from other users.
Users interacting with Meta's AI platform can edit or regenerate their prompts. Each prompt, along with its AI-generated reply, is assigned a unique identification number (ID) by Meta's backend system. Hodkasia found that these IDs, which were visible through browser developer tools, followed a predictable sequential pattern and could be manipulated.
“I was able to view prompts and responses of other users by manually changing the ID in the browser’s network activity panel,” Hodkasia explained. The major issue, he pointed out, was that Meta’s system didn’t verify whether the requester of a particular prompt actually owned it. That meant someone with modest technical knowledge could write a script to cycle through IDs, collecting sensitive user data at scale.
The ease of exploitation made the vulnerability particularly dangerous. Because the system performed no user-specific access checks, it effectively left private AI conversations open to anyone who understood the ID scheme. Thankfully, Hodkasia chose to report the issue rather than exploit it.
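The flaw described above is a classic case of broken object-level authorization: the server trusts a client-supplied ID without checking ownership. The sketch below is purely illustrative, assuming a hypothetical in-memory prompt store rather than Meta's actual backend, and contrasts the missing check with the fix.

```python
# Illustrative sketch of the access-control gap (IDOR) described above.
# The data store, function names, and IDs are hypothetical, not Meta's API.

PROMPTS = {
    101: {"owner": "alice", "text": "draft my resume"},
    102: {"owner": "bob", "text": "a private medical question"},
}

def get_prompt_vulnerable(prompt_id, requester):
    """Returns any prompt by ID -- no ownership check (the flaw)."""
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requester):
    """Returns the prompt only if the requester actually owns it."""
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != requester:
        return None  # a real API would respond with HTTP 403 or 404
    return prompt

# An attacker logged in as "alice" sweeping sequential IDs, as the
# article describes, harvests other users' prompts from the flawed path:
leaked = [p for pid in range(100, 110)
          if (p := get_prompt_vulnerable(pid, "alice"))
          and p["owner"] != "alice"]

# The same sweep against the patched path yields nothing:
blocked = [p for pid in range(100, 110)
           if (p := get_prompt_fixed(pid, "alice"))
           and p["owner"] != "alice"]
```

Beyond the ownership check itself, using non-guessable identifiers (such as random UUIDs) instead of sequential IDs raises the cost of this kind of enumeration, though it is a hardening measure rather than a substitute for server-side authorization.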
Meta confirmed it patched the flaw on January 24, 2025, following an internal review. The company also stated that there was no evidence suggesting the vulnerability had been exploited before Hodkasia’s report.
While the fix has been deployed, the incident has renewed concerns about data privacy in AI platforms. As tech giants race to roll out AI-powered products to stay ahead of the competition, lapses like this highlight the growing importance of robust security protocols.
Meta launched its AI assistant and a standalone app earlier this year to compete with platforms like ChatGPT. However, its rollout has not been without issues. In recent months, some users reported that their supposedly private conversations were visible in the platform’s public Discovery feed.
Although Meta maintains that chats are private by default and only become public when explicitly shared, users argue that the app’s interface and settings are confusing. Many claimed they were unaware that their personal inputs, including photos or prompts, might become publicly accessible.
As AI tools become more integrated into daily life, incidents like this serve as a stark reminder of the need for transparency, user control, and stringent privacy protections. Meta’s swift response and bug bounty program underscore the critical role of ethical hackers in maintaining digital safety.