Meta Patches AI Security Flaw That Could Leak Your Private Chatbot Prompts

HotHardware

Meta has patched a flaw in its AI platform that could have exposed users' private prompts and AI-generated responses. Cybersecurity researcher Sandeep Hodkasia discovered the vulnerability on December 26, 2024, and disclosed it to Meta. The company deployed a fix on January 24, 2025, and paid Hodkasia a $10,000 bug bounty.

The flaw involved how logged-in users edit AI prompts on Meta AI. If you're logged in, you can revise a question or prompt you've already submitted, and the AI will generate new text or images based on your edits. To support this, Meta AI assigns a unique number to each prompt and its response. However, Hodkasia found that these numbers were easily guessable: if a threat actor changed the number in a request to one belonging to another user, Meta AI would return that user's prompt and AI-generated response.
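This is the classic pattern of an insecure direct object reference (IDOR): the server trusts the identifier in the request rather than verifying who is asking for it. Below is a minimal sketch of the bug class and its fix in Python; every name, endpoint, and record here is hypothetical, since Meta's actual API and fix have not been made public.

```python
# Hypothetical illustration of an IDOR flaw and its fix.
# None of these names reflect Meta's real systems.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "Plan my trip to Berlin", "response": "..."},
    1002: {"owner": "bob", "prompt": "Draft my medical appeal letter", "response": "..."},
}

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> dict:
    """Return the record for any valid ID, ignoring who is asking.

    Because the IDs are small sequential integers, an attacker can
    simply iterate over them and harvest other users' prompts.
    """
    return PROMPTS[prompt_id]

def get_prompt_fixed(requesting_user: str, prompt_id: int) -> dict:
    """Return the record only if the requester actually owns it."""
    record = PROMPTS[prompt_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record

if __name__ == "__main__":
    # Attacker "mallory" guesses a neighboring ID and gets bob's data:
    print(get_prompt_vulnerable("mallory", 1002))  # leaks bob's prompt

    try:
        get_prompt_fixed("mallory", 1002)
    except PermissionError as err:
        print("blocked:", err)  # the ownership check stops the leak
```

The fix is simply the authorization check: before returning a record, the server confirms the requester owns it, so guessing an identifier alone is no longer enough to read someone else's data.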

In a world where many people enter personal data into AI chatbots (names, dates of birth, locations, and other sensitive details), this kind of leak can be very dangerous. Hackers could harvest data from private chatbot prompts to blackmail individuals, or sell it to scammers for use in phishing and other attacks.

Thankfully, Meta has fixed the issue, and there is no evidence the flaw was exploited in the wild. The incident is a reminder that even the most advanced AI tools can be vulnerable. Moreover, you can never be too sure how extensively companies collect and use your data. As always, you're safest keeping personal or confidential information offline as much as possible.