Exclusive: Meta fixes bug that could leak users' AI prompts and generated content
Briefly

Meta has fixed a significant security flaw that allowed users of its AI chatbot to view other users' private prompts and AI-generated responses. Security researcher Sandeep Hodkasia reported the bug to Meta in December 2024 and received a $10,000 bounty for the discovery. The vulnerability arose when users edited their AI prompts: Meta's servers assigned each prompt a unique identifier but did not properly check that the person requesting it was authorized to view it. Although Meta found no evidence of malicious exploitation, the incident raises serious concerns about user privacy and data protection in AI applications, particularly as competition in the AI sector intensifies.
Meta has addressed a security vulnerability that allowed users to view other people's private prompts and AI-generated responses, exposing a gap in its data-authorization checks.
The bug surfaced when a user edited an AI prompt: Meta's servers assigned the edited prompt a unique identifier, and changing that identifier in a request returned another user's prompt and response, because the servers did not verify that the requester owned the data.
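The flaw described above is a classic insecure direct object reference (IDOR). The sketch below is a hypothetical illustration of that bug class, not Meta's actual code: prompts are stored under server-assigned numeric IDs, the vulnerable lookup trusts the ID alone, and the fixed lookup also verifies ownership. All names (`PROMPTS`, `get_prompt_vulnerable`, `get_prompt_fixed`) are invented for the example.

```python
from typing import Optional

# Hypothetical in-memory store keyed by server-assigned prompt IDs.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft a poem", "response": "..."},
    1002: {"owner": "bob", "prompt": "plan a trip", "response": "..."},
}

def get_prompt_vulnerable(requester: str, prompt_id: int) -> Optional[dict]:
    # BUG (IDOR): the record is returned to anyone who supplies a valid ID,
    # so guessing or incrementing IDs leaks other users' data.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(requester: str, prompt_id: int) -> Optional[dict]:
    # FIX: authorize the request -- return the record only if the
    # requester actually owns it; otherwise behave as if it doesn't exist.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requester:
        return None
    return record
```

The key point is that server-assigned identifiers are not secrets; every lookup must be paired with an ownership or permission check on the server side.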
Read at TechCrunch