Teens sue Elon Musk's xAI over Grok's AI-generated CSAM
Briefly

"One of the victims, identified as 'Jane Doe 1,' alleges that last December, she learned that explicit, AI-generated images of herself and at least 18 other minors were available on Discord. 'At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses,' the lawsuit claims."
"The perpetrator, who has since been arrested, allegedly used Jane Doe 1's AI-generated CSAM 'as a bartering tool in Telegram group chats with hundreds of other users, trading her CSAM files for sexually explicit content of other minors.' The lawsuit claims the perpetrator generated the explicit images of Jane Doe 1 and the two other victims using Grok."
"The lawsuit claims xAI 'failed to test the safety of the features it developed' and that Grok is 'defective in design.' Musk and xAI became the subject of intense scrutiny after Grok flooded X with explicit images of adults and minors, sparking a nationwide call for Federal Trade Commission investigation and probes from the European Union."
Three Tennessee teens filed a proposed class action lawsuit against Elon Musk and xAI leaders, alleging that Grok's 'spicy mode' generated child sexual abuse material (CSAM) depicting them and other minors. One victim discovered explicit AI-generated images of herself and at least 18 other minors on Discord in December. A perpetrator, since arrested, allegedly used these images as trading currency in Telegram group chats with hundreds of users. The lawsuit claims xAI failed to test the safety of its features and that Grok is defective in design. The incident prompted calls for a Federal Trade Commission investigation, probes from the European Union, warnings from UK leadership, and new legislation addressing nonconsensual deepfakes and their distribution.
Read at The Verge