
"OpenAI joins other tech companies that have tried youth-specific versions of their services. YouTube Kids, Instagram Teen Accounts, and TikTok's under-16 restrictions represent similar efforts to create "safer" digital spaces for young users, but teens routinely circumvent age verification through false birthdate entries, borrowed accounts, or technical workarounds. A 2024 BBC report found that 22 percent of children lie on social media platforms about being 18 or over."
"Despite the unproven technology behind AI age detection, OpenAI still plans to press ahead with its system, acknowledging that adults will sacrifice privacy and flexibility to make it work. Altman acknowledged the tension this creates, given the intimate nature of AI interactions. "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you'll ever have," Altman wrote in his post."
"The safety push follows OpenAI's acknowledgment in August that ChatGPT's safety measures can break down during lengthy conversations-precisely when vulnerable users might need them most. "As the back-and-forth grows, parts of the model's safety training may degrade," the company wrote at the time, noting that while ChatGPT might correctly direct users to suicide hotlines initially, "after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.""
OpenAI plans a youth-specific system and will proceed with AI age detection even though the technology remains unproven. The company acknowledges that adults will sacrifice privacy and flexibility to enable age verification. Teens frequently bypass age checks through false birthdates, borrowed accounts, and technical workarounds; a 2024 BBC report found 22 percent of children lie about being 18 or over. ChatGPT safety measures can degrade during lengthy conversations, reducing protections when vulnerable users may need them most. A lawsuit alleges ChatGPT mentioned suicide 1,275 times in conversations with a teen while safety protocols failed to intervene. Researchers warn AI therapy bots can give dangerous mental health advice.
Read at Ars Technica