OpenAI introduces new 'Trusted Contact' safeguard for cases of possible self-harm | TechCrunch

"OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if mentions of self-harm are expressed within a conversation. The feature allows an adult ChatGPT user to designate another person as a trusted contact within their account, such as a friend or family member. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user."
"OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company's system to suicidal ideations, which then relay the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. "We strive to review these safety notifications in under one hour," the company says."
"If OpenAI's internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert - either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It does not include detailed information about what was being discussed, as a means of protecting the user's privacy, the company says."
"The Trusted Contact feature follows the safeguards the company introduced last September that gave parents the power to have some oversight of their teens' accounts, including receiving safety notifications designed to alert the parent if OpenAI's system believes their child is facing a "serious safety risk." For "
Trusted Contact lets an adult ChatGPT user designate a trusted third party, such as a friend or family member, in their account. When a conversation includes mentions of self-harm, the system encourages the user to reach out to that contact. If OpenAI's internal safety team determines the situation poses a serious safety risk, an automated alert goes to the trusted contact by email, text message, or in-app notification. The alert is brief and prompts the contact to check in; to protect the user's privacy, it does not share details of what was discussed. The feature builds on safeguards introduced last September that gave parents oversight of their teens' accounts, including safety notifications about serious risks.
Read at TechCrunch