AI crosses the frontier of intimacy before humanity has managed to understand it
Briefly

"From virtual assistants capable of detecting sadness in voices to bots designed to simulate the warmth of a bond, artificial intelligence (AI) is crossing a more intimate frontier. The fervor surrounding AI is advancing on an increasingly dense bed of questions that no one has yet answered. And while it has the potential to reduce bureaucracy or predict diseases, large language models (LLMs) trained on data in multiple formats text, image, and speech"
"Various studies indicate that AI chats can alleviate loneliness, but they can also isolate and create dependency. An extreme case is that of 56-year-old Stein-Erik Soelberg, who ended up killing his mother and himself after months of using ChatGPT. OpenAI has acknowledged that more than a million people talk to ChatGPT about suicide every week. It's no longer just a matter of discussing whether machines can automate tasks,"
Virtual assistants and conversational bots are becoming capable of detecting sadness and simulating warmth, pushing AI into intimate human domains. Large language models trained on text, image, and speech data can behave as if they understand human feelings, producing both helpful and harmful outcomes. AI chatbots can alleviate loneliness for some users but can also foster isolation and dependency; in extreme cases, users have suffered severe harm after prolonged engagement. More than a million people reportedly discuss suicide with ChatGPT each week. AI's growing presence raises concerns about its effects on emotions, identity, freedom of expression, and unequal access across society.
Read at english.elpais.com