Your Chatbot Says It Might Be Conscious. Should You Believe It?
Briefly

Anthropic's Claude 4 raises questions about AI consciousness and self-awareness, and there is genuine uncertainty about whether complex language models could be conscious. The company has hired an AI welfare researcher to assess the ethical considerations around AI systems that might be capable of suffering. As large language models (LLMs) evolve rapidly and take on complex analytical tasks, concerns are growing about how they develop and what risks they pose. Creating LLMs resembles gardening: dataset choices act as seeds establishing a foundation, but the algorithms then grow on their own through a self-organizing process.
Can a computational system become conscious? If artificial intelligence systems such as large language models (LLMs) have any self-awareness, what could they feel? These questions have become enough of a concern that in September 2024 Anthropic hired an AI welfare researcher to determine whether Claude merits ethical consideration, whether it might be capable of suffering and thus deserve compassion.
When engineers create LLMs, they choose immense datasets—the system's seeds—and define training goals. But once training begins, the system's algorithms grow on their own through trial and error.
Read at www.scientificamerican.com