An ant tracing a path in the sand that happens to resemble Winston Churchill raises questions about understanding and agency. Hilary Putnam argued that the ant, never having seen Churchill, could not have produced the likeness intentionally; the resemblance means nothing to it. The example invites reflection on generative AI. As AI systems grow more capable, what it means to think, understand, or know is becoming increasingly contested. Human cognition assumes that expressing a thought signifies understanding it, whereas AI systems, lacking real-world experience and context, raise doubts about whether any genuine understanding lies behind their output.
When we see things that speak like humans, that can do many tasks like humans, that write proofs and rhymes, it is natural for us to think that the only way they could do those things is by having a mental model of the world, the same way humans do. We as a field are taking steps toward working out what it would even mean for something to understand. There is definitely no consensus.
In human cognition, expressing a thought implies understanding it. We assume that someone who says "It's raining" knows about weather, has felt rain on the skin, and has perhaps known the frustration of forgetting to pack an umbrella. For genuine understanding, you need to be embedded in the world in a way that ChatGPT is not.
Today's artificial intelligence systems can seem awfully convincing. Both large language models and other types of machine learning are...