
"My prediction is by the end of 2028, it's more likely than not that we have an AI system where you would be able to say to it: 'Make a better version of yourself.' And it just goes off and does that completely autonomously. It's always been the case that humans outside the technology need to come up with the ideas that they then put back into it. What happens if we have a technology that can generate ideas within itself for how to improve itself? That's a new concept."
"An intelligence explosion is when AI systems suddenly start improving at blinding speed. Lots of bad things can happen (cyber meltdowns and biological attacks). And lots of good: What do you do with a tremendous amount of growth or a tremendous amount of abundance in many, many different fields of science all at once? Today's institutions have very, very narrow pipes through which you push new drug candidates. How do you massively broaden the size of those pipes in advance of this abundance?"
Anthropic's institute, led by Jack Clark, focuses on preparing for advanced AI systems capable of autonomous self-improvement by 2028. The organization warns that an "intelligence explosion" (rapid, uncontrolled AI advancement) could trigger both catastrophic risks, such as cyber attacks and biological threats, and unprecedented scientific breakthroughs. The institute's research agenda addresses four critical areas: economic diffusion and job impacts, security threats and resilience, governance of autonomous AI systems, and recursive self-improvement dynamics. Anthropic commits to publishing detailed information about how AI tools accelerate its own work and to exploring the societal implications of that acceleration.
Read at Axios