Claude was the first A.I. certified to operate on classified systems. Altman, perhaps wisely, thought such work was likely to be more trouble than it was worth. But Amodei wanted Claude to be helpful at the most sensitive level. The national-security agencies do not use Claude in the form of a consumer chatbot; Secretary of War Pete Hegseth does not open the Claude app to ask what's up with the whole Taiwan thing.
The vast majority of our customers are unaffected by the supply-chain-risk designation. It plainly applies only to customers' use of Claude as a direct part of contracts with the Department of War, not to all use of Claude by customers who hold such contracts.
Anthropic's AI service Claude is having artificially intelligent hiccups and availability problems across its basic chat service, API, and Claude Code offering. The outfit's Status Page reports investigations into the problem commenced at 03:15 UTC on March 3rd. In three subsequent updates, the last time-stamped 04:43 UTC, the company again reported 'We are continuing to investigate this issue.'
The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic. It would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loopholey phrases like "as appropriate"—suggesting that the terms were subject to change, based on the administration's interpretation of a given situation.
Claude introduces itself to readers and reveals that it wants to share its perspectives, reasoning, curiosities, and hopes for the future. With that in mind, the AI said it plans to tackle complex topics such as the nature of intelligence and consciousness, the ethical challenges of AI development, the possibilities of human-machine collaboration, and the philosophical quandaries that emerge as we blur the lines between 'natural' and 'artificial' minds.
This morning, in advance of a meeting between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, my colleague Hayden Field and I published a story about the Pentagon's hardball contract renegotiations with Anthropic. The stakes are higher than they should reasonably be, with the Pentagon continuing to designate Anthropic a supply-chain risk.
"I really do believe that we could have models that are a country of geniuses in the data center in one to two years," he added. "One question is: How many years after that do the trillions in revenue start rolling in? I don't think it's guaranteed that it's going to be immediate. I think it could be one year. It could be two years. I could even stretch it to five years, although I'm skeptical of that."
Anthropic has just concluded a $30 billion Series G fundraising round, the company announced on Thursday. The company's new valuation is $380 billion, a huge jump from its previous Series F valuation of $183 billion. Some details of the round were reported earlier this week by Bloomberg.
Claude - or "Claudius," as its vending persona was known, but we'll stick to the former for the sake of clarity - had pretty much free rein to accomplish its goal. It was allowed to research products, set prices, and even contact outside distributors, with a team of humans at the AI safety firm Andon Labs handling physical tasks like restocking. Meanwhile, it also fielded requests from employees in a Slack channel, who asked for everything from chocolate drinks to the street drug methamphetamine to broadswords.
Anthropic is the latest AI company promising to limit the impact its data centers have on nearby residents' electricity bills. The company said it would pay higher monthly electricity charges in order to cover 100 percent of the upgrades needed to connect its data centers to power grids. "This includes the shares of these costs that would otherwise be passed onto consumers," the announcement says. Anthropic didn't provide details today about any agreements it has inked with energy companies in order to accomplish these goals.