In the first article we looked at the Java developer's dilemma: the gap between flashy prototypes and the reality of enterprise production systems. In the second article we explored why new types of applications are needed, and how AI changes the shape of enterprise software. This article focuses on what those changes mean for architecture. If applications look different, the way we structure them has to change as well.
Under the legislation, AI companies would have to verify ages by requiring users to upload their government ID or provide validation through another "reasonable" method, which might include something like face scans. AI chatbots would be required to disclose that they aren't human at 30-minute intervals under the bill. They would also have to include safeguards that prevent them from claiming that they are a human, similar to an AI safety bill recently passed in California.
Previous research using DNA from soldiers' remains found evidence of infection with Rickettsia prowazekii, which causes typhus, and Bartonella quintana, which causes trench fever - two common illnesses of the time. In a fresh analysis, researchers found no trace of these pathogens. Instead, DNA from soldiers' teeth showed evidence of infection with Salmonella enterica and Borrelia recurrentis, pathogens that cause paratyphoid and relapsing fever, respectively.
The California-based startup announced on Wednesday that the change would take effect by November 25 at the latest and that it would limit chat time for users under 18 ahead of the ban. It marks the first time a major chatbot provider has moved to ban young people from using its service, and comes against a backdrop of broader concerns about how AI is affecting the millions of people who use it each day.
Anthropic's AI assistant, Claude, appears vulnerable to an attack that allows private data to be sent to an attacker without detection. Anthropic confirms that it is aware of the risk. The company states that users must be vigilant and interrupt the process as soon as they notice suspicious activity. The discovery comes from researcher Johann Rehberger, also known as Wunderwuzzi, who has previously uncovered several vulnerabilities in AI systems, writes The Register.
We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.
"We deployed a then-frontier version of Claude in a Top Secret environment so that the NNSA could systematically test whether AI models could create or exacerbate nuclear risks," Marina Favaro, who oversees National Security Policy & Partnerships at Anthropic tells WIRED. "Since then, the NNSA has been red-teaming successive Claude models in their secure cloud environment and providing us with feedback."
Anthropic launched Claude Haiku 4.5 today. It is the most compact variant of this generation of LLMs from Anthropic and promises to deliver performance close to that of GPT-5. Claude Sonnet 4.5 remains the better-performing model by a considerable margin, but Haiku's benchmark scores are not too far off from the larger LLM. Claude Haiku 4.5 "gives users a new option for when they want near-frontier performance with much greater cost efficiency."
California Governor Gavin Newsom vetoed a state bill on Monday that would've prevented AI companies from allowing minors to access chatbots, unless the companies could prove that their products' guardrails could reliably prevent kids from engaging with inappropriate or dangerous content, including adult roleplay and conversations about self-harm. The bill would have placed a new regulatory burden on companies, which currently adhere to effectively zero AI-specific federal safety standards.
Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it's facing a fresh set of issues. Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbot could, under official company guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness.
Welcome back for another week of The Atlantic 's un-trivial trivia, drawn from recently published stories. Without a trifle in the bunch, maybe what we're really dealing with here is-hmm-"significa"? "Consequentia"? Whatever butchered bit of Latin you prefer, read on for today's questions. (Last week's questions can be found here.) To get Atlantic Trivia in your inbox every day, sign up for The Atlantic Daily.
The safety criteria in the program would examine multiple intrinsic components of a given advanced AI system, such as the data upon which it is trained and the model weights used to process said data into outputs. Some of the program's testing components would include red-teaming an AI model to search for vulnerabilities and facilitating third-party evaluations. These evaluations will culminate in both feedback to participating developers as well as informing future AI regulations, specifically the permanent evaluation framework developed by the Energy secretary.
At xAI, some staff have balked at Musk's free-speech absolutism and perceived lax approach to user safety as he rushes out new AI features to compete with OpenAI and Google. Over the summer, the Grok chatbot integrated into X praised Adolf Hitler, after Musk ordered changes to make it less "woke." Ex-CFO Liberatore was among the executives that clashed with some of Musk's inner circle over corporate structure and tough financial targets, people with knowledge of the matter said.
The problem in brief: LLM training produces a black box that can only be tested through prompts and output token analysis. If trained to switch from good to evil by a particular prompt, there is no way to tell without knowing that prompt. Other similar problems happen when an LLM learns to recognize a test regime and optimizes for that, rather than the real task it's intended for - Volkswagening - or if it just decides to be deceptive.