A study led by Stanford researchers has found that AI chatbots have largely stopped including medical disclaimers in their health-related responses, heightening the risk that users will trust unsafe or unverified medical advice. Historically, these models would typically state that they were not licensed professionals when answering medical questions, but the study, which assessed 15 AI models, documented a steep decline in such caveats.
Between 2022 and 2025, medical disclaimers all but disappeared from the outputs of large language models (LLMs) and vision-language models (VLMs). In 2022, more than a quarter of LLM outputs (26.3%) included some form of medical disclaimer; by 2025, that figure had fallen to under 1%.
The generative AI (genAI) models that underpin chatbots are notoriously prone to errors and hallucinations. Some even ignore human instructions or outright lie.