AI industry's size obsession is killing ROI, engineer argues
Briefly

Enterprise CIOs adopting large AI models face compounding errors and spiraling costs. Companies like OpenAI and Google promote ever-larger models as superior, yet smaller models can deliver better results through greater reliability. AI engineer Utkarsh Kanwat argues that error compounding makes large-scale autonomous workflows unsustainable. Analysts agree that enterprises are often swayed by the promises of large model makers, even though more narrowly scoped strategies tend to deliver better results in practice. Prioritizing smaller, better-scoped AI solutions may therefore be more effective.
Here's the uncomfortable truth that every AI agent company is dancing around: error compounding makes autonomous multi-step workflows mathematically impossible at production scale. Let's do the math. If each step in an agent workflow has 95 percent reliability, which is optimistic for current LLMs, then five steps yield a 77 percent success rate, ten steps a 59 percent success rate, and 20 steps a 36 percent success rate.
Production systems need 99.9%+ reliability. Even if you magically achieve 99% per-step reliability (which no one has), you still only get 82% success over 20 steps. This isn't a prompt engineering problem. This isn't a model capability problem. This is mathematical reality.
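As a quick illustration of the compounding arithmetic behind those figures, the sketch below computes the end-to-end success rate as p^n for a chain of n steps, assuming (as the excerpt implicitly does) that each step succeeds independently with probability p. The step counts and reliabilities are the ones quoted above; the function name is our own.

```python
# Sketch: compounded success rate of an n-step agent workflow,
# assuming each step succeeds independently with probability p.

def workflow_success_rate(per_step_reliability: float, steps: int) -> float:
    """Probability that all `steps` independent steps succeed."""
    return per_step_reliability ** steps

for p in (0.95, 0.99):
    for n in (5, 10, 20):
        rate = workflow_success_rate(p, n)
        print(f"per-step reliability {p:.0%}, {n:>2} steps -> {rate:.1%} end-to-end")

# Expected output (rounded):
#   95% per step:  5 steps -> 77.4%, 10 steps -> 59.9%, 20 steps -> 35.8%
#   99% per step:  5 steps -> 95.1%, 10 steps -> 90.4%, 20 steps -> 81.8%
```

The same arithmetic shows why even 99 percent per-step reliability falls well short of the 99.9 percent-plus that production systems typically demand once workflows run to 20 steps.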
Read at The Register