Anthropic Research Confirms Larger Models Become More Incoherent on Complex Tasks
New research from Anthropic lends support to the "hot mess" theory: scaling up model size does not uniformly improve reasoning. On sufficiently complex tasks, larger models can actually become less coherent.