Terence Tao on why LLMs work, why they fail, and why humans aren't obsolete yet

The Fields Medalist offered a rare, technically grounded assessment of LLM capabilities — arguing that the math underlying these systems is surprisingly simple, that hallucinations are structurally inevitable, and that humans retain a durable edge in learning from limited examples.
