a16z|Oct 13, 2025 14:08
Columbia CS Professor: Why LLMs Can't Discover New Science

From GPT-1 to GPT-5, LLMs have made tremendous progress in modeling human language. But can they go beyond that to make new discoveries and move the needle on scientific progress? Distinguished Columbia computer science professor Vishal Misra argues they cannot. LLMs compress an extremely complex world into Bayesian manifolds: confidence is high on the manifold, but LLMs hallucinate when reasoning outside their training data. A true AGI wouldn't just reason across larger and larger manifolds; it would create new ones entirely.

Chapters:
0:00 Intro
0:32 LLMs and humans reason through manifolds
4:15 Token prediction, entropy & confidence
10:20 Vishal's background
14:10 Inventing RAG
17:30 The question of progress plateauing
21:00 The Matrix Model
28:10 Why LLMs can't recursively self-improve
34:02 Defining AGI
38:25 Future architectures
42:00 Modeling vs prompt engineering
47:20 What would prove AGI has arrived?
50:01 Closing thoughts

@vishalmisra @martin_casado @eriktorenberg (a16z)
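The "token prediction, entropy & confidence" idea from the episode can be illustrated with a minimal sketch: the Shannon entropy of a model's next-token distribution is low when probability mass concentrates on one token (the "on-manifold" case) and high when it is spread out (the uncertain, "off-manifold" case). The `next_token_entropy` helper and the example distributions below are illustrative assumptions, not code from the episode.

```python
import math

def next_token_entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident prediction: mass concentrated on a single token.
confident = [0.97, 0.01, 0.01, 0.01]

# An uncertain prediction: mass spread evenly across four tokens.
uncertain = [0.25, 0.25, 0.25, 0.25]

print(next_token_entropy(confident))  # low entropy -> high confidence
print(next_token_entropy(uncertain))  # 2.0 bits, the maximum over 4 tokens
```

In practice one would compute this over a model's full softmax output at each decoding step; sustained high entropy is one (rough) signal that the model has drifted outside the region its training data supports.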