Entropy

Columbia CS Professor: Why LLMs Can’t Discover New Science

Professor Vishal Misra of Columbia University introduces an information-theoretic formal model of how Large Language Models (LLMs) work. He describes LLM reasoning as navigation over "Bayesian manifolds", uses token entropy to explain the mechanics of chain-of-thought, and defines true AGI as the ability to create new manifolds rather than merely explore existing ones.
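
Token entropy here is just the Shannon entropy of the model's next-token probability distribution. As a rough illustration only (this is not code from the talk; the function and the example distributions are hypothetical), a high-entropy step is one where the model spreads probability across many plausible continuations, while a low-entropy step is sharply peaked on one token:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution.

    High entropy: the model is uncertain about the next token.
    Low entropy: the distribution is sharply peaked on a few tokens.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distributions over a tiny 4-token vocabulary.
confident = [0.97, 0.01, 0.01, 0.01]   # model is nearly certain
uncertain = [0.25, 0.25, 0.25, 0.25]   # model is maximally unsure

print(f"confident: {token_entropy(confident):.2f} bits")  # ~0.24 bits
print(f"uncertain: {token_entropy(uncertain):.2f} bits")  # 2.00 bits
```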