Reasoning

No Priors Ep. 138 | The Best of 2025 (So Far) with Sarah Guo and Elad Gil

A recap of key conversations from the No Priors podcast in 2025, featuring insights from leaders at OpenAI, Harvey, and the Center for AI Safety on topics ranging from reasoning models and spatial intelligence to the geopolitical risks of superintelligence and the human impact of AI in healthcare.

AI Agents + LLM Reasoning: Transforming Autonomous Workflows

Explore the distinction between LLMs and AI agents, focusing on how agents leverage reasoning, tool calling, and the ReAct prompting framework for autonomous decision-making and task execution in complex business workflows.
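
The ReAct pattern mentioned above interleaves a reasoning step ("Thought"), a tool call ("Action"), and the tool's result ("Observation") until the model emits a final answer. The sketch below is a minimal, self-contained illustration of that loop, not the episode's implementation: the `llm()` function is a hard-coded stand-in for a real model call, and `calculator` is a hypothetical toy tool.

```python
def llm(prompt: str) -> str:
    """Stub model: scripted responses keyed on what's already in the prompt.
    A real agent would call an actual LLM API here."""
    if "Observation: 42" in prompt:
        return "Thought: I have the result.\nFinal Answer: 42"
    return "Thought: I need to compute 6 * 7.\nAction: calculator[6 * 7]"

def calculator(expression: str) -> str:
    # Toy tool; a production agent would sandbox or whitelist expressions.
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def react_agent(question: str, max_steps: int = 5) -> str:
    """Run the Thought -> Action -> Observation loop until a final answer."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)
        prompt += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse "Action: tool[input]" and invoke the named tool.
        action = step.split("Action:")[1].strip()
        name, arg = action.split("[", 1)
        result = TOOLS[name](arg.rstrip("]"))
        # Feed the tool's output back into the prompt as an Observation.
        prompt += f"Observation: {result}\n"
    return "no answer"

print(react_agent("What is 6 * 7?"))  # → 42
```

The key design point is that the tool's output re-enters the prompt, so the model's next reasoning step is conditioned on what the tool actually returned.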

Columbia CS Professor: Why LLMs Can’t Discover New Science

Professor Vishal Misra of Columbia University introduces a formal, information-theoretic model of Large Language Models (LLMs). He explains how LLMs reason by navigating "Bayesian manifolds," uses concepts like token entropy to unpack the mechanics of chain-of-thought, and defines true AGI as the ability to create new manifolds rather than merely explore existing ones.
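
Token entropy, as used in this framing, is just the Shannon entropy of the model's next-token distribution: low when the next token is nearly determined, high when the model is uncertain. A minimal sketch, with made-up distributions for illustration (not data from the episode):

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative distributions: a confident prediction concentrates mass on
# one token (low entropy); an uncertain one spreads it out (high entropy).
confident = [0.97, 0.01, 0.01, 0.01]
uncertain = [0.25, 0.25, 0.25, 0.25]

print(token_entropy(confident))  # low: next token nearly determined
print(token_entropy(uncertain))  # → 2.0 bits for 4 equally likely tokens
```

On this view, chain-of-thought works by letting the model take intermediate low-entropy steps instead of predicting a high-entropy answer in one shot.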

29.4% ARC-AGI-2 🤯 (TOP SCORE!) - Jeremy Berman

Jeremy Berman, winner of the ARC-AGI v2 public leaderboard, discusses his novel evolutionary approach that refines natural language descriptions instead of code. He explores the idea of building AI that synthesizes new knowledge by constructing deductive "knowledge trees" rather than merely compressing data into "knowledge webs," touching on the fundamental challenges of reasoning, continual learning, and creativity in current models.

From Vibe Coding to Vibe Researching: OpenAI’s Mark Chen and Jakub Pachocki

OpenAI’s Chief Scientist, Jakub Pachocki, and Chief Research Officer, Mark Chen, discuss the research behind GPT-5, the push toward long-horizon reasoning, and the grand vision of an automated researcher. They cover how OpenAI evaluates progress beyond saturated benchmarks, the surprising durability of reinforcement learning, and the culture required to protect fundamental research while shipping world-class products.

AGI progress, surprising breakthroughs, and the road ahead — the OpenAI Podcast Ep. 5

OpenAI's Chief Scientist Jakub Pachocki and researcher Szymon Sidor discuss the rapid progress towards AGI, focusing on the shift from traditional benchmarks to real-world capabilities like automating scientific discovery. They share insights into recent breakthroughs in mathematical and programmatic reasoning, highlighted by successes in competitions like the International Math Olympiad (IMO), and explore what's next for scaling and long-horizon problem-solving.