Scaling laws

Reid Hoffman on AI, Consciousness, and the Future of Humanity

Reid Hoffman explores the future of AI, moving beyond obvious productivity applications to tackle grand challenges in science and industry. He discusses the current limitations of LLMs in reasoning, the distinction between augmenting and replacing human experts, the philosophical questions of consciousness, and the enduring power of human connection in the age of AI.

Building an AI Physicist: ChatGPT Co-Creator’s Next Venture

Former researchers from OpenAI and Google DeepMind, Liam Fedus and Ekin Dogus Cubuk, discuss their new venture, Periodic Labs. They aim to create an "AI physicist" by integrating large language models with real-world, iterative experiments, moving beyond simulation to solve fundamental challenges in physics and chemistry, starting with high-temperature superconductivity.

How To Train An LLM with Anthropic's Head of Pretraining

Anthropic's Head of Pretraining, Nick Joseph, details the immense engineering and infrastructure challenges behind training frontier models like Claude. He covers the evolution from early custom training frameworks to debugging hardware at massive scale, balancing pre-training with RL, and the strategic importance of data quality and team composition.

The Moonshot Podcast Deep Dive: Jeff Dean on Google Brain’s Early Days

Google DeepMind’s Chief Scientist Jeff Dean discusses the origins of his work on scaling neural networks, the founding of the Google Brain team, the technical breakthroughs that enabled training massive models, the development of TensorFlow and TPUs, and his perspective on the evolution and future of artificial intelligence.

Anthropic Co-founder: Building Claude Code, Lessons From GPT-3 & LLM System Design

Tom Brown, co-founder of Anthropic, shares his journey from a YC founder to a key figure behind AI's scaling breakthroughs. He discusses the discovery of scaling laws that underpinned GPT-3, the mission-driven founding of Anthropic, the surprising success of Claude for coding, and his perspective on what he calls "humanity's largest infrastructure buildout ever."

Scaling and the Road to Human-Level AI | Anthropic Co-founder Jared Kaplan

Jared Kaplan, co-founder of Anthropic, explains how the discovery of predictable, physics-like scaling laws in AI training provides a clear roadmap for progress. He details the two main phases of model training (pre-training and RL), discusses how scaling compute predictably unlocks longer-horizon task capabilities, and outlines the remaining challenges—memory, nuanced oversight, and organizational knowledge—on the path to human-level AI.