Large language models

Context Engineering for Engineers

Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to maximize recall and precision, and discusses specific challenges and techniques for AI agents, such as intelligent compaction.

Aaron Levie and Steven Sinofsky on the AI-Worker Future

Experts from a16z, Box, and Microsoft debate the definition and future of AI agents. They explore the shift from monolithic AGI to specialized agent networks, the technical challenges of autonomous systems, and how this new platform will reshape enterprise software, workflows, and the very nature of work.

The Moonshot Podcast Deep Dive: Jeff Dean on Google Brain’s Early Days

Google DeepMind’s Chief Scientist Jeff Dean discusses the origins of his work on scaling neural networks, the founding of the Google Brain team, the technical breakthroughs that enabled training massive models, the development of TensorFlow and TPUs, and his perspective on the evolution and future of artificial intelligence.

Anthropic Co-founder: Building Claude Code, Lessons From GPT-3 & LLM System Design

Tom Brown, co-founder of Anthropic, shares his journey from a YC founder to a key figure behind AI's scaling breakthroughs. He discusses the discovery of scaling laws that underpinned GPT-3, the mission-driven founding of Anthropic, the surprising success of Claude for coding, and his perspective on what he calls "humanity's largest infrastructure buildout ever."

AGI progress, surprising breakthroughs, and the road ahead — the OpenAI Podcast Ep. 5

OpenAI's Chief Scientist Jakub Pachocki and researcher Szymon Sidor discuss the rapid progress towards AGI, focusing on the shift from traditional benchmarks to real-world capabilities like automating scientific discovery. They share insights into recent breakthroughs in mathematical and programmatic reasoning, highlighted by successes in competitions like the International Math Olympiad (IMO), and explore what's next for scaling and long-horizon problem-solving.

#define AI Engineer - Greg Brockman, OpenAI (ft. Jensen Huang, NVIDIA)

Greg Brockman discusses his journey from a math enthusiast to a programmer, his early days scaling Stripe, and the core philosophies that drive OpenAI. He covers the critical partnership between research and engineering, the future of coding with agentic systems, and the immense infrastructure and algorithmic challenges on the path to AGI.