Introducing Claude for Life Sciences

Anthropic's Jonah Cool and Eric Kauderer-Abrams outline their vision for making Claude an indispensable AI research assistant for scientists. They discuss a multi-faceted strategy that includes enhancing model capabilities for long-horizon tasks, building a rich ecosystem through partnerships with companies like Benchling and 10x Genomics, and applying Claude across the entire R&D lifecycle—from bioinformatics analysis to navigating regulatory submissions.

Building with MCP and the Claude API

A discussion with Anthropic engineers Alex Albert, John Welsh, and Michael Cohen about the Model Context Protocol (MCP). They cover its origins as an open standard, best practices for tool design and prompt engineering, and the future of the ecosystem where high-quality MCP servers will become a key competitive advantage.

Building the future of agents with Claude

Experts from Anthropic discuss the evolution of the Claude Developer Platform, the philosophy of "unhobbling" models with tools rather than restrictive scaffolding, and the future of building sophisticated, autonomous AI agents with features like the Claude Agent SDK, advanced context management, and persistent memory.

How To Train An LLM with Anthropic's Head of Pretraining

Anthropic's Head of Pretraining, Nick Joseph, details the immense engineering and infrastructure challenges behind training frontier models like Claude. He covers the evolution from early-stage custom frameworks to debugging hardware at massive scale, balancing pretraining with RL, and the strategic importance of data quality and team composition.

Anthropic Co-founder: Building Claude Code, Lessons From GPT-3 & LLM System Design

Tom Brown, co-founder of Anthropic, shares his journey from a YC founder to a key figure behind AI's scaling breakthroughs. He discusses the discovery of scaling laws that underpinned GPT-3, the mission-driven founding of Anthropic, the surprising success of Claude for coding, and his perspective on what he calls "humanity's largest infrastructure buildout ever."

Interpretability: Understanding how AI models think

Members of Anthropic's interpretability team discuss their research into the inner workings of large language models. They explore the analogy of studying AI as a biological system, the surprising discovery of internal "features" or concepts, and why this research is critical for understanding model behavior like hallucinations, sycophancy, and long-term planning, ultimately aiming to ensure AI safety.