Anthropic Economic Index, Virtual Agent Economies, AlterEgo and How People Use ChatGPT

A discussion on a new report detailing how people use ChatGPT, the global AI adoption trends from Anthropic's Economic Index, the future of AI agent economies, and the practicality of emerging AI wearables like AlterEgo and Meta's smart glasses.

Google Antitrust, Anthropic's $183B leap and are we in the AI winter?

Experts discuss the Google antitrust verdict's impact on agentic AI, Anthropic's high valuation driven by its coding prowess, and whether the discourse around GPT-5 signals an "AI winter" or a necessary market reality check.

From Spikes to Stories: AI-Augmented Troubleshooting in the Network Wild // Shraddha Yeole

Shraddha Yeole from Cisco ThousandEyes explains how her team is transforming network observability by moving from complex dashboards to AI-augmented storytelling. The session details their use of an LLM-powered agent to interpret vast telemetry data, accelerate fault isolation, and improve mean time to resolution (MTTR), covering the technical architecture, advanced prompt engineering techniques, evaluation strategies, and key challenges.

Anthropic Co-founder: Building Claude Code, Lessons From GPT-3 & LLM System Design

Tom Brown, co-founder of Anthropic, shares his journey from a YC founder to a key figure behind AI's scaling breakthroughs. He discusses the discovery of scaling laws that underpinned GPT-3, the mission-driven founding of Anthropic, the surprising success of Claude for coding, and his perspective on what he calls "humanity's largest infrastructure buildout ever."

Interpretability: Understanding how AI models think

Members of Anthropic's interpretability team discuss their research into the inner workings of large language models. They explore the analogy of studying AI as a biological system, the surprising discovery of internal "features" or concepts, and why this research is critical for understanding model behavior like hallucinations, sycophancy, and long-term planning, ultimately aiming to ensure AI safety.

OpenAI dropped GPT-5, is AGI here?

In this analysis, Bryan Casey, Mihai Criveti, and Chris Hay dissect OpenAI's GPT-5 release, comparing its capabilities against Anthropic's Claude Opus 4.1. While GPT-5 introduces significant improvements in accessibility, agentic capabilities, and reliability, the consensus is that it does not yet dethrone Claude as the daily driver for developers, owing to key differences in user experience and workflow management.