AI safety

Why 70% of Companies Are FAILING at AI Safety (Shocking Survey Data): 2025 AI Governance Survey

Ben Lorica and David Talby of 'The Data Exchange' podcast analyze the 2025 AI Governance Survey, revealing a significant gap between AI adoption and mature risk management. While 30% of organizations have models in production, many lack robust governance frameworks, incident response plans, and comprehensive monitoring, often prioritizing speed-to-market over safety and compliance.

Threat Intelligence: How Anthropic stops AI cybercrime

Anthropic's Threat Intelligence team discusses their new report on how AI models are being used in sophisticated cybercrime operations. They cover the concept of "vibe hacking," a large-scale employment scam run by North Korea, and Anthropic’s multi-layered strategy to detect and counteract these threats.

Gen AI pilots fail, GPT-5's hidden prompt revealed, reasoning model flaws and Claude closing chats

A deep dive into why most enterprise GenAI pilots are failing, the debate over hidden system prompts in models like GPT-5, new research questioning the reliability of "chain of thought" reasoning, and the controversy over Anthropic's "AI welfare" justification for ending conversations.

Anthropic Co-founder: Building Claude Code, Lessons From GPT-3 & LLM System Design

Tom Brown, co-founder of Anthropic, shares his journey from a YC founder to a key figure behind AI's scaling breakthroughs. He discusses the discovery of scaling laws that underpinned GPT-3, the mission-driven founding of Anthropic, the surprising success of Claude for coding, and his perspective on what he calls "humanity's largest infrastructure buildout ever."

Interpretability: Understanding how AI models think

Members of Anthropic's interpretability team discuss their research into the inner workings of large language models. They explore the analogy of studying AI as a biological system, the surprising discovery of internal "features" or concepts, and why this research is critical for understanding model behavior like hallucinations, sycophancy, and long-term planning, ultimately aiming to ensure AI safety.

The Current Reality of American AI Policy: From ‘Pause AI’ to ‘Build’

a16z's Martin Casado and Anjney Midha detail the dramatic shift in U.S. AI policy from a "pause AI" stance, fueled by doomerism and flawed analogies, to a pro-innovation "win the race" strategy. They discuss how China's progress shattered illusions of a U.S. lead, the strategic business case for open-source AI, and the pragmatic promise of the new AI Action Plan.