AI Safety

Sam Altman on Sora, Energy, and Building an AI Empire

Sam Altman discusses OpenAI's strategy, the path to AGI through world models like Sora, the importance of societal co-evolution with AI, and the massive infrastructure and energy requirements for future models. He covers topics from AI safety and regulation to monetization and the future of scientific discovery driven by AI.

Ex-DeepMind: How To Actually Protect Your Data From AI

Dr. Ilia Shumailov, a former DeepMind AI security researcher, explains why traditional security fails for AI agents. He details the unique threat model of agents and the dangers of supply-chain attacks and architectural backdoors, and proposes a system-level solution called CAML that enforces security policies by design, separating model reasoning from data execution.
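
The separation named in this summary, keeping the model that plans tool calls away from untrusted data and enforcing policies outside the model, can be illustrated with a short sketch. This is a hypothetical illustration of that pattern, not CAML's actual design or API; every name in it (Policy, plan_with_privileged_llm, parse_with_quarantined_llm, TOOLS) is an assumption made for the example.

```python
# Hypothetical sketch of "separate reasoning from data execution".
# All names here are illustrative assumptions, not CAML's actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A security policy attached to a tool: decides whether a call is allowed."""
    allow: Callable[[str, dict], bool]

def plan_with_privileged_llm(task: str) -> list[dict]:
    # The privileged model sees only the trusted task description, never raw
    # untrusted content, and emits a fixed plan of tool calls (stubbed here).
    return [{"tool": "fetch_email", "args": {"folder": "inbox"}},
            {"tool": "send_reply", "args": {"to": "{sender}", "body": "{summary}"}}]

def parse_with_quarantined_llm(raw: str) -> dict:
    # The quarantined model extracts structured fields from untrusted data but
    # cannot issue tool calls, so injected instructions in `raw` have no effect.
    return {"sender": "alice@example.com", "summary": "Meeting moved to 3pm."}

TOOLS = {
    "fetch_email": (lambda args: "...raw untrusted email text...",
                    Policy(allow=lambda tool, args: True)),
    "send_reply": (lambda args: f"sent to {args['to']}",
                   Policy(allow=lambda tool, args: args["to"].endswith("@example.com"))),
}

def run(task: str) -> None:
    plan = plan_with_privileged_llm(task)          # reasoning over trusted input only
    extracted: dict = {}
    for step in plan:
        tool_fn, policy = TOOLS[step["tool"]]
        args = {k: v.format(**extracted) if isinstance(v, str) else v
                for k, v in step["args"].items()}
        if not policy.allow(step["tool"], args):   # policy enforced outside the model
            raise PermissionError(f"blocked: {step['tool']} {args}")
        result = tool_fn(args)
        if step["tool"] == "fetch_email":
            extracted.update(parse_with_quarantined_llm(result))

run("Reply to the latest email with a short summary.")
```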

How To Train An LLM with Anthropic's Head of Pretraining

Anthropic's Head of Pre-training, Nick Joseph, details the immense engineering and infrastructure challenges behind training frontier models like Claude. He covers the evolution from early-stage custom frameworks to debugging hardware at massive scale, balancing pre-training with RL, and the strategic importance of data quality and team composition.

NVIDIA’s USD 100bn investment and Google's AP2

The panel discusses NVIDIA's $100 billion investment in OpenAI, analyzing the trend towards vertically integrated AI 'tribes'. They also explore the rise of specialized open-source models like Tongyi DeepResearch, Google's new AP2 agent protocol for secure e-commerce, the ongoing debate on AI existential risk, and Apple's practical approach to wearable AI with the new real-time translation feature in AirPods.

Designing AI Agents for the Complex Realities of Healthcare

Dr. Sarah Gebauer presents a clinical framework for deploying AI agents in healthcare, drawing a powerful analogy between AI agents and medical residents. She outlines the critical risks, validation strategies, and post-deployment monitoring required to make agents useful, safe, and credible in high-stakes clinical environments.

Why 70% of Companies Are FAILING at AI Safety (Shocking Survey Data): 2025 AI Governance Survey

Ben Lorica and David Talby of 'The Data Exchange' podcast analyze the 2025 AI Governance Survey, revealing a significant gap between AI adoption and mature risk management. While 30% of organizations have models in production, many lack robust governance frameworks, incident response plans, and comprehensive monitoring, often prioritizing speed-to-market over safety and compliance.