AI Safety

Moltbot (Clawdbot): Open-source agents go mainstream

The panel discusses the rise of Moltbot, a community-driven open-source AI agent, and the debate it has sparked over vertical vs. horizontal integration and security. They analyze Anthropic CEO Dario Amodei's essay on AI's "adolescence," exploring the technology's growing pains, the pace mismatch between innovation and safety, and the need for broader societal engagement. The conversation also covers IBM's GRAMMY IQ, an AI-powered fan experience, and the strategic implications of Microsoft's Maia 200 chip, which signals a shift toward vertical integration in AI hardware aimed at challenging NVIDIA's dominance.

OpenAI Town Hall with Sam Altman

Sam Altman discusses the future of AI, covering the evolution of software engineering, the challenges for AI startups, the roadmap for model capabilities and costs, and the broader societal impacts on economics, security, and education.

If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck]

Dr. Jeff Beck explores the philosophical and technical definitions of agency, arguing that the distinction between an agent and an object lies in computational sophistication, particularly the capacity for planning and counterfactual reasoning. The conversation provides a deep dive into Energy-Based Models (EBMs), Yann LeCun's JEPA for learning in latent space, and a pragmatic approach to AI safety centered on inverse reinforcement learning rather than fears of rogue superintelligence.

How to Make AI Forget

Ben Luria, CEO of Hirundo, discusses the critical need for machine unlearning, framing it as a form of "AI neurosurgery" for enterprise AI. He explains how the technique directly modifies model weights to remove unwanted data and behaviors, addressing core risks that superficial solutions such as guardrails cannot.

Structured Dissent Patterns for Agentic Production Reliability

This talk introduces "structured dissent," a multi-agent orchestration pattern in which believer, skeptic, and neutral agents debate decisions to overcome the "confidently wrong" failure mode of single-agent LLM systems, improving reliability for high-stakes tasks such as cybersecurity analysis.

The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, Chief AGI Scientist at Google DeepMind, outlines his framework for AGI, predicting "minimal AGI" within years and "full AGI" within a decade. He details a path toward more reliable systems, introduces "System 2 Safety" for building ethical AI, and issues an urgent call for society to prepare for the massive economic and structural transformations that advanced AI will inevitably bring.