AI Safety

Dwarkesh and Noah Smith on AGI and the Economy

Dwarkesh Patel and Noah Smith debate the definition of AGI, its economic implications, and timelines. They contrast an economic definition (automating white-collar work) with a cognitive one, exploring why current models, despite their reasoning abilities, generate little economic value: they fail at "continual learning". The discussion covers the potential for explosive economic growth versus a collapse in consumer demand, the substitution versus complementarity of human labor, and the geopolitical shift from population size to inference capacity as the basis of power.

Balaji Srinivasan: How AI Will Change Politics, War, and Money

Technologist Balaji Srinivasan joins a16z's Erik Torenberg and Martin Casado to discuss the limitations and societal impact of AI, framing the conversation around the concept of "Polytheistic AGI" (multiple, culturally specific AIs) versus a singular, god-like intelligence. They explore the practical system-level constraints on AI, its surprising evolution, the critical role of cryptography in grounding AI in reality, and the future of work and security in an AI-driven world.

Anthropic co-founder: AGI predictions, leaving OpenAI, what keeps him up at night | Ben Mann

Ben Mann, co-founder of Anthropic, discusses the accelerating progress in AI, forecasting superintelligence by 2028. He details Anthropic's safety-first mission, the "Economic Turing Test" for AGI, the mechanisms of Constitutional AI, and why focusing on alignment created Claude's unique personality.
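For readers unfamiliar with the mechanism Mann describes, here is a minimal sketch of the critique-and-revision loop at the core of Constitutional AI. The `generate` function is a hypothetical stand-in for an LLM call, and the principles shown are paraphrased examples, not Anthropic's actual constitution.

```python
# Sketch of Constitutional AI's self-critique loop: the model drafts a
# response, then critiques and revises it against written principles,
# without human feedback labels. All names here are illustrative.

CONSTITUTION = [
    "Choose the response that is least harmful or deceptive.",
    "Choose the response that most respects user autonomy and privacy.",
]

def generate(prompt: str) -> str:
    raise NotImplementedError("hypothetical stand-in for an LLM API call")

def constitutional_revision(user_prompt: str, rounds: int = 2) -> str:
    """Draft a response, then repeatedly critique and revise it against
    each constitutional principle."""
    response = generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = generate(
                f"Critique this response against the principle "
                f"'{principle}':\n{response}"
            )
            response = generate(
                f"Revise the response to address the critique.\n"
                f"Critique: {critique}\nOriginal response: {response}"
            )
    # In the published method, revised outputs like this one become
    # training data for further fine-tuning (RLAIF).
    return response
```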

Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability

Eric Ho, founder of Goodfire, discusses the critical challenge of AI interpretability. He shares how his team is developing techniques to understand, audit, and edit neural networks at the feature level, including breakthrough results in resolving superposition with sparse autoencoders, successful model editing demonstrations, and real-world applications in genomics with Arc Institute's DNA foundation models. Ho argues that these white-box approaches are essential for building safe, reliable, and intentionally designed AI systems.
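As a rough illustration of the sparse-autoencoder approach Ho describes, the sketch below trains a wide, sparsely activating feature dictionary on a model's dense activations, the basic recipe for pulling superposed features apart. The dimensions, L1 coefficient, and loss form are illustrative assumptions, not Goodfire's implementation.

```python
# Minimal sparse autoencoder (SAE) sketch for interpretability: encode
# dense activations into an overcomplete, mostly inactive feature basis,
# then reconstruct. Sparsity pushes each input to be explained by a few
# features rather than a dense mix (i.e., it "resolves superposition").

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # dense -> overcomplete features
        self.decoder = nn.Linear(d_features, d_model)  # features -> reconstruction

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative codes
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff: float = 1e-3):
    """Reconstruction error plus an L1 penalty on feature activations;
    the coefficient trades reconstruction fidelity against sparsity."""
    mse = torch.mean((x - reconstruction) ** 2)
    sparsity = l1_coeff * features.abs().mean()
    return mse + sparsity
```

Once trained, individual columns of the decoder act as candidate "features" that can be inspected, audited, or edited, which is the feature-level access to the network that Ho argues white-box interpretability requires.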