AI Ethics

The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, Chief AGI Scientist at Google DeepMind, outlines his framework for AGI levels, predicting a 50% chance of 'minimal AGI' by 2028 and 'full AGI' within a decade. He details a path to more reliable systems and introduces 'System 2 Safety' as an approach to building ethical AI. Legg issues an urgent call for society to prepare for the massive economic and structural transformations that advanced AI will bring.

Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering

Emmett Shear, founder of Twitch and former interim CEO of OpenAI, presents a new paradigm for AI alignment called "organic alignment." He argues that the prevalent "steering and control" model is fundamentally flawed and could lead to disaster. Instead, Shear advocates for developing AI systems that learn to genuinely care about humans, treating alignment as a continuous process rather than a fixed state.

Reid Hoffman on AI, Consciousness, and the Future of Humanity

Reid Hoffman explores the future of AI, moving beyond obvious productivity applications to tackle grand challenges in science and industry. He discusses the current limitations of LLMs in reasoning, the distinction between augmenting and replacing human experts, the philosophical questions of consciousness, and the enduring power of human connection in the age of AI.

Evaluating the Cultural Relevance of AI Models and Products: Insights from the YUX Team

Drawing on their work fine-tuning an ASR model for Wolof and building a stereotype-detection dataset, researchers from YUX share a practical toolbox for evaluating the cultural relevance of AI models and products. The session covers methods for data collection, model benchmarking, and user testing, and introduces LOOKA, a platform for scalable human evaluation in the African context.

The Limits of AI: Generative AI, NLP, AGI, & What’s Next?

Tracing the evolution of AI, this talk breaks down the Data-Information-Knowledge-Wisdom hierarchy, revisits past predictions about AI's limits that have since been surpassed (such as reasoning and creativity), and examines current challenges including hallucinations, AGI, and sustainability. It concludes by framing a collaborative future in which humans define the 'what' and 'why' while AI executes the 'how'.