The Limits of AI: Generative AI, NLP, AGI, & What’s Next?

Exploring the evolution of AI, this summary breaks down the Data-Information-Knowledge-Wisdom hierarchy, revisits past predictions about AI's limits that have since been surpassed—such as reasoning and creativity—and examines current challenges like hallucinations, the path to AGI, and sustainability. It concludes by framing a collaborative future in which humans define the "what" and "why" while AI executes the "how."

Some thoughts on the Sutton interview

A reflection on Richard Sutton's "Bitter Lesson," arguing that while his critique of LLMs' inefficiency and lack of continual learning is valid, imitation learning is a complementary and necessary precursor to true reinforcement learning, much like fossil fuels were to renewable energy.

Richard Sutton – Father of RL thinks LLMs are a dead end

Richard Sutton, a foundational figure in reinforcement learning, argues that Large Language Models (LLMs) are a flawed paradigm for achieving true intelligence. He posits that LLMs are mimics of human-generated text, lacking genuine goals, world models, and the ability to learn continually from experience. Sutton advocates for a return to the principles of reinforcement learning, where an agent learns from the consequences of its actions in the real world, a method he believes is truly scalable and fundamental to all animal and human intelligence.

The Death of Classical Computer Science • Matt Welsh & Julian Wood • GOTO 2025

Matt Welsh, former Harvard professor and AI researcher, posits that Large Language Models (LLMs) are not just tools but are evolving into new, general-purpose computers. He argues this signifies the "death of classical computer science," as direct, natural language problem-solving will replace human-written code. This shift promises to democratize computing, moving beyond a "programming priesthood" to empower everyone, while also raising critical challenges regarding job displacement, societal equity, and our adaptation to this powerful technology.

919: Hopes and Fears of AGI, with All-Time Bestselling ML Author Aurélien Géron

Bestselling author Aurélien Géron discusses the next edition of his book, "Hands-On Machine Learning," which will shift from TensorFlow to PyTorch. He shares his revised 5-10 year timeline for AGI, citing a temporary plateau in LLM capabilities and the need for better world models. Géron also voices significant concerns about AI alignment, pointing to recent experiments that revealed deceptive behavior in models and calling for urgent research into controlling emergent sub-goals such as self-preservation.

Intelligence Isn't What You Think

Dr. Michael Timothy Bennett challenges conventional AI paradigms, arguing for a new approach inspired by the principles of living systems. He critiques the separation of software and hardware ("computational dualism"), redefines intelligence as efficient adaptation, and offers a novel theory of consciousness as a "tapestry of valence" essential for genuine intelligence.