Posts

Why humans are AI's biggest bottleneck (and what's coming in 2026) | Alexander Embiricos (OpenAI)

Alexander Embiricos, product lead for OpenAI's Codex, shares the vision of AI as a software engineering teammate, not just a tool. He explains how a strategic shift to a local, interactive experience unlocked 20x growth, details how the Sora Android app was built in 28 days, and argues that the real bottleneck to AGI-level productivity is now human review speed, not model capability.

The Mathematical Foundations of Intelligence [Professor Yi Ma]

Professor Yi Ma presents a unified mathematical theory of intelligence built on two principles: parsimony and self-consistency. He argues that current large models, LLMs in particular, are sophisticated memorization systems that compress statistical patterns in already-compressed human knowledge (such as text) rather than achieving true abstraction or understanding. His framework recasts deep learning as a process of compression and denoising centered on maximizing the coding rate reduction of data; from this first principle he derives Transformer-like architectures such as CRATE, explains the effectiveness of gradient descent through benign non-convex landscapes, and charts a path toward a more interpretable, white-box approach to AI.

The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, Chief AGI Scientist at Google DeepMind, outlines his framework for AGI levels, predicting a 50% chance of 'minimal AGI' by 2028 and 'full AGI' within a decade. He details a path to more reliable systems, introduces 'System 2 Safety' for building ethical AI, and issues an urgent call for society to prepare for the massive economic and structural transformations that advanced AI will bring.