Feature

Why humans are AI's biggest bottleneck (and what's coming in 2026) | Alexander Embiricos (OpenAI)

Alexander Embiricos, product lead for OpenAI's Codex, shares the vision of AI as a software engineering teammate, not just a tool. He explains how a strategic shift to a local, interactive experience unlocked 20x growth, details how the Sora Android app was built in 28 days, and argues that the real bottleneck to AGI-level productivity is now human review speed, not model capability.

The Mathematical Foundations of Intelligence [Professor Yi Ma]

Professor Yi Ma challenges our understanding of intelligence, proposing a unified mathematical theory built on two principles: parsimony and self-consistency. He argues that today's large models merely memorize statistical patterns in already-compressed human knowledge (such as text) rather than achieving true understanding. The framework recasts deep learning as compression and denoising, lets Transformer-like architectures such as CRATE be derived from first principles, and points toward a more interpretable, white-box approach to AI.
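
For context, here is a minimal sketch of the objective this line of work optimizes (the sparse rate reduction from Ma's rate-reduction papers; the notation is a paraphrase, not necessarily the talk's exact formulation):

```latex
% Coding rate of n representations Z = [z_1, ..., z_n] in R^{d x n},
% at quantization scale epsilon: a lossy-compression measure of volume.
\[
R(Z) = \tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)
\]
% Sparse rate reduction: expand the overall code (R), compress within
% the K learned groups (the R^c term), and keep the representation sparse.
\[
\max_{Z}\;\; R(Z) \;-\; R^{c}\big(Z;\,\Pi_{[K]}\big) \;-\; \lambda\,\lVert Z\rVert_{1}
\]
```

In the CRATE construction, each layer is an alternating step on this objective: the attention-like block compresses against the learned group subspaces, and the MLP-like block sparsifies the code.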

The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, Chief AGI Scientist at Google DeepMind, outlines his framework for AGI levels, predicts a 50% chance of minimal AGI by 2028, and discusses the profound societal and economic transformations that will follow.

Efficient Reinforcement Learning – Rhythm Garg & Linden Li, Applied Compute

At Applied Compute, efficient reinforcement learning is critical for delivering business value. This talk traces the move from inefficient synchronous RL to a high-throughput asynchronous 'Pipeline RL' system. The core challenge is managing staleness, a side effect of in-flight weight updates that can destabilize training. The speakers detail a first-principles systems model, based on the Roofline model, that they use to simulate GPU allocation and find the optimal split between sampling and training, balancing throughput against algorithmic stability and achieving significant speedups.
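
As a rough illustration of the kind of first-principles model described (a toy two-stage pipeline; the rates, the staleness proxy, and the function below are placeholder assumptions, not the speakers' actual numbers or code):

```python
# Toy throughput model for an asynchronous sampling/training pipeline:
# pick how many GPUs sample vs. train so neither stage starves and the
# backlog of in-flight weight versions (staleness) stays bounded.

def pipeline_throughput(n_gpus: int, samplers: int,
                        sample_rate: float = 2e4,  # tokens/s per sampler GPU (assumed)
                        train_rate: float = 5e4,   # tokens/s per trainer GPU (assumed)
                        max_staleness: float = 4.0) -> float:
    trainers = n_gpus - samplers
    if samplers <= 0 or trainers <= 0:
        return 0.0
    produce = samplers * sample_rate   # rollout tokens generated per second
    consume = trainers * train_rate    # tokens the optimizer can absorb per second
    # Crude staleness proxy: how far sampling outruns training. Reject
    # splits whose backlog would exceed the allowed weight-version lag.
    if produce / consume > max_staleness:
        return 0.0
    # Steady-state throughput of an async pipeline is its slower stage.
    return min(produce, consume)

# Sweep all splits of a 64-GPU pod and keep the best one.
best = max(range(1, 64), key=lambda s: pipeline_throughput(64, s))
print(f"best sampler count on 64 GPUs: {best}")
```

The same sweep extends naturally to a Roofline-style model in which the per-GPU rates are themselves min(peak compute, bandwidth × arithmetic intensity) rather than constants.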

Tensor Logic "Unifies" AI Paradigms [Pedro Domingos]

Pedro Domingos introduces Tensor Logic, a new programming language designed to be the fundamental language for AI. It unifies the two major paradigms: the learning capabilities of deep learning (neural networks) and the transparent, verifiable reasoning of symbolic AI (logic programming), aiming to solve critical issues like hallucination and the opacity of current models.
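
To make the unification concrete, here is a toy rendering of the central idea in NumPy rather than Tensor Logic itself (the relation names and encoding are hypothetical):

```python
# A Datalog-style rule written as a tensor equation:
#   Grandparent(x, z) <- Parent(x, y), Parent(y, z)
# Joining on the shared variable y and projecting it out is exactly an
# einsum over Boolean relation tensors, followed by a step function.
import numpy as np

entities = ["ann", "bob", "cam"]
parent = np.zeros((3, 3), dtype=np.int8)
parent[0, 1] = 1  # Parent(ann, bob)
parent[1, 2] = 1  # Parent(bob, cam)

grandparent = (np.einsum("xy,yz->xz", parent, parent) > 0).astype(np.int8)
assert grandparent[0, 2] == 1  # Grandparent(ann, cam)
```

Keeping the step function exact gives sound symbolic reasoning; relaxing it to a smooth nonlinearity recovers learnable, neural-network-like behavior, which is roughly the sense in which the language spans both paradigms.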