Large language models

Introducing GPT-5

OpenAI introduces GPT-5, a significant upgrade focused on expert-level reasoning, agentic capabilities, and real-world utility, particularly for developers and enterprises. The model delivers a new reasoning paradigm, "software on demand" capabilities, and state-of-the-art performance on coding, reasoning, and long-context benchmarks. The launch also includes major updates to the ChatGPT application and a powerful new API for developers.

Claude for Financial Services Keynote

Anthropic executives and financial industry leaders from S&P Global, Deloitte, D. E. Shaw, HG Capital, New York Life, and the Norwegian Sovereign Wealth Fund discuss the future of AI in finance and announce Claude for Financial Services, a unified intelligence layer designed to transform professional workflows.

No Priors Ep. 125 | With Senior White House Policy Advisor on AI Sriram Krishnan

Sriram Krishnan, Senior White House Policy Advisor on AI, outlines America's AI Action Plan, a strategy designed to ensure U.S. dominance in artificial intelligence. He discusses the three core pillars of the plan—infrastructure, innovation, and global standards—while also exploring the geopolitical race with China, the critical role of open-source models, and the need for America to own the full AI stack, from GPUs to applications.

Scaling and the Road to Human-Level AI | Anthropic Co-founder Jared Kaplan

Jared Kaplan, co-founder of Anthropic, explains how the discovery of predictable, physics-like scaling laws in AI training provides a clear roadmap for progress. He details the two main phases of model training (pre-training and RL), discusses how scaling compute predictably unlocks longer-horizon task capabilities, and outlines the remaining challenges—memory, nuanced oversight, and organizational knowledge—on the path to human-level AI.
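As a rough sketch of the form these laws take (the notation here is illustrative, drawn from the original Kaplan et al. scaling-law papers rather than from the talk itself): pre-training loss falls as a power law in compute,

L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C},

where L is the cross-entropy loss, C is training compute, and C_c and \alpha_C are empirically fitted constants; analogous power laws hold for parameter count and dataset size, which is what makes the capability gains from added compute predictable rather than a matter of guesswork.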

Building Better Language Models Through Global Understanding

Dr. Marzieh Fadaee discusses the critical challenges in multilingual AI, including data imbalances and flawed evaluation methodologies. She argues that tackling these difficult multilingual problems is not only essential for global accessibility but also a catalyst for fundamental AI innovation, much like how machine translation research led to the Transformer architecture. The talk introduces new, more culturally aware evaluation benchmarks like Global MMLU and INCLUDE as a path toward building more robust and globally representative language models.

Intelligence = Doing More with Less (David Krakauer)

Prof. David Krakauer argues that we are confusing knowledge with intelligence. He critiques the AI community's superficial definition of "emergence" in LLMs, contrasting it with the true meaning from complex systems: a fundamental change in internal organization that allows for a simpler, more powerful macroscopic description. He introduces "exbodiment"—outsourcing cognition to external tools—as a key part of collective intelligence, but warns that our evolutionary drive to conserve energy will lead us to outsource our thinking to AI, causing a "diminution and dilution" of human thought.