Neural networks

Rivian’s Roadmap to AI Architecture and Autonomy with Founder and CEO RJ Scaringe

Rivian CEO RJ Scaringe discusses the company's complete pivot from a rules-based "1.0" autonomy system to a vertically integrated, neural network-based architecture. He outlines the essential ingredients for success in autonomous driving, from custom inference chips to a robust data flywheel, and explains why a software-defined vehicle architecture is non-negotiable for survival. Scaringe also touches on the upcoming R2 model, the importance of market choice, and how superior proprietary data will be the key differentiator in the age of AI-driven vehicles.

Sparse Activation is the Future of AI (with Adrian Kosowski)

Adrian Kosowski of Pathway explains the team's research on sparse activation in AI, moving beyond the dense architectures of transformers. Their model, Baby Dragon Hatchling (BDH), mimics the brain's efficiency by activating only a small fraction of its artificial neurons at a time, enabling a more scalable and compositional approach to reasoning that isn't confined by the vector-space limitations of current models.

The Moonshot Podcast Deep Dive: Jeff Dean on Google Brain’s Early Days

Google DeepMind's Chief Scientist Jeff Dean discusses the origins of his work on scaling neural networks, the founding of the Google Brain team, the technical breakthroughs that enabled training massive models, the development of TensorFlow and TPUs, and his perspective on the evolution and future of artificial intelligence.

The Moonshot Podcast Deep Dive: Andrew Ng on Deep Learning and Google Brain

Andrew Ng, founder of Google Brain and DeepLearning.AI, discusses the history of neural networks and the foundational ideas that led to modern AI breakthroughs. He covers the controversial early bets on scale and general-purpose algorithms, the technical innovations behind Transformers, and the future democratizing effect of artificial intelligence.

Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability

Eric Ho, founder of Goodfire, discusses the critical challenge of AI interpretability. He shares how his team is developing techniques to understand, audit, and edit neural networks at the feature level, including results in resolving superposition with sparse autoencoders, successful model-editing demonstrations, and real-world applications in genomics with Arc Institute's DNA foundation models. Ho argues that these white-box approaches are essential for building safe, reliable, and intentionally designed AI systems.