First principles

The Mathematical Foundations of Intelligence [Professor Yi Ma]

Professor Yi Ma proposes a unified mathematical theory of intelligence built on two principles: parsimony and self-consistency. He argues that current large models, LLMs in particular, excel at memorization rather than true abstraction: they learn statistical patterns in already-compressed human knowledge (such as text) without genuine understanding. His framework, centered on maximizing the coding rate reduction of data, recasts deep learning as a process of compression and denoising, yields a first-principles derivation of Transformer-like architectures such as CRATE, explains the effectiveness of gradient descent through benign non-convex landscapes, and points toward a more interpretable, white-box approach to AI.
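
For context, the coding rate reduction objective referenced here (the MCR² principle from Yi Ma's group) measures the coding rate of a whole feature set minus the average rate of its class-conditional subsets. Below is a minimal numpy sketch of that quantity; the function names, the ε value, and the toy data are illustrative choices, not anything specified in the talk.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z): log-volume needed to code features Z (d x m) up to distortion eps.
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (m * eps**2)) * (Z @ Z.T))[1]

def rate_reduction(Z, labels, eps=0.5):
    # Delta R = R(Z) - sum_j (m_j / m) * R(Z_j): rate of the whole set minus
    # the weighted rates of its class-conditional subsets. Maximizing this
    # expands representations across classes while compressing within each class.
    d, m = Z.shape
    class_rate = 0.0
    for j in np.unique(labels):
        Zj = Z[:, labels == j]
        class_rate += (Zj.shape[1] / m) * coding_rate(Zj, eps)
    return coding_rate(Z, eps) - class_rate

# Toy usage: two Gaussian clusters in an 8-dimensional feature space.
rng = np.random.default_rng(0)
Z = np.hstack([rng.normal(0.0, 1.0, (8, 50)), rng.normal(3.0, 1.0, (8, 50))])
labels = np.array([0] * 50 + [1] * 50)
print(rate_reduction(Z, labels))
```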

How To Be Contrarian — And Right

Garry, Harj, Jared, and Diana discuss why founders should pursue contrarian ideas in a crowded AI market. They analyze how companies like Uber, Coinbase, and Flock Safety found massive success by tackling non-obvious, legally ambiguous, or seemingly impossible problems that others ignored.