RLHF

913: LLM Pre-Training and Post-Training 101 — with Julien Launay

Julien Launay, CEO of Adaptive ML, discusses the evolution of Large Language Model (LLM) training, detailing the critical shift from pre-training to post-training with Reinforcement Learning (RL). He explains the nuances of RL feedback mechanisms (RLHF, RLEF, RLAIF), the role of synthetic data, and how his company provides the "RLOps" tooling to make these powerful techniques accessible to enterprises. The conversation also explores the future of AI, including scaling beyond data limitations and the path to a "spiky" AGI.
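
The three feedback mechanisms mentioned above differ mainly in where the reward signal comes from: human preference labels (RLHF), an AI judge (RLAIF), or the result of actually executing the model's output (RLEF). The sketch below is a hypothetical illustration of that distinction, not code from the episode or from Adaptive ML; the function names and toy scoring logic are assumptions, and a real pipeline would feed such signals into a learned reward model and a PPO/GRPO-style policy update rather than using them directly.

```python
# Hypothetical sketch of the three feedback sources (RLHF, RLAIF, RLEF).
# Not the episode's or Adaptive ML's implementation; all names and the toy
# scoring logic are assumptions for illustration only.

import subprocess
import sys
import tempfile
import textwrap


def reward_from_human(preferences: dict[str, int], completion: str) -> float:
    """RLHF: the reward comes from human preference labels (here a toy lookup)."""
    return float(preferences.get(completion, 0))


def reward_from_ai_judge(judge, prompt: str, completion: str) -> float:
    """RLAIF: an LLM judge scores the completion instead of a human annotator."""
    return float(judge(prompt, completion))


def reward_from_execution(code: str, test: str) -> float:
    """RLEF: the reward comes from executing the model's code against tests."""
    program = textwrap.dedent(code) + "\n" + textwrap.dedent(test)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=10)
    return 1.0 if result.returncode == 0 else 0.0


if __name__ == "__main__":
    # Toy RLEF example: the "completion" is code, the reward is pass/fail.
    completion = "def add(a, b):\n    return a + b\n"
    test = "assert add(2, 3) == 5\n"
    print("execution reward:", reward_from_execution(completion, test))
```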

No Priors Ep. 124 | With SurgeAI Founder and CEO Edwin Chen

Edwin Chen, CEO of Surge AI, discusses the critical role of high-quality human data in training frontier models, the flaws in current evaluation benchmarks like LMSys and IF-Eval, the future of complex RL environments, and why he bootstrapped Surge to over $1 billion in revenue.