Large language models

No Priors Ep. 123 | With ReflectionAI Co-Founder and CEO Misha Laskin

Misha Laskin, co-founder of Reflection AI and former researcher at Google DeepMind, discusses the company's mission to build superhuman autonomous systems. He introduces Asimov, a code comprehension agent designed to address the roughly 80% of an engineer's time spent understanding complex systems, rather than just generating code. Laskin delves into the intricacies of co-designing product and research, the critical role of customer-driven evaluations, the bottlenecks in scaling reinforcement learning (RL) — particularly the "reward problem" — and why he believes the future is one of "jagged superintelligence" emerging in specific, high-value domains like coding.

Building Production-Grade RAG at Scale

Douwe Kiela, CEO of Contextual AI, explains the evolution from basic RAG to "RAG 2.0", an end-to-end, trainable system. He argues that this system-level approach, which integrates optimized document parsing, retrieval, reranking, and grounded models, is superior to relying on massive context windows alone and is a fundamental tool for next-generation AI agents.