RAG

Context Engineering & Agentic Search with the CEO of Chroma

Jeff Huber, CEO of Chroma, discusses "context rot," the degradation of AI performance in large context windows, and outlines a new vision for retrieval infrastructure. He covers the evolution of search, the importance of a two-stage recall-then-precision pipeline, and the challenges of agentic memory, advocating for a shift from AI "alchemy" to reliable engineering.
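
A minimal sketch of the recall-then-precision shape Huber describes, assuming a Chroma collection and an off-the-shelf cross-encoder reranker; the documents and model name here are illustrative, not from the talk:

```python
# Two-stage retrieval: a wide, cheap vector search for recall,
# then a slower cross-encoder rerank for precision.
import chromadb
from sentence_transformers import CrossEncoder

client = chromadb.Client()
collection = client.create_collection("docs")
collection.add(
    ids=["1", "2", "3"],
    documents=[
        "Chroma is an open-source embedding database.",
        "Context rot: model quality degrades as the context window fills up.",
        "Two-stage retrieval: recall broadly first, then rerank for precision.",
    ],
)

query = "Why do long prompts hurt LLM accuracy?"

# Stage 1 (recall): cast a wide net with fast approximate vector search.
candidates = collection.query(query_texts=[query], n_results=3)["documents"][0]

# Stage 2 (precision): score each candidate against the query with a
# cross-encoder and keep only the best few for the model's context window.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])
ranked = sorted(zip(scores, candidates), key=lambda s: s[0], reverse=True)
context = [doc for _, doc in ranked[:2]]
print(context)
```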

Fine-Tuned Models Are Getting Out of Hand

A deep dive into how fine-tuned Small Language Models (SLMs) and RAG systems can be combined to create personalized AI agents that learn user-specific workflows, emulate decision-making, and collaborate with humans, moving beyond conversational interfaces to direct action within enterprise environments.

Build Hour: AgentKit

A deep dive into OpenAI's AgentKit, demonstrating how to visually build, deploy, and optimize multi-step, tool-calling agents using Agent Builder, ChatKit, and the integrated Evals platform.

How AI Agents and Decision Agents Combine Rules & ML in Automation

A detailed breakdown of a multi-method agentic AI architecture that combines Large Language Models (LLMs) with traditional automation, such as workflow and decision engines, to solve complex, real-world problems like loan processing.
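
A hypothetical sketch of that division of labor: an LLM turns unstructured loan paperwork into structured fields, and a deterministic rules engine makes the auditable decision. The field names and thresholds below are invented for illustration:

```python
import json

def llm_extract(application_text: str) -> dict:
    """Stand-in for an LLM call that turns free text into structured fields.
    A real system would call a model with a JSON schema and validate output."""
    return json.loads('{"income": 85000, "debt": 20000, "credit_score": 710}')

def decision_engine(fields: dict) -> str:
    """Deterministic business rules: transparent, testable, auditable."""
    dti = fields["debt"] / fields["income"]  # debt-to-income ratio
    if fields["credit_score"] < 620:
        return "decline"
    if dti > 0.43:
        return "refer_to_underwriter"
    return "approve"

fields = llm_extract("Applicant earns $85k, owes $20k, FICO 710 ...")
print(decision_engine(fields))  # -> approve
```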

AI Engineering 101 with Chip Huyen (Nvidia, Stanford, Netflix)

Chip Huyen, an AI expert and author of 'AI Engineering', explains the realities of building successful AI applications. She covers the nuances of model training, the critical role of data quality in RAG systems, the mechanics of RLHF, and why the future of AI improvement lies in post-training, system-level thinking, and solving UX problems rather than just chasing the newest models.

Columbia CS Professor: Why LLMs Can’t Discover New Science

Professor Vishal Misra of Columbia University introduces a formal model for understanding Large Language Models (LLMs) based on information theory. He explains how LLMs reason by navigating "Bayesian manifolds", uses concepts like token entropy to unpack the mechanics of chain-of-thought, and defines true AGI as the ability to create new manifolds rather than merely explore existing ones.
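
A small worked example of token entropy, the Shannon entropy of the next-token distribution. The distributions below are made up to show the intuition behind chain-of-thought in Misra's framing: intermediate reasoning tokens sharpen the distribution over the final answer:

```python
import math

def token_entropy(probs):
    """H = -sum(p * log2(p)) over the next-token distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Asked for the answer directly, the model is uncertain (flat distribution).
direct = [0.30, 0.25, 0.25, 0.20]
# After emitting intermediate reasoning tokens, the distribution sharpens.
after_cot = [0.90, 0.05, 0.03, 0.02]

print(f"direct answer:          {token_entropy(direct):.2f} bits")     # ~1.99
print(f"after chain-of-thought: {token_entropy(after_cot):.2f} bits")  # ~0.62
```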