Fine-tuning

Build Hour: Agent RFT

Will Hang and Theophile Sautory from OpenAI provide a deep dive into Agent RFT, a powerful method for fine-tuning large language models to become more effective tool-using agents. They explain how Agent RFT enables models to learn directly from their interactions with custom tools and reward signals, leading to significant improvements in performance, latency, and efficiency on specialized tasks. The session includes a detailed code demo, best practices, and success stories from companies like Cognition, Ambience, and Rogo.
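The core loop the summary describes — an agent calling a custom tool during a rollout and receiving a reward signal at the end of the episode — can be sketched in plain Python. This is an illustrative sketch only: all names here (`run_episode`, the `calculator` tool, the exact-match reward rule) are hypothetical and are not part of any OpenAI API.

```python
# Hedged sketch of one agent rollout: the agent uses a tool, produces a
# final answer, and the episode is scored with a scalar reward that a
# training method like Agent RFT could learn from.

def calculator(expr: str) -> str:
    """A toy tool the agent can call (arithmetic only, no builtins)."""
    return str(eval(expr, {"__builtins__": {}}))

def run_episode(question: str, expected: str) -> float:
    # A real agent would let the model decide when and how to call tools;
    # here one call is hard-coded to show the rollout -> reward shape.
    tool_result = calculator(question)
    answer = tool_result                        # the agent's final answer
    return 1.0 if answer == expected else 0.0   # end-of-episode reward

print(run_episode("2 + 3", "5"))  # 1.0
```

The key design point is that the reward is computed from the episode's outcome, not from matching a demonstration token-by-token, which is what distinguishes this setup from supervised fine-tuning.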

Fine-Tuned Models Are Getting Out of Hand

A deep dive into how fine-tuned Small Language Models (SLMs) and RAG systems can be combined to create personalized AI agents that learn user-specific workflows, emulate decision-making, and collaborate with humans, moving beyond conversational interfaces to direct action within enterprise environments.

Introducing serverless reinforcement learning: Train reliable AI agents without worrying about GPUs

Kyle Corbett and Daniel from CoreWeave (formerly Openpipe) discuss the practical advantages of Reinforcement Learning (RL) over Supervised Fine-Tuning (SFT) for building reliable and efficient AI agents. They introduce Serverless RL, a new platform designed to eliminate the infrastructure complexities of RL training, and share a playbook for teams looking to get started.

AI Engineering 101 with Chip Huyen (Nvidia, Stanford, Netflix)

Chip Huyen, an AI expert and author of 'AI Engineering', explains the realities of building successful AI applications. She covers the nuances of model training, the critical role of data quality in RAG systems, the mechanics of RLHF, and why the future of AI improvement lies in post-training, system-level thinking, and solving UX problems rather than just chasing the newest models.

Build Hour: Reinforcement Fine-Tuning

A deep dive into Reinforcement Fine-Tuning (RFT), covering how to set up tasks, design effective graders, and run efficient training loops to improve model reasoning, based on a live demonstration from OpenAI's Build Hours.
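Of the three steps the summary lists, grader design is the one most naturally shown in code: a grader maps a model sample and a reference item to a scalar reward. Below is a minimal sketch under assumptions of my own; the function name, the sample/reference schema, and the partial-credit rule are all hypothetical, not OpenAI's grader API.

```python
# Illustrative RFT-style grader: score one model output against a
# reference answer, returning a reward in [0, 1].

def grade(sample: dict, reference: dict) -> float:
    """Exact match earns 1.0; containing the answer earns partial credit."""
    predicted = sample["output_text"].strip().lower()
    expected = reference["answer"].strip().lower()
    if predicted == expected:
        return 1.0   # exact match: full reward
    if expected in predicted:
        return 0.5   # answer present but wrapped in extra text
    return 0.0       # otherwise no credit

print(grade({"output_text": "Paris"}, {"answer": "paris"}))  # 1.0
```

Graded rewards like this, rather than binary pass/fail, give the training loop a smoother signal to climb, which is one reason grader design matters so much in practice.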

Context Engineering: Lessons Learned from Scaling CoCounsel

Jake Heller, founder of Casetext, shares a pragmatic framework for turning powerful large language models like GPT-4 into reliable, professional-grade products. He details a rigorous, evaluation-driven approach to prompt and context engineering, emphasizing iterative testing, the critical role of high-quality context, and advanced techniques like reinforcement fine-tuning and strategic model selection.