Fine-tuning

Context Engineering: Lessons Learned from Scaling CoCounsel

Jake Heller, founder of Casetext, shares a pragmatic framework for turning powerful large language models like GPT-4 into reliable, professional-grade products. He details a rigorous, evaluation-driven approach to prompt and context engineering, emphasizing iterative testing, the critical role of high-quality context, and advanced techniques like reinforcement fine-tuning and strategic model selection.

The Truth About LLM Training

Paul van der Boor and Zulkuf Genc from Prosus discuss the practical realities of deploying AI agents in production. They cover their in-house evaluation framework, strategies for navigating the GPU market, the importance of fine-tuning over building from scratch, and how they use AI to analyze usage patterns in a privacy-preserving manner.

Streamline evaluation, monitoring, optimization of AI data flywheel with NVIDIA and Weights & Biases

A walkthrough of the NVIDIA Data Flywheel Blueprint, demonstrating how to use production data and Weights & Biases to systematically fine-tune AI agents. This process enhances model accuracy and efficiency by creating a continuous improvement cycle, moving beyond the limitations of prompt engineering.

Arvind Jain on building Glean and the future of enterprise AI

Arvind Jain, CEO of Glean, details the company's journey from a pre-LLM enterprise search innovator to a leading AI agent platform. He covers their hybrid model strategy, the critical role of permission-aware RAG for security, and how AI agents are creating 'evergreen' documentation and reshaping enterprise workflows.

The 2025 AI Engineering Report — Barr Yaron, Amplify

Barr Yaron of Amplify Partners presents early findings from the 2025 State of AI Engineering survey, covering LLM usage, customization techniques like RAG and fine-tuning, the state of AI agents, key challenges such as evaluation, and community perspectives on the future of AI.

9 Commandments for Building AI Agents

A deep dive into the design principles for building effective AI agents, covering the evolution of the ReAct loop, the critical role of memory and learning from experience, the 'build vs. buy' dilemma for tooling, and the importance of abstracting all capabilities—including systems and people—as tools.