Context engineering

Context Engineering for Engineers

Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to maximize recall and precision, and discusses specific challenges and techniques for AI agents, such as intelligent compaction.
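The "Gather and Glean" idea described above can be sketched as a two-stage retrieval pipeline: a broad, recall-oriented gather stage followed by a precision-oriented glean stage that packs a token budget. This is a minimal illustration under assumed details, not Chroma's implementation; the function names, the word-overlap scoring, and the word-count token estimate are all hypothetical stand-ins for real retrieval and reranking components.

```python
def gather(query: str, corpus: list[str], k: int = 20) -> list[str]:
    """Recall stage: cheap lexical overlap keeps any plausibly relevant doc."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def glean(query: str, candidates: list[str], token_budget: int = 100) -> list[str]:
    """Precision stage: re-rank by density of matching terms, then pack
    documents into the context until the token budget is spent."""
    terms = set(query.lower().split())
    ranked = sorted(
        candidates,
        key=lambda doc: len(terms & set(doc.lower().split())) / (len(doc.split()) + 1),
        reverse=True,
    )
    context, used = [], 0
    for doc in ranked:
        cost = len(doc.split())  # crude token estimate for the sketch
        if used + cost <= token_budget:
            context.append(doc)
            used += cost
    return context

corpus = [
    "Context engineering curates what goes into the model's window.",
    "Long-context models still degrade as the window fills up.",
    "Unrelated note about quarterly sales figures.",
]
candidates = gather("context window engineering", corpus)
final_context = glean("context window engineering", candidates)
```

The point of the two stages is that they optimize different metrics: gather is tuned for recall (miss nothing plausibly relevant), glean for precision (admit only what earns its tokens), which is why collapsing them into one step tends to sacrifice one for the other.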

Context Engineering: Lessons Learned from Scaling CoCounsel

Jake Heller, founder of Casetext, shares a pragmatic framework for turning powerful large language models like GPT-4 into reliable, professional-grade products. He details a rigorous, evaluation-driven approach to prompt and context engineering, emphasizing iterative testing, the critical role of high-quality context, and advanced techniques like reinforcement fine-tuning and strategic model selection.

Five hard earned lessons about Evals — Ankur Goyal, Braintrust

Building successful AI applications requires a sophisticated engineering approach that goes beyond prompt engineering. This involves creating intentionally engineered evaluations (evals) that reflect user feedback, focusing on "context engineering" to optimize tool definitions and outputs, and maintaining a flexible, model-agnostic architecture to adapt to the rapidly evolving AI landscape.
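An eval suite of the kind described above can be sketched as a small harness: cases derived from user feedback, a scorer that grades each model output, and an aggregate score to track across prompt or model changes. This is a generic illustration, not Braintrust's API; `EvalCase`, `run_evals`, and the `toy_model` stub are hypothetical names, and `exact_match` stands in for whatever scorer (LLM-as-judge, embedding similarity) fits the task.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    input: str
    expected: str  # reference answer, ideally derived from real user feedback

def exact_match(output: str, expected: str) -> float:
    """Simplest possible scorer; swap in a fuzzier grader for open-ended tasks."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_evals(cases: list[EvalCase],
              run_model: Callable[[str], str],
              scorer: Callable[[str, str], float] = exact_match) -> float:
    """Return the mean score across all eval cases."""
    scores = [scorer(run_model(case.input), case.expected) for case in cases]
    return sum(scores) / len(scores) if scores else 0.0

# Toy model stub so the harness runs end to end without an API call.
def toy_model(prompt: str) -> str:
    return "paris" if "capital of france" in prompt.lower() else "unknown"

cases = [
    EvalCase("What is the capital of France?", "Paris"),
    EvalCase("What is the capital of Spain?", "Madrid"),
]
score = run_evals(cases, toy_model)  # one of two cases passes: 0.5
```

Keeping the model call behind a plain `Callable` is one way to get the model-agnostic architecture the summary mentions: swapping providers changes only the function you pass in, never the eval suite itself.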

12-factor Agents - Patterns of reliable LLM applications // Dexter Horthy

Drawing from conversations with top AI builders, Dex argues that production-grade AI agents are not magical loops but well-architected software. This talk introduces "12-Factor Agents," a methodology centered on "Context Engineering" to build reliable, high-performance LLM-powered applications by applying rigorous software engineering principles.
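One of the 12-Factor ideas — owning your context window instead of letting a framework append raw chat history — can be sketched as building the prompt deterministically from structured agent state. This is an illustrative reading of the principle, not code from the talk; `AgentState` and its fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    events: list[str] = field(default_factory=list)  # tool calls, results, approvals

    def to_context(self, max_events: int = 5) -> str:
        """Render only the most recent events into the prompt, so the
        context window is a deliberate artifact, not an accumulation."""
        recent = self.events[-max_events:]
        lines = [f"GOAL: {self.goal}"] + [f"EVENT: {event}" for event in recent]
        return "\n".join(lines)

state = AgentState(goal="Deploy the staging environment")
state.events += ["ran terraform plan: 3 changes", "approval granted"]
prompt_context = state.to_context()
```

Because the context is a pure function of explicit state, it can be unit-tested, truncated, and replayed — the "well-architected software" framing from the summary applied to the prompt itself.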

How Grounded Synthetic Data is Saving the Publishing Industry // Robert Caulk

Robert from Emergent Methods discusses how grounded synthetic news data can solve the publisher revenue crisis in the AI era. He details the process of 'Context Engineering' news into token-optimized, objective data for high-stakes AI agent tasks, covering their open-source models for entity extraction and bias mitigation, and the on-premise infrastructure that protects publisher content.

Making Your Data Agent-Ready with EnrichMCP // Simba Khadder // Agents in Production 2025

Simba Khadder explains that the primary bottleneck for LLM agents is not intelligence, but access to structured data. He introduces EnrichMCP, an open-source framework that creates a semantic layer over data models, enabling agents to discover, reason about, and query data sources like SQL databases effectively, moving beyond the limitations of RAG and direct API conversions.
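The semantic-layer idea can be sketched as a thin wrapper that first tells the agent what entities exist and what they mean, then accepts typed, parameterized lookups instead of raw SQL. This is a minimal sketch of the concept, not EnrichMCP's actual API; the `SemanticLayer` class, its method names, and the toy schema are all assumptions.

```python
import sqlite3

class SemanticLayer:
    def __init__(self, conn: sqlite3.Connection, entities: dict[str, str]):
        self.conn = conn
        self.entities = entities  # entity name -> natural-language description

    def describe(self) -> dict[str, str]:
        """Discovery step: what the agent reads before querying anything."""
        return self.entities

    def query(self, entity: str, where: dict[str, object]) -> list[tuple]:
        """Parameterized lookup restricted to declared entities only."""
        if entity not in self.entities:
            raise ValueError(f"unknown entity: {entity}")
        clause = " AND ".join(f"{key} = ?" for key in where)
        sql = f"SELECT * FROM {entity}" + (f" WHERE {clause}" if where else "")
        return self.conn.execute(sql, tuple(where.values())).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, plan TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Acme", "pro"), (2, "Globex", "free")])

layer = SemanticLayer(conn, {"customers": "Accounts that pay for the product"})
rows = layer.query("customers", {"plan": "pro"})  # [(1, 'Acme', 'pro')]
```

The design choice worth noting is that the agent never composes SQL strings itself: it reasons over `describe()` and submits structured filters, which is what separates this approach from handing an LLM a raw database connection.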