Evaluation

The Hidden Bottlenecks Slowing Down AI Agents

Paul van der Boor and Bruce Martens from Prosus discuss the real bottlenecks in AI agent development, arguing that the primary challenges are not tools, but rather evaluation, data quality, and feedback loops. They detail their 'buy-first' philosophy, the practical reasons they often build in-house, and how new coding agents like Devin and Cursor are changing their development workflows.

Building Better Language Models Through Global Understanding

Dr. Marzieh Fadaee discusses the critical challenges in multilingual AI, including data imbalances and flawed evaluation methodologies. She argues that tackling these difficult multilingual problems is not only essential for global accessibility but also a catalyst for fundamental AI innovation, much like how machine translation research led to the Transformer architecture. The talk introduces new, more culturally aware evaluation benchmarks like Global MMLU and INCLUDE as a path toward building more robust and globally representative language models.
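
As a rough illustration of what evaluating against such a benchmark can look like, the sketch below computes per-language accuracy on a multilingual MMLU-style test set. The dataset identifier, the column names, and the placeholder predict function are assumptions made for illustration, not details from the talk.

```python
# Sketch: per-language accuracy on a multilingual MMLU-style benchmark.
# The dataset id and column names ("question", "option_a".."option_d", "answer")
# are assumptions about how Global MMLU is published on the Hugging Face Hub;
# adjust them to the actual schema.
from datasets import load_dataset


def predict(question: str, options: dict[str, str]) -> str:
    """Placeholder for a real model call; should return one of 'A'..'D'."""
    return "A"


def accuracy_for_language(lang: str, n: int = 200) -> float:
    ds = load_dataset("CohereForAI/Global-MMLU", lang, split=f"test[:{n}]")
    correct = 0
    for row in ds:
        options = {key[-1].upper(): row[key]
                   for key in ("option_a", "option_b", "option_c", "option_d")}
        if predict(row["question"], options) == row["answer"]:
            correct += 1
    return correct / len(ds)


# Compare a high-resource language against lower-resource ones.
for lang in ("en", "sw", "yo"):
    print(lang, round(accuracy_for_language(lang), 3))
```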

Strategies for LLM Evals (GuideLLM, lm-eval-harness, OpenAI Evals Workshop) — Taylor Jordan Smith

Traditional benchmarks and leaderboards are insufficient for production AI. The workshop details a practical, multi-layered evaluation strategy, moving from foundational system performance to factual accuracy and finally to safety and bias, using open-source tools like GuideLLM, lm-eval-harness, and Promptfoo.
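
For the factual-accuracy layer, one quick way to try lm-eval-harness from Python is its simple_evaluate entry point, sketched below. The model id, task names, and sample limit are illustrative choices rather than recommendations from the workshop; GuideLLM (system performance) and Promptfoo (safety and bias) would cover the other layers.

```python
# Sketch: the factual-accuracy layer run through lm-eval-harness's Python API.
# The model id, task list, and sample limit are illustrative choices.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                  # Hugging Face backend
    model_args="pretrained=Qwen/Qwen2.5-0.5B",   # any HF causal LM id works here
    tasks=["hellaswag", "truthfulqa_mc2"],       # knowledge / truthfulness probes
    num_fewshot=0,
    limit=100,                                   # subsample for a quick smoke test
)

# results["results"] maps each task name to its metric dict.
for task, metrics in results["results"].items():
    print(task, metrics)
```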

MLflow 3.0: The Future of AI Agents

Eric Peter from Databricks outlines the evolution from the traditional MLOps lifecycle to the more complex Agent Ops lifecycle. He details the five essential components of a successful agent development platform and introduces MLflow 3.0, a new release designed to provide a comprehensive, open-standard solution for building, evaluating, and deploying AI agents.
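
As a minimal sketch of the observability piece of such a platform, the snippet below instruments a toy agent with MLflow's tracing decorator so that each tool and LLM call is captured as a span for later inspection and evaluation. The agent logic and the search_catalog/call_llm helpers are hypothetical placeholders; the only MLflow calls used are set_experiment and the trace decorator, and the MLflow 3.0 evaluation APIs discussed in the talk are not shown.

```python
# Sketch: capturing an agent turn as an MLflow trace with nested spans.
# search_catalog and call_llm are hypothetical stand-ins for real tool/LLM calls.
import mlflow

mlflow.set_experiment("agent-ops-demo")


@mlflow.trace(span_type="TOOL")
def search_catalog(query: str) -> list[str]:
    return [f"result for {query}"]           # stand-in for a retrieval/tool call


@mlflow.trace(span_type="LLM")
def call_llm(prompt: str) -> str:
    return f"answer based on: {prompt}"      # stand-in for a real model call


@mlflow.trace(name="agent_turn")
def run_agent(user_message: str) -> str:
    docs = search_catalog(user_message)      # nested span under agent_turn
    return call_llm(f"{user_message}\ncontext: {docs}")


print(run_agent("find waterproof hiking boots"))
```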

LLMOps for eval-driven development at scale

Mercari's engineering team shares their practical, evaluation-centric approach to LLMOps. They combine tiered evaluations, strategic tooling for observability, and rapid iteration to productionize LLM features for over 23 million users, emphasizing that good 'evals' are often more critical than model fine-tuning or RAG.
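
A hedged sketch of what a tiered evaluation gate can look like in practice follows: cheap deterministic checks on every output, an LLM-as-judge score on whatever passes, and a small random sample escalated for human review. The tier functions, thresholds, and sampling rate are illustrative placeholders, not Mercari's actual pipeline.

```python
# Sketch: a tiered evaluation gate. Tier 1 runs cheap deterministic checks on
# every output, tier 2 applies an LLM-as-judge score, and a small random sample
# is escalated to human review. All names and thresholds are illustrative.
import random


def tier1_rule_checks(output: str) -> bool:
    """Fast, deterministic checks: non-empty, bounded length, no banned phrases."""
    banned = ("as an ai language model",)
    return (bool(output.strip())
            and len(output) < 2000
            and not any(phrase in output.lower() for phrase in banned))


def tier2_llm_judge(prompt: str, output: str) -> float:
    """Placeholder for an LLM-as-judge call returning a 0-1 quality score."""
    return 0.9  # swap in a real judge model here


def evaluate(prompt: str, output: str, human_sample_rate: float = 0.05) -> dict:
    if not tier1_rule_checks(output):
        return {"passed": False, "tier": 1}
    score = tier2_llm_judge(prompt, output)
    return {
        "passed": score >= 0.7,
        "tier": 2,
        "score": score,
        "escalate_to_human": random.random() < human_sample_rate,
    }


print(evaluate("Describe this listing", "A gently used camera in great condition."))
```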