Evaluation

The Truth About LLM Training

Paul van der Boor and Zulkuf Genc from Prosus discuss the practical realities of deploying AI agents in production. They cover their in-house evaluation framework, strategies for navigating the GPU market, the importance of fine-tuning over building from scratch, and how they use AI to analyze usage patterns in a privacy-preserving manner.

How to look at your data — Jeff Huber (Chroma) + Jason Liu (567)

A detailed summary of a talk by Jeff Huber (Chroma) and Jason Liu on systematically improving AI applications. The talk covers using fast, inexpensive evaluations for retrieval systems (inputs) and applying structured data analysis and clustering to conversational logs (outputs) to derive actionable product insights.

Evals Are Not Unit Tests — Ido Pesok, Vercel v0

Ido Pesok from Vercel explains why LLM-based applications often fail in production despite successful demos, and presents a systematic framework for building reliable AI systems using application-layer evaluations ("evals").

The Hidden Bottlenecks Slowing Down AI Agents

Paul van der Boor and Bruce Martens from Prosus discuss the real bottlenecks in AI agent development, arguing that the primary challenges are not tools but evaluation, data quality, and feedback loops. They detail their 'buy-first' philosophy, the practical reasons they often build in-house, and how new coding agents like Devin and Cursor are changing their development workflows.

Building Better Language Models Through Global Understanding

Dr. Marzieh Fadaee discusses the critical challenges in multilingual AI, including data imbalances and flawed evaluation methodologies. She argues that tackling these difficult multilingual problems is not only essential for global accessibility but also a catalyst for fundamental AI innovation, much as machine translation research led to the Transformer architecture. The talk introduces new, more culturally aware evaluation benchmarks such as Global MMLU and INCLUDE as a path toward building more robust and globally representative language models.

Strategies for LLM Evals (GuideLLM, lm-eval-harness, OpenAI Evals Workshop) — Taylor Jordan Smith

Traditional benchmarks and leaderboards are insufficient for production AI. This summary details a practical, multi-layered evaluation strategy, moving from foundational system performance to factual accuracy and finally to safety and bias, using open-source tools like GuideLLM, lm-eval-harness, and Promptfoo.