AI quality

Big updates to MLflow 3.0

Databricks’ Eric Peter and Corey Zumar introduce MLflow 3.0, focusing on its new "Agentic Insights" capabilities. They demonstrate how MLflow is evolving from providing tools for manual quality assurance in Generative AI to using intelligent agents to automatically find, diagnose, and prioritize issues, significantly speeding up the development lifecycle.

[Full Workshop] Building Metrics that actually work — David Karam, Pi Labs (fmr Google Search)

This workshop, led by former Google product directors, introduces a methodology for building reliable and tunable evaluation metrics for LLM applications. It details how to create granular 'scoring systems' that break down complex evaluations into simple, objective signals, and then use these systems for model comparison, prompt optimization, and online reinforcement learning.
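
To make the scoring-system idea concrete, here is a minimal, hypothetical Python sketch (the class, question names, and checks are illustrative assumptions, not Pi Labs' actual tooling): a complex quality judgment is decomposed into small, objective checks, and their weighted aggregate becomes a metric that can be inspected per question, compared across models, or tuned for prompt optimization.

```python
# Illustrative sketch only: a granular "scoring system" in the spirit of the workshop,
# not Pi Labs' implementation. Each question is a small, objective check; the overall
# score is a weighted aggregate that stays easy to inspect and tune.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScoringQuestion:
    name: str
    weight: float
    check: Callable[[str, str], float]  # (prompt, response) -> score in [0, 1]

def score_response(prompt: str, response: str, questions: list[ScoringQuestion]) -> dict:
    """Score one response against every question and aggregate the weighted results."""
    per_question = {q.name: q.check(prompt, response) for q in questions}
    total_weight = sum(q.weight for q in questions)
    overall = sum(q.weight * per_question[q.name] for q in questions) / total_weight
    return {"overall": overall, "breakdown": per_question}

# Hypothetical checks for a summarization task; a production system would back each
# question with a rubric-following LLM judge or a learned classifier instead.
questions = [
    ScoringQuestion("is_concise", 1.0, lambda p, r: 1.0 if len(r.split()) <= 80 else 0.0),
    ScoringQuestion("mentions_source", 0.5, lambda p, r: 1.0 if "according to" in r.lower() else 0.0),
    ScoringQuestion("no_first_person", 0.5, lambda p, r: 0.0 if " i " in f" {r.lower()} " else 1.0),
]

result = score_response(
    "Summarize the quarterly report.",
    "According to the report, revenue grew 12% while costs stayed flat.",
    questions,
)
print(result["overall"], result["breakdown"])
```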

No Priors Ep. 124 | With SurgeAI Founder and CEO Edwin Chen

Edwin Chen, CEO of Surge AI, discusses the critical role of high-quality human data in training frontier models, the flaws in current evaluation benchmarks like LMSys and IF-Eval, the future of complex RL environments, and why he bootstrapped Surge to over $1 billion in revenue.