MLOps

A Playground for AI Engineers

Paulo Vasconcellos from Hotmart details the company's journey of building "Agent as a Product": how the team blends classic ML models with LLMs for efficiency, evolves its MLOps platform for the generative AI era, and creates real business value through AI-powered tutors and sales agents.
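The blend of classic ML and LLMs described above is often implemented as a confidence-gated router. The sketch below is an assumed pattern, not Hotmart's actual code: the cheap classifier, the fallback, and the threshold are all hypothetical stand-ins.

```python
# Illustrative sketch of cost-aware routing between a classic model and an
# LLM (an assumed pattern, NOT Hotmart's implementation).

def classic_intent_classifier(text: str) -> tuple[str, float]:
    # Stand-in for a lightweight model, e.g. a logistic-regression intent
    # classifier; returns (label, confidence).
    if "refund" in text:
        return ("refund_request", 0.95)
    return ("unknown", 0.40)

def llm_fallback(text: str) -> str:
    # Stand-in for an expensive LLM call.
    return "general_question"

def route(text: str, threshold: float = 0.8) -> str:
    # Only escalate to the LLM when the cheap model is unsure.
    label, confidence = classic_intent_classifier(text)
    return label if confidence >= threshold else llm_fallback(text)

print(route("I want a refund"))              # handled by the cheap model
print(route("tell me about your courses"))   # escalated to the LLM
```

The design choice is simple: most traffic is routine, so a confident classic model handles it at a fraction of the cost, and only ambiguous inputs pay for an LLM call.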

Building an Orchestration Layer for Agentic Commerce at Loblaws

Mefta Sadat from Loblaw Digital discusses Alfred, an agentic orchestration layer designed to run AI shopping agents reliably in production. He covers the architecture built with LangGraph and GCP, the role of the Model Context Protocol (MCP) in simplifying API interaction, and practical MLOps strategies for observability, cost management, and ensuring reliability.
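The role MCP plays in simplifying API interaction can be sketched as a uniform tool registry: the agent discovers tools by name and description, and calls them through one JSON interface instead of per-API glue code. Everything below (the registry class, the catalog, the tool name) is a hypothetical illustration, not Alfred's code.

```python
import json

# Illustrative sketch of MCP-style tool discovery and invocation
# (hypothetical, NOT the Alfred/Loblaws implementation).

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, handler):
        self._tools[name] = {"description": description, "handler": handler}

    def list_tools(self):
        # The agent sees only names and descriptions, mirroring MCP's
        # tool-discovery step.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, name, arguments):
        # One uniform call path, one uniform JSON payload back to the agent.
        result = self._tools[name]["handler"](**arguments)
        return json.dumps(result)

# Hypothetical retailer endpoint wrapped as a tool.
def search_products(query: str, max_results: int = 3):
    catalog = ["oat milk", "almond milk", "whole milk"]
    return {"matches": [p for p in catalog if query in p][:max_results]}

registry = ToolRegistry()
registry.register("search_products", "Search the grocery catalog", search_products)
print(registry.call("search_products", {"query": "milk"}))
```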

How AI covered a human’s paternity leave // Quinten Rosseel

A practitioner's guide to deploying a text-to-SQL agent in a real-world business environment. The talk covers the critical lessons learned in moving from concept to production: the importance of the communication channel (Slack), why a semantic layer matters more than benchmark scores, and a pragmatic approach to system architecture, testing, and evaluation.
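A semantic layer, in the sense used above, maps business terms to vetted SQL fragments so the agent composes queries from a curated vocabulary instead of guessing raw column names. The sketch below is an assumed minimal design; the terms, table, and fragments are invented for illustration, not taken from the talk.

```python
# Illustrative semantic layer for a text-to-SQL agent (hypothetical design).

SEMANTIC_LAYER = {
    "revenue": "SUM(orders.amount)",
    "active customers": "COUNT(DISTINCT orders.customer_id)",
    "last quarter": ("orders.created_at >= DATE '2024-07-01' "
                     "AND orders.created_at < DATE '2024-10-01'"),
}

def build_query(metric: str, filter_term: str) -> str:
    # Refuse anything outside the curated vocabulary: an unknown term is a
    # signal to ask the user, not to hallucinate SQL.
    if metric not in SEMANTIC_LAYER or filter_term not in SEMANTIC_LAYER:
        raise KeyError("term not in semantic layer")
    return (f"SELECT {SEMANTIC_LAYER[metric]} AS value "
            f"FROM orders WHERE {SEMANTIC_LAYER[filter_term]}")

print(build_query("revenue", "last quarter"))
```

The point is that correctness comes from the vetted fragments, not from the model: a benchmark-topping model with no semantic layer still invents column names.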

Migrating from Neptune to Weights & Biases

A technical guide on migrating ML experiments from Neptune to Weights & Biases, covering the migration script, API-level code changes, and best practices for organizing projects and analyzing results in the W&B platform before Neptune's sunset.
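One API-level difference such a migration must bridge: Neptune stores each metric as a namespaced series of (step, value) points, while W&B expects one dict per step passed to `wandb.log`. The helper below is a hedged sketch of that regrouping with assumed data shapes, not the guide's actual migration script.

```python
# Illustrative regrouping of Neptune-style metric series into W&B-style
# per-step log payloads (assumed shapes, NOT the actual migration script).

def neptune_series_to_wandb_history(series: dict[str, list[tuple[int, float]]]):
    """Regroup per-metric (step, value) series into per-step log dicts."""
    history: dict[int, dict[str, float]] = {}
    for metric, points in series.items():
        for step, value in points:
            history.setdefault(step, {})[metric] = value
    return [history[step] | {"_step": step} for step in sorted(history)]

# A (hypothetical) export from Neptune, replayed by calling
# wandb.log(payload, step=payload.pop("_step")) inside a wandb.init() run.
series = {
    "train/loss": [(0, 0.9), (1, 0.7)],
    "val/acc": [(1, 0.61)],
}
for payload in neptune_series_to_wandb_history(series):
    print(payload)
```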

Fast & Asynchronous: Drift Your AI, Not Your GPU Bill // Artem Yushkovskiy

Delivery Hero presents "Asya", an open-source framework that replaces traditional AI pipelines with a distributed, asynchronous actor model. This paradigm shift dramatically lowers GPU costs and improves scalability by treating each processing step as an independent, auto-scaling microservice on Kubernetes.

Beyond the Gold Standard: Evaluating and Trusting Agents in the Wild // Sanjana Sharma

A deep dive into the challenges of deploying AI agents in production, arguing that reliability stems not from model intelligence but from a "system-first" approach. The talk introduces a new architecture that separates the LLM's reasoning from a versioned, auditable "Context Layer" containing business logic and expert knowledge, which is continuously updated through a "Living Ground Truth" loop driven by expert feedback.
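A versioned, auditable context layer of the kind described above might look like the sketch below: business rules live outside the LLM, expert corrections append a new version instead of mutating state, and the full history stays queryable. This is an assumed design for illustration, not the speaker's system.

```python
import datetime

# Illustrative versioned "Context Layer" with an expert-feedback loop
# (assumed design, NOT the architecture presented in the talk).

class ContextLayer:
    def __init__(self):
        self._versions: list[dict] = []

    def publish(self, rules: dict, author: str) -> int:
        # Append-only: every change is a new, attributable version.
        self._versions.append({
            "rules": dict(rules),
            "author": author,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return len(self._versions) - 1      # version id

    def current(self) -> dict:
        return self._versions[-1]["rules"]

    def apply_expert_feedback(self, correction: dict, expert: str) -> int:
        # The "living ground truth" step: merge the correction into the
        # latest rules as a brand-new auditable version.
        return self.publish({**self.current(), **correction}, author=expert)

layer = ContextLayer()
v0 = layer.publish({"refund_window_days": 14}, author="policy-team")
v1 = layer.apply_expert_feedback({"refund_window_days": 30}, expert="cs-lead")
print(v0, v1, layer.current())
```

The LLM reasons over `layer.current()` at request time; because the layer is versioned separately from the model, a bad rule can be rolled back or audited without touching prompts or weights.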