AI Agents

AI Agents & LLMs: Real-Time IT Issue Prediction & Prevention

Amanda Downie explains the shift from reactive IT firefighting to proactive optimization, detailing how AI agents and LLMs use predictive analytics, topology mapping, and continuous learning loops to anticipate and prevent system issues before they occur.

Building the Universal AI Automation Layer ft. n8n CEO Jan Oberhauser

Jan Oberhauser, founder of n8n, discusses the company's strategic pivot from a workflow tool to an AI automation platform. He explains how focusing on community, adopting a "connect everything to anything" philosophy, and enabling the creation of complex AI agents led to a 4x revenue increase in just eight months.

Using LongMemEval to Improve Agent Memory

Sam Bhagwat of Mastra details their process for optimizing AI agent memory using the LongMemEval benchmark. He breaks memory down into subtasks such as temporal reasoning and knowledge updates, and shares how targeted improvements—tailored templates, selective data updates, and structured message formatting—led to state-of-the-art performance, emphasizing the importance of iterative evaluation.

Context Engineering for Engineers

Jeff Huber of Chroma argues that building reliable AI systems hinges on 'Context Engineering'—the deliberate curation of information within the context window. He challenges the efficacy of long-context models, presenting a 'Gather and Glean' framework to maximize recall and precision, and discusses specific challenges and techniques for AI agents, such as intelligent compaction.

Iterating on Your AI Evals // Mariana Prazeres // Agents in Production 2025

Moving an AI agent from a promising demo to a reliable product is challenging. This talk presents a startup-friendly, iterative process for building robust evaluation frameworks, emphasizing that you must iterate on the evaluations themselves—the metrics and the data—not just the prompts and models. It outlines a practical "crawl, walk, run" approach that starts with simple heuristics and scales to an advanced system with automated checks and human-in-the-loop validation.

Aaron Levie and Steven Sinofsky on the AI-Worker Future

Experts from a16z, Box, and Microsoft debate the definition and future of AI agents. They explore the shift from monolithic AGI to specialized agent networks, the technical challenges of autonomous systems, and how this new platform will reshape enterprise software, workflows, and the very nature of work.