Prompt engineering

Traditional vs LLM Recommender Systems: Are They Worth It?

This summary explores Arpita Vats's insights on how Large Language Models (LLMs) are revolutionizing recommender systems. It contrasts the traditional feature-engineering-heavy approach with the contextual understanding of LLMs, which shifts the focus to prompt engineering. Key challenges like inference latency and cost are discussed, along with practical solutions such as lightweight models, knowledge distillation, and hybrid architectures. The conversation also touches on advanced applications like sequential recommendation and the future potential of agentic AI.

On Engineering AI Systems that Endure The Bitter Lesson - Omar Khattab, DSPy & Databricks

Omar Khattab, creator of DSPy, reinterprets the 'Bitter Lesson' for AI engineering, arguing that the key to building robust and enduring AI systems is to move beyond brittle prompt engineering. He advocates for a declarative, modular approach that separates the fundamental program logic from the rapidly changing landscape of LLMs, optimizers, and inference techniques.

Evals Are Not Unit Tests — Ido Pesok, Vercel v0

Ido Pesok from Vercel explains why LLM-based applications often fail in production despite successful demos, and presents a systematic framework for building reliable AI systems using application-layer evaluations ("evals").

Real World Development with GitHub Copilot and VS Code — Harald Kirschner, Christopher Harrison

A deep dive into "Vibe Coding," a development methodology that prioritizes outcomes over code-level details, using the advanced AI features of VS Code and GitHub Copilot. The talk explores three stages of this methodology—YOLO, Structured, and Spectrum—and demonstrates how to leverage agent modes, custom instructions, reusable prompts, and the Model Context Protocol (MCP) to enhance productivity from rapid prototyping to enterprise-scale development.

The 2025 AI Engineering Report — Barr Yaron, Amplify

Barr Yaron of Amplify Partners presents early findings from the 2025 State of AI Engineering survey, covering LLM usage, customization techniques like RAG and fine-tuning, the state of AI agents, key challenges like evaluation, and community perspectives on the future of AI.

Prompt Engineering for Generative AI • James Phoenix, Mike Taylor & Phil Winder

Authors James Phoenix and Mike Taylor discuss the evolution of prompt engineering from a creative art to a rigorous engineering discipline. They cover the core principles of prompting, the importance of programmatic evaluation, the role of agents, and how to manage application lifecycles as models evolve.