A deep dive into the OpenAI Responses API, covering its architecture, its advantages over Chat Completions, and practical applications for building persistent, multimodal agents with GPT-5, including live demos of migration and multi-tool workflows.
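A minimal sketch of what that migration can look like with the official `openai` Python SDK; the prompts are placeholders, and `gpt-5` is simply the model named in the talk, so substitute any model available to you.

```python
# Hypothetical migration sketch: Chat Completions -> Responses API.
# Assumes the official `openai` Python SDK (v1+) and a valid OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Before: Chat Completions, where the caller resends the full message history each turn.
chat = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Summarize the Responses API in one line."}],
)
print(chat.choices[0].message.content)

# After: Responses API, which can persist conversation state on the server side.
first = client.responses.create(
    model="gpt-5",
    input="Summarize the Responses API in one line.",
)
print(first.output_text)

# A follow-up turn can reference the previous response instead of resending history.
follow_up = client.responses.create(
    model="gpt-5",
    previous_response_id=first.id,
    input="Now give one concrete migration tip.",
)
print(follow_up.output_text)
```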
Orchestrating Complex AI Workflows with AI Agents & LLMs
Eric Pritchett, President and COO of Terzo, explains the transformative impact of AI agents and LLMs on workflow orchestration. He contrasts the goal-oriented, flexible nature of AI agents with the limitations of traditional RPA, illustrating how a multi-agent system can automate complex processes like quote generation, marking a paradigm shift in automation capabilities.
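To make the contrast with scripted RPA concrete, here is an illustrative sketch of that orchestration pattern; the agent names, catalog, and discount rule are invented for the example and are not Terzo's implementation.

```python
# Hypothetical sketch of goal-oriented, multi-agent quote generation.
from dataclasses import dataclass


@dataclass
class QuoteRequest:
    customer: str
    product: str
    quantity: int


def pricing_agent(req: QuoteRequest) -> float:
    """Looks up a unit price; a real agent might combine an LLM with an ERP lookup."""
    catalog = {"analytics-suite": 1200.0, "support-plan": 300.0}
    return catalog.get(req.product, 500.0) * req.quantity


def discount_agent(req: QuoteRequest, subtotal: float) -> float:
    """Applies a policy-based discount; a real agent would reason over contract terms."""
    return subtotal * (0.9 if req.quantity >= 10 else 1.0)


def drafting_agent(req: QuoteRequest, total: float) -> str:
    """Turns the structured result into a customer-facing quote."""
    return f"Quote for {req.customer}: {req.quantity} x {req.product} = ${total:,.2f}"


def orchestrate_quote(req: QuoteRequest) -> str:
    """Unlike a fixed RPA script, the orchestrator composes specialist agents toward a goal."""
    subtotal = pricing_agent(req)
    total = discount_agent(req, subtotal)
    return drafting_agent(req, total)


print(orchestrate_quote(QuoteRequest("Acme Corp", "analytics-suite", 12)))
```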
Why AI Will Create Abundance and Transform Customer Experience: Cresta CEO Ping Wu
Ping Wu, CEO of Cresta, and Sequoia’s Doug Leone discuss the transformation of contact centers with AI. They explore a dual approach, blending AI-powered assistance for human agents with full automation, to meet enterprises where they are. Wu details the immense technical challenges of deploying AI at scale, from orchestrating over 20 models in real time with sub-800ms latency to integrating with legacy on-premises systems. Leone provides a framework for building AI companies at speed, arguing that value will accrue in the application layer and that we are at the beginning of an "Industrial Revolution 2.0".
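The sketch below is not Cresta's architecture; it only illustrates the general pattern of fanning out to many model calls concurrently while enforcing an overall latency budget, using `asyncio` and simulated latencies.

```python
# Illustrative sketch: parallel fan-out to many models under a latency budget.
import asyncio
import random


async def call_model(name: str) -> tuple[str, str]:
    """Stand-in for a real model call; latency is simulated."""
    await asyncio.sleep(random.uniform(0.05, 0.3))
    return name, f"suggestion from {name}"


async def orchestrate(models: list[str], budget_s: float = 0.8) -> dict[str, str]:
    """Run all model calls concurrently and keep whatever finishes within the budget."""
    tasks = [asyncio.create_task(call_model(m)) for m in models]
    done, pending = await asyncio.wait(tasks, timeout=budget_s)
    for task in pending:  # drop anything that would blow the sub-800ms budget
        task.cancel()
    return dict(task.result() for task in done)


results = asyncio.run(orchestrate([f"model-{i}" for i in range(20)]))
print(f"{len(results)} of 20 models responded within budget")
```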
How to build agents that take ACTION
Alex Salazar, CEO of Arcade, argues that the true value of AI is not in chatbots but in agents that can take real-world actions. He details the primary reasons agents fail to reach production—security, cost, latency, and accuracy—and introduces an "Agent Hierarchy of Needs" as a framework for building robust, production-ready agents. The talk emphasizes a critical shift from exposing raw APIs to building intention-based tools and solving the complex challenge of agent authorization through a delegated model.
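The contrast below is a hypothetical sketch of that shift (it is not Arcade's SDK): a raw API wrapper forces the agent to supply low-level plumbing, while an intention-based tool accepts only what a user would naturally express.

```python
# Hypothetical contrast: raw API wrapper vs. intention-based tool.
import datetime


# Raw-API style: the agent must know calendar IDs, ISO timestamps, and payload shapes,
# which multiplies the ways a plan can fail.
def create_calendar_event(calendar_id: str, start_iso: str, end_iso: str,
                          attendee_emails: list[str], title: str) -> dict:
    return {"calendar_id": calendar_id, "start": start_iso, "end": end_iso,
            "attendees": attendee_emails, "title": title}


# Intention-based style: the tool encodes the user's intent and hides the plumbing,
# so the agent only supplies what a person would naturally say.
def schedule_meeting(with_person: str, topic: str, duration_minutes: int = 30) -> dict:
    start = datetime.datetime.now() + datetime.timedelta(hours=1)
    end = start + datetime.timedelta(minutes=duration_minutes)
    return create_calendar_event(
        calendar_id="primary",
        start_iso=start.isoformat(),
        end_iso=end.isoformat(),
        attendee_emails=[with_person],
        title=topic,
    )


print(schedule_meeting("jordan@example.com", "Quarterly pricing review"))
```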
Columbia CS Professor: Why LLMs Can’t Discover New Science
Professor Vishal Misra of Columbia University introduces a formal model for understanding Large Language Models (LLMs) based on information theory. He explains how LLMs reason by navigating "Bayesian manifolds", uses concepts like token entropy to explain the mechanics of chain-of-thought, and defines true AGI as the ability to create new manifolds rather than merely explore existing ones.
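Token entropy here is the standard Shannon entropy of the next-token distribution, H = -Σ p(t) log₂ p(t); the short sketch below computes it for two made-up distributions to show how a confident step differs from an uncertain one.

```python
# Shannon entropy of a next-token distribution, in bits.
# The probabilities below are invented for illustration.
import math


def token_entropy(probs: list[float]) -> float:
    """H = -sum(p * log2(p)) over the next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


confident_step = [0.97, 0.02, 0.01]          # one continuation dominates
uncertain_step = [0.25, 0.25, 0.25, 0.25]    # many continuations equally likely

print(f"confident step: {token_entropy(confident_step):.2f} bits")   # ~0.22 bits
print(f"uncertain step: {token_entropy(uncertain_step):.2f} bits")   # 2.00 bits
```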
MCP vs gRPC: How AI Agents & LLMs Connect to Tools & Data
A deep dive into how AI agents connect to external tools, comparing the AI-native Model Context Protocol (MCP) with the high-performance gRPC framework. The talk explores their respective architectures, discovery mechanisms, and performance trade-offs, concluding with a vision for their complementary roles in future AI systems.
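As a minimal sketch of the discovery difference, the example below exposes one tool with the official `mcp` Python SDK's FastMCP helper (assuming `pip install mcp`); whereas gRPC contracts live in compiled .proto files, MCP tools are self-describing, so a client can list the tool's name, schema, and docstring at runtime. The weather tool itself is illustrative.

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (FastMCP helper).
from mcp.server.fastmcp import FastMCP

server = FastMCP("weather-demo")


@server.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for the given city (canned data for illustration)."""
    return f"Forecast for {city}: mild, light winds, low chance of rain."


if __name__ == "__main__":
    # Serves over stdio so an MCP-capable agent can discover and call the tool.
    server.run()
```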