LLM

The Top 100 Most Used AI Apps in 2025

In the fifth edition of the a16z Consumer AI 100, an analysis of the most-used AI-native products reveals a market that is beginning to stabilize after a period of chaotic growth. Key trends identified include the continued dominance of AI companionship and creative tools, the significant market entry of major players like Google and xAI's Grok, the rise of Chinese AI companies on the global stage, and the emergence of a powerful new category: "vibe coding." The data suggests a future of increased verticalization, prosumer tool adoption, and the development of more sophisticated network effects beyond simple data acquisition.

Too much lock-in for too little gain: agent frameworks are a dead-end // Valliappa Lakshmanan

Lak Lakshmanan presents a robust architecture for building production-quality, framework-agnostic agentic systems. He advocates for using simple, composable GenAI patterns, off-the-shelf tools for governance, and a strong emphasis on a human-in-the-loop design to create continuously learning systems that avoid vendor lock-in.
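
To make the "simple, composable GenAI patterns" point concrete, here is a minimal sketch of a framework-free agent loop in plain Python. The `call_llm` contract (return either a tool request or a final answer) and the `tools` registry are illustrative assumptions, not Lakshmanan's code; any provider client can be plugged into the same shape.

```python
# A minimal, framework-free agent loop: plain functions and a dict of tools,
# no orchestration framework. `call_llm` is a hypothetical contract supplied
# by the caller (any provider client can satisfy it).
import json
from typing import Callable

def run_agent(
    user_request: str,
    call_llm: Callable[[list[dict]], dict],      # returns {"tool": ..., "args": ...} or {"answer": ...}
    tools: dict[str, Callable[[dict], str]],     # tool name -> ordinary Python function
    max_steps: int = 5,
) -> str:
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        if "answer" in decision:                              # the model has finished
            return decision["answer"]
        result = tools[decision["tool"]](decision["args"])    # plain function call, no framework
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted; escalate to a human reviewer."
```

Because the loop owns the message list and the tool registry directly, swapping providers or adding governance checks is a local change rather than a framework migration, which is the lock-in argument in miniature.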

From Spikes to Stories: AI-Augmented Troubleshooting in the Network Wild // Shraddha Yeole

Shraddha Yeole from Cisco ThousandEyes explains how they are transforming network observability by moving from complex dashboards to AI-augmented storytelling. The session details their use of an LLM-powered agent to interpret vast telemetry data, accelerate fault isolation, and improve MTTR, covering the technical architecture, advanced prompt engineering techniques, evaluation strategies, and key challenges.
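
As a rough illustration of the telemetry-to-story idea (not ThousandEyes' actual pipeline), the sketch below pre-aggregates latency samples into the anomalies worth explaining and wraps them in a troubleshooting prompt. The field names and the 2x-median spike heuristic are assumptions for the example.

```python
# Illustrative only: compress latency telemetry into the anomalies worth
# explaining, then wrap them in a troubleshooting prompt for an LLM.
from statistics import median

def summarize_for_llm(samples: list[dict]) -> str:
    baseline = median(s["latency_ms"] for s in samples)
    spikes = [s for s in samples if s["latency_ms"] > 2 * baseline]
    lines = [f"{s['timestamp']} hop={s['hop']} latency={s['latency_ms']}ms" for s in spikes]
    return f"Baseline latency ~{baseline:.0f}ms. Anomalous samples:\n" + "\n".join(lines)

def build_troubleshooting_prompt(samples: list[dict]) -> str:
    return (
        "You are a network troubleshooting assistant.\n"
        "From the telemetry below, name the most likely fault domain "
        "(access, ISP, backbone, or application), cite the evidence, "
        "and suggest the next check to run.\n\n" + summarize_for_llm(samples)
    )

samples = [
    {"timestamp": "12:00", "hop": "edge-1", "latency_ms": 18},
    {"timestamp": "12:01", "hop": "edge-1", "latency_ms": 22},
    {"timestamp": "12:02", "hop": "core-2", "latency_ms": 19},
    {"timestamp": "12:03", "hop": "core-2", "latency_ms": 21},
    {"timestamp": "12:04", "hop": "isp-3", "latency_ms": 210},
    {"timestamp": "12:05", "hop": "isp-3", "latency_ms": 245},
]
print(build_troubleshooting_prompt(samples))
```

The pre-aggregation step matters: the model sees a compact story seed rather than raw metrics, which keeps prompts small and answers grounded in the spikes that drive MTTR.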

AI Agents & LLMs: Real-Time IT Issue Prediction & Prevention

Amanda Downie explains the shift from reactive IT firefighting to proactive optimization, detailing how AI agents and LLMs use predictive analytics, topology mapping, and continuous learning loops to anticipate and prevent system issues before they occur.
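
A toy example of the proactive pattern: extrapolate a resource trend and raise an alert before the threshold is crossed. The linear fit, the 24-hour horizon, and the disk-usage metric are illustrative choices, not details from the piece.

```python
# Predict when a resource will be exhausted and alert ahead of time.
# The simple linear trend and thresholds are illustrative assumptions.
def hours_until_full(usage_history: list[float], capacity: float = 100.0) -> float | None:
    """Fit a linear trend to hourly usage percentages and project time to capacity."""
    n = len(usage_history)
    xs = range(n)
    x_mean, y_mean = (n - 1) / 2, sum(usage_history) / n
    slope_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage_history))
    slope_den = sum((x - x_mean) ** 2 for x in xs)
    slope = slope_num / slope_den if slope_den else 0.0
    if slope <= 0:
        return None                               # not trending toward exhaustion
    return (capacity - usage_history[-1]) / slope

history = [62.0, 64.5, 67.2, 69.8, 72.5]          # % disk used, one sample per hour
eta = hours_until_full(history)
if eta is not None and eta < 24:
    print(f"Proactive alert: disk projected to fill in ~{eta:.0f} hours")
```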

Advanced Context Engineering for Agents

Dexter Horthy of HumanLayer explains why naive AI coding agents fail in complex software projects and introduces 'Advanced Context Engineering.' He details a spec-first, three-phase workflow (Research, Plan, Implement) designed to manage context intentionally, keeping utilization below 40% to maximize model performance. This approach uses subagents and frequent compaction to turn AI from a prototyping tool into a production-ready system for large, brownfield codebases.
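
The sketch below shows one way the "stay under ~40% context utilization" rule could be enforced in an agent loop. The whitespace token heuristic, the 200k window, and the `summarize` hook are assumptions for illustration, not the workflow's actual implementation.

```python
# Enforce a context-utilization budget by compacting older turns.
from typing import Callable

CONTEXT_WINDOW_TOKENS = 200_000   # assumed model limit
TARGET_UTILIZATION = 0.40         # the ~40% target from the talk

def rough_token_count(messages: list[dict]) -> int:
    # crude ~1 token per word estimate; a real agent would use the model's tokenizer
    return sum(len(m["content"].split()) for m in messages)

def maybe_compact(messages: list[dict], summarize: Callable[[list[dict]], str]) -> list[dict]:
    """Replace older turns with a compact summary once the budget is exceeded."""
    utilization = rough_token_count(messages) / CONTEXT_WINDOW_TOKENS
    if utilization <= TARGET_UTILIZATION or len(messages) <= 4:
        return messages
    head, tail = messages[:-4], messages[-4:]      # keep the latest turns verbatim
    note = summarize(head)                         # e.g. a subagent that writes a research/plan note
    return [{"role": "system", "content": f"Compacted history: {note}"}] + tail
```

Running this check before every model call is what keeps long brownfield sessions from drifting into the degraded high-utilization regime the talk warns about.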

Using LongMemEval to Improve Agent Memory

Sam Bhagwat of Mastra details their process for optimizing AI agent memory using the LongMemEval benchmark. He breaks memory down into subtasks such as temporal reasoning and knowledge updates, and shares how improvements like tailored templates, targeted data updates, and structured message formatting led to state-of-the-art performance, emphasizing the importance of iterative evaluation.
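
To show what "knowledge updates" and "structured message formatting" can look like in practice, here is a hypothetical sketch (not Mastra's code): memories are stored as dated key-value entries so newer facts overwrite older ones, and the context block exposes timestamps for temporal reasoning.

```python
# A toy memory store: dated, structured facts instead of raw chat transcripts.
from dataclasses import dataclass
from datetime import date

@dataclass
class MemoryEntry:
    key: str              # e.g. "user.home_city"
    value: str
    observed_on: date

class AgentMemory:
    def __init__(self) -> None:
        self._entries: dict[str, MemoryEntry] = {}

    def update(self, entry: MemoryEntry) -> None:
        # knowledge update: a newer observation replaces an older one for the same key
        current = self._entries.get(entry.key)
        if current is None or entry.observed_on >= current.observed_on:
            self._entries[entry.key] = entry

    def as_context_block(self) -> str:
        # structured, dated lines so the model can reason about when facts were true
        lines = [f"- [{e.observed_on}] {e.key} = {e.value}" for e in self._entries.values()]
        return "Known facts (most recent wins):\n" + "\n".join(lines)

memory = AgentMemory()
memory.update(MemoryEntry("user.home_city", "Boston", date(2024, 3, 1)))
memory.update(MemoryEntry("user.home_city", "Denver", date(2025, 1, 15)))  # knowledge update
print(memory.as_context_block())
```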