LLMs

Traditional vs LLM Recommender Systems: Are They Worth It?

This summary explores Arpita Vats's insights on how Large Language Models (LLMs) are revolutionizing recommender systems. It contrasts the traditional, feature-engineering-heavy approach with the contextual understanding of LLMs, a shift that moves much of the engineering effort from hand-crafted features to prompt engineering. Key challenges such as inference latency and cost are discussed, along with practical mitigations like lightweight models, knowledge distillation, and hybrid architectures. The conversation also touches on advanced applications such as sequential recommendation and the future potential of agentic AI.

9 Commandments for Building AI Agents

A deep dive into the design principles for building effective AI agents, covering the evolution of the ReAct loop, the critical role of memory and learning from experience, the 'build vs. buy' dilemma for tooling, and the importance of abstracting all capabilities—including systems and people—as tools.
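
The "abstract everything as a tool" idea can be illustrated with a toy ReAct-style loop: the agent keeps a scratchpad as memory, asks a model what to do next, and routes every action, including asking a person, through one tool interface. The following Python sketch is illustrative only; call_llm and the tool names are hypothetical placeholders, not anything described in the episode.

# Toy ReAct-style loop: every capability, including a human, sits behind the
# same tool interface. call_llm() is a hypothetical placeholder for a model call.
from typing import Callable, Dict

def search_docs(query: str) -> str:
    return f"(stub) search results for {query!r}"

def ask_human(question: str) -> str:
    # A person is modelled as just another tool the agent can call.
    return input(f"Agent asks: {question}\n> ")

TOOLS: Dict[str, Callable[[str], str]] = {"search_docs": search_docs, "ask_human": ask_human}

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: should return "FINAL: <answer>" or "TOOL: <name> | <arg>".
    raise NotImplementedError

def react_loop(task: str, max_steps: int = 5) -> str:
    scratchpad = f"Task: {task}\n"            # memory of prior actions and observations
    for _ in range(max_steps):
        decision = call_llm(scratchpad)       # reason: decide the next step
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        name, arg = decision[len("TOOL:"):].split("|", 1)
        observation = TOOLS[name.strip()](arg.strip())   # act: invoke a tool
        scratchpad += f"Action: {decision}\nObservation: {observation}\n"
    return "Stopped: step budget exhausted."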

909: Causal AI — with Dr. Robert Osazuwa Ness

Researcher Robert Ness discusses the practical implementation of Causal AI, distinguishing it from correlation-based machine learning. He covers the essential role of assumptions about the data-generating process, key Python libraries like DoWhy and Pyro, the intersection with LLMs, and a step-by-step workflow for tackling causal problems.
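
A minimal sketch of DoWhy's own four-step workflow (model, identify, estimate, refute), which echoes the step-by-step approach mentioned above, run on synthetic data; the column names, confounder setup, and linear-regression estimator are illustrative assumptions, not details from the episode.

# DoWhy's four-step causal workflow on a toy dataset with one known confounder z.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                          # confounder
t = (z + rng.normal(size=n) > 0).astype(int)    # treatment influenced by z
y = 2.0 * t + 1.5 * z + rng.normal(size=n)      # outcome; true effect of t is 2.0
df = pd.DataFrame({"z": z, "t": t, "y": y})

# 1. Model: state assumptions about the data-generating process.
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["z"])

# 2. Identify: derive an estimand (backdoor adjustment on z).
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# 3. Estimate: fit the effect with a chosen estimator.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("Estimated effect of t on y:", estimate.value)   # should land near 2.0

# 4. Refute: stress-test the estimate against violated assumptions.
print(model.refute_estimate(estimand, estimate, method_name="random_common_cause"))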

The U.S. Can’t Build AI Without These Materials

The Western mining industry is broken, hampered by a talent drain, slow technology adoption, and misaligned incentives. A new, vertically integrated, software-first approach leveraging Reinforcement Learning (RL) and LLMs can build and operate mines and refineries faster, cheaper, and more flexibly, addressing critical geopolitical supply chain risks.

907: Neuroscience, AI and the Limitations of LLMs — with Dr. Zohar Bronfman

Zohar Bronfman discusses why current LLMs are not on a path to AGI, contrasting their combinatorial creativity with the transformational, domain-general intelligence of humans. He argues that predictive models, not generative ones, deliver the most business value and explains how his platform, Pecan AI, automates the critical data preparation bottleneck to democratize predictive analytics for all businesses.