Causal ai

915: How to Jailbreak LLMs (and How to Prevent It) — with Michelle Yi

915: How to Jailbreak LLMs (and How to Prevent It) — with Michelle Yi

Tech leader and investor Michelle Yi discusses the critical technical aspects of building trustworthy AI systems. She delves into adversarial attack and defense mechanisms, including red teaming, data poisoning, prompt stealing, and "slop squatting," and explores how advanced concepts like Constitutional AI and World Models can create safer, more reliable AI.

912: In Case You Missed It in July 2025  — with Jon Krohn (@JonKrohnLearns)

912: In Case You Missed It in July 2025 — with Jon Krohn (@JonKrohnLearns)

A review of five key interviews covering the importance of data-centric AI (DMLR) in specialized fields like law, the challenges of AI benchmarking, strategies for domain-specific model selection using red teaming, the power of AI in predicting human behavior, and the shift towards building causal AI models.

909: Causal AI — with Dr. Robert Usazuwa Ness

909: Causal AI — with Dr. Robert Usazuwa Ness

Researcher Robert Ness discusses the practical implementation of Causal AI, distinguishing it from correlation-based machine learning. He covers the essential role of assumptions about the data-generating process, key Python libraries like DoWhy and Pyro, the intersection with LLMs, and a step-by-step workflow for tackling causal problems.