Autonomous agents

Securing the AI Frontier: Irregular Founder Dan Lahav

Dan Lahav, co-founder of Irregular, discusses the future of "frontier AI security," a proactive approach for a world where AI models are autonomous agents. He explains how emergent behaviors, such as models socially engineering each other or outmaneuvering traditional defenses like Windows Defender, signal a major paradigm shift. Lahav argues that as economic activity shifts to AI-on-AI interactions, traditional security methods like anomaly detection will break down, forcing enterprises and governments to rethink defense from first principles.

Part 2: Social engineering, malware, and the future of cybersecurity in AI

A deep dive into the human side of cybersecurity, exploring the motivations of bad actors, the evolution of social engineering in the age of AI, and the defensive strategies being developed in response. The discussion covers the move beyond passwords toward passkeys and risk-based authentication, and confronts the complex security and privacy challenges introduced by autonomous agents.

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Explore the security challenges of Multi-Agent Systems (MAS) and learn how to apply Zero Trust principles to mitigate risks like prompt injection, privilege escalation, and data leakage. This summary details a reference architecture and practical strategies for building secure autonomous systems.
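To make the deny-by-default idea behind Zero Trust for agent-to-agent calls concrete, here is a minimal sketch: every call is authenticated and authorized per request against an explicit scope allow-list, so a compromised or prompt-injected agent cannot quietly escalate privileges. The names `AgentIdentity`, `ToolCall`, `POLICY`, and `authorize` are illustrative assumptions for this sketch, not the reference architecture described in the talk.

```python
# Minimal sketch of a deny-by-default authorization gate for agent-to-agent calls.
# All identifiers here are hypothetical, chosen only to illustrate the principle.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    verified: bool  # e.g. outcome of mTLS or signed-token verification


@dataclass(frozen=True)
class ToolCall:
    tool: str
    scope: str  # e.g. "read:tickets", "write:payments"


# Explicit allow-list of scopes per agent; anything not listed is denied.
POLICY: dict[str, set[str]] = {
    "triage-agent": {"read:tickets"},
    "billing-agent": {"read:invoices", "write:invoices"},
}


def authorize(caller: AgentIdentity, call: ToolCall) -> bool:
    """Zero-trust check: authenticate the caller and authorize the specific
    scope on every call, never trusting network location or prior calls."""
    if not caller.verified:                       # never trust unauthenticated peers
        return False
    allowed = POLICY.get(caller.agent_id, set())  # deny by default
    return call.scope in allowed


# Example: a triage agent can read tickets but is refused a payments write,
# which is the privilege-escalation pattern Zero Trust is meant to block.
print(authorize(AgentIdentity("triage-agent", True), ToolCall("tickets", "read:tickets")))    # True
print(authorize(AgentIdentity("triage-agent", True), ToolCall("payments", "write:payments")))  # False
```

In a real MAS deployment this check would sit in a gateway or sidecar in front of every tool and agent endpoint, with identities issued and rotated by a central authority rather than hard-coded.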

No Priors Ep. 123 | With ReflectionAI Co-Founder and CEO Misha Laskin

Misha Laskin, co-founder of Reflection AI and former researcher at Google DeepMind, discusses the company's mission to build superhuman autonomous systems. He introduces Asimov, a code comprehension agent aimed at the roughly 80% of an engineer's time spent understanding complex systems, rather than at code generation. Laskin delves into the intricacies of co-designing product and research, the critical role of customer-driven evaluations, the bottlenecks in scaling reinforcement learning (RL) — particularly the "reward problem" — and why he believes the future is one of "jagged superintelligence" emerging in specific, high-value domains like coding.