AI Security

Ex-DeepMind: How To Actually Protect Your Data From AI

Dr. Ilia Shumailov, former DeepMind AI Security Researcher, explains why traditional security fails for AI agents. He details the unique threat model of agents, the dangers of supply chain attacks and architectural backdoors, and proposes a system-level solution called CAML to enforce security policies by design, separating model reasoning from data execution.
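The core idea described here, separating the model's reasoning from the execution of actions over untrusted data, can be illustrated with a minimal sketch. This is not the actual CAML implementation; all names, the provenance labels, and the policy are hypothetical, chosen only to show a policy gate sitting between an agent's plan and its tool calls:

```python
# Illustrative sketch (hypothetical, not CAML itself): a policy gate that
# checks provenance before an agent's tool call is executed, so instructions
# smuggled into untrusted data cannot trigger side effects.

from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str        # e.g. "send_email"
    arg_source: str  # provenance of the argument: "user" or "untrusted_doc"
    arg: str

# Policy: tools with side effects may only consume user-provided values,
# never values derived from untrusted documents the agent has read.
SIDE_EFFECT_TOOLS = {"send_email", "delete_file"}

def policy_allows(call: ToolCall) -> bool:
    if call.tool in SIDE_EFFECT_TOOLS and call.arg_source != "user":
        return False
    return True

def execute(call: ToolCall) -> str:
    # The gate runs outside the model, so a prompt-injected plan
    # cannot talk its way past it.
    if not policy_allows(call):
        return f"BLOCKED: {call.tool} on {call.arg_source} data"
    return f"RAN: {call.tool}({call.arg})"

print(execute(ToolCall("send_email", "user", "alice@example.com")))
print(execute(ToolCall("send_email", "untrusted_doc", "attacker@evil.example")))
```

The point of the design is that the check is enforced in code, by provenance, rather than by asking the model to behave.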

The AI vulnerability apocalypse, a new strain of Petya and dumb cybersecurity rules

Panelists debate the likelihood of an "AI vulnerability cataclysm", discussing whether AI will overwhelm defenses or if it's an arms race where both attackers and defenders level up. The discussion covers the return of threat group Scattered Spider using AI-powered vishing, the persistent and significant risks of cloud misconfigurations, the emergence of firmware-level ransomware like HybridPetya, and the importance of focusing on security fundamentals and user education over punitive rules.

Trust at Scale: Security and Governance for Open Source Models // Hudson Buzby // MLOps Podcast #338

Hudson Buzby from JFrog discusses the critical security, governance, and legal challenges enterprises face when adopting open-source AI models. He highlights the risks lurking in repositories like Hugging Face and argues for a centralized, curated AI gateway as the essential framework for enabling safe, scalable, and cost-effective AI development.

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Explore the security challenges of Multi-Agent Systems (MAS) and learn how to apply Zero Trust principles to mitigate risks like prompt injection, privilege escalation, and data leakage. This summary details a reference architecture and practical strategies for building secure, autonomous systems.
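Zero Trust applied to agents means authenticating and authorizing every inter-agent request rather than trusting anything inside a perimeter. A minimal sketch of that idea follows; the roles, tokens, and permission table are invented for illustration and do not come from the talk's reference architecture:

```python
# Illustrative sketch (hypothetical names): zero-trust checks on every
# inter-agent request, combining identity verification with least privilege.

# Least-privilege permission table: each agent role gets only the
# actions it needs.
ROLE_PERMISSIONS = {
    "retrieval_agent": {"read_docs"},
    "planner_agent": {"read_docs", "call_tool"},
}

# Token-to-role mapping; in practice this would be a verified credential.
VALID_TOKENS = {"tok-retrieval": "retrieval_agent", "tok-planner": "planner_agent"}

def authorize(token: str, action: str) -> bool:
    # 1. Verify identity on every call -- no implicit trust between agents.
    role = VALID_TOKENS.get(token)
    if role is None:
        return False
    # 2. Enforce least privilege for that role.
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("tok-planner", "call_tool"))    # planner may call tools
print(authorize("tok-retrieval", "call_tool"))  # retrieval agent may not
```

Denying by default, as here, is what limits the blast radius of a compromised or prompt-injected agent to its own narrow permissions.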

Security & AI Governance: Reducing Risks in AI Systems

The video explains the distinct but complementary roles of AI governance and security in mitigating AI risks. It contrasts their focuses, from self-inflicted policy violations (governance) to intentional external attacks (security), and proposes a layered framework combining both for comprehensive protection.

The $10 Trillion AI Revolution: Why It’s Bigger Than the Industrial Revolution

Sequoia Capital's Konstantine Buhler presents an investment thesis on the AI-driven "Cognitive Revolution," framing it as a transformation larger and faster than the Industrial Revolution. The core of the thesis is the $10 trillion opportunity in automating the US services market and the shift in work from certainty to high leverage. Buhler outlines five current investment trends, including real-world validation over academic benchmarks and compute as the new production function, and five future themes Sequoia is betting on, such as persistent memory, AI-to-AI communication, and AI security.