Ai security

Beyond phishing: Cyber threats in the age of AI with Four Flynn (pt. 1)

Beyond phishing: Cyber threats in the age of AI with Four Flynn (pt. 1)

VP of Security and Privacy at Google DeepMind, Four Flynn, discusses the landmark 'Operation Aurora' cyberattack, the 'defender's dilemma,' and how AI is now being used both to create novel threats and to build a new generation of defenses to find and automatically patch software vulnerabilities.

Ex-DeepMind: How To Actually Protect Your Data From AI

Ex-DeepMind: How To Actually Protect Your Data From AI

Dr. Ilia Shumailov, former DeepMind AI Security Researcher, explains why traditional security fails for AI agents. He details the unique threat model of agents, the dangers of supply chain attacks and architectural backdoors, and proposes a system-level solution called CAML to enforce security policies by design, separating model reasoning from data execution.

The AI vulnerability apocalypse, a new strain of Petya and dumb cybersecurity rules

The AI vulnerability apocalypse, a new strain of Petya and dumb cybersecurity rules

Panelists debate the likelihood of an "AI vulnerability cataclysm", discussing whether AI will overwhelm defenses or if it's an arms race where both attackers and defenders level up. The discussion covers the return of threat group Scattered Spider using AI-powered vishing, the persistent and significant risks of cloud misconfigurations, the emergence of firmware-level ransomware like HybridPetya, and the importance of focusing on security fundamentals and user education over punitive rules.

Trust at Scale: Security and Governance for Open Source Models // Hudson Buzby // MLOps Podcast #338

Trust at Scale: Security and Governance for Open Source Models // Hudson Buzby // MLOps Podcast #338

Hudson Buzby from JFrog discusses the critical security, governance, and legal challenges enterprises face when adopting open-source AI models. He highlights the risks lurking in repositories like Hugging Face and argues for a centralized, curated AI gateway as the essential framework for enabling safe, scalable, and cost-effective AI development.

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Explore the security challenges of Multi-Agent Systems (MAS) and learn how to apply Zero Trust principles to mitigate risks like prompt injection, privilege escalation, and data leakage. This summary details a reference architecture and practical strategies for building secure, autonomous systems.

Security & AI Governance: Reducing Risks in AI Systems

Security & AI Governance: Reducing Risks in AI Systems

The video explains the distinct but complementary roles of AI governance and security in mitigating AI risks. It contrasts their focuses, from self-inflicted policy violations (governance) to intentional external attacks (security), and proposes a layered framework combining both for comprehensive protection.