Prompt injection

Securing AI Agents with Zero Trust

Securing AI Agents with Zero Trust

This post explores how to secure modern agentic AI systems by applying the core principles of Zero Trust. It details the unique attack surfaces of AI agents, such as prompt injection and model poisoning, and outlines a comprehensive security architecture including non-human identity management, AI firewalls, and the critical role of human oversight.

Securing & Governing Autonomous AI Agents: Risks & Safeguards

Securing & Governing Autonomous AI Agents: Risks & Safeguards

Experts Jeff Crume and Josh Spurgin explore the critical security and governance challenges posed by autonomous AI agents. They detail common threats like prompt injection, data poisoning, and model theft, and discuss governance issues such as bias, transparency, and accountability, providing a set of actionable safeguards to build secure, trustworthy, and compliant AI systems.

Securing AI Agents

Securing AI Agents

Jason Martin of Permiso Security discusses the exponential rise of AI agents in enterprises and the urgent security challenges they present. He covers the concept of Non-Human Identity (NHI), applying Zero Trust principles to ephemeral and over-permissioned agents, and outlines key attack vectors like prompt injection and data poisoning, while also exploring the potential of defensive AI to enhance security operations.

Ransomware whack-a-mole, AI agents as insider threats and how to hack a humanoid robot

Ransomware whack-a-mole, AI agents as insider threats and how to hack a humanoid robot

A discussion on the evolving cybersecurity landscape, covering the persistent threat of ransomware gangs adapting with AI, the critical failures in identity security highlighted by the Zestix case, the emergence of AI agents as a new class of insider threats, and the physical-world risks demonstrated by hacking humanoid robots.

Hacking AI Systems: How to (Still) Trick Artificial Intelligence • Katharine Jarmul • GOTO 2025

Hacking AI Systems: How to (Still) Trick Artificial Intelligence • Katharine Jarmul • GOTO 2025

To build secure AI systems, we must first learn to break them. Katharine Jarmul explores the landscape of adversarial AI, detailing how attackers exploit fundamental weaknesses in deep learning models—from poisoned training data and overparameterization to the attention mechanism itself. This talk provides a practical taxonomy of attacks and a primer on building robust defenses.

A new take on bug bounties, AI red teams and our New Year’s resolutions

A new take on bug bounties, AI red teams and our New Year’s resolutions

IBM's Security Intelligence podcast discusses key cybersecurity trends for 2026, including the shift to operational resilience, Microsoft's expanded bug bounty for third-party code, the long-tail impact of the LastPass breach, OpenAI's use of AI for automated red teaming against prompt injections, and the commercialization of ClickFix attacks.