Prompt injection

A new take on bug bounties, AI red teams and our New Year’s resolutions

IBM's Security Intelligence podcast discusses key cybersecurity trends for 2026, including the shift to operational resilience, Microsoft's expanded bug bounty for third-party code, the long-tail impact of the LastPass breach, OpenAI's use of AI for automated red teaming against prompt injections, and the commercialization of ClickFix attacks.

Is ChatGPT Atlas safe? Plus: invisible worms, ghost networks and the AWS outage

A discussion on the security risks of new AI browsers like ChatGPT Atlas, the rise of malware distribution through trusted platforms like YouTube, the emergence of "post-infrastructure" malware like GlassWorm, corporate negligence in mobile security, and the critical lessons in resiliency from the recent AWS outage.

Ex-DeepMind: How To Actually Protect Your Data From AI

Dr. Ilia Shumailov, a former DeepMind AI security researcher, explains why traditional security fails for AI agents. He details the unique threat model of agents and the dangers of supply chain attacks and architectural backdoors, and proposes a system-level solution called CaMeL that enforces security policies by design, separating model reasoning from data execution.

How to scam an AI agent, DDoS attack trends and busting cybersecurity myths

A discussion on novel methods for hijacking AI agents through social engineering, the evolution of DDoS attacks, the legacy of Zero Trust, and the glaring security flaws in AI training data apps.

Zero-Click Attacks: AI Agents and the Next Cybersecurity Challenge

Explores the mechanics of zero-click attacks, which require no user interaction, and details how the integration of autonomous AI agents can amplify these threats. The discussion covers historical examples like Pegasus and proposes a multi-layered defense strategy, including AI firewalls, the principle of least privilege, and a zero-trust architecture.

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Explores the security challenges of multi-agent systems (MAS) and how to apply Zero Trust principles to mitigate risks like prompt injection, privilege escalation, and data leakage. The talk details a reference architecture and practical strategies for building secure, autonomous systems.