Prompt injection

Enterprise-ready MCP // Jiquan Ngiam

Jiquan Ngiam, CEO of MintMCP, discusses the paradigm shift from static programs to dynamic AI agents, outlining the significant security risks involved (supply chain vulnerabilities, third-party data poisoning, and inadvertent agent behaviors) and presenting a three-pronged strategy for enterprise readiness: comprehensive monitoring, preventative guardrails, and secure, role-based deployment of MCP (Model Context Protocol) servers.

The #1 AI Agent on GitHub Was Never Read by Its Creator

Jason Martin of HiddenLayer discusses the significant security vulnerabilities of OpenClaw, a viral open-source AI personal assistant. The analysis covers critical flaws like prompt injection, insecure defaults, and the potential for creating sophisticated botnets, offering key lessons for securing the next generation of autonomous AI agents.

Time to become a hacker // Matt Sharp

In this talk, Matt Sharp argues that while 2025 is the year of AI agents, it is also the year of cybercrime. The rush to ship frictionless, user-friendly agents has led to a neglect of fundamental security principles, creating a perfect environment for attackers, who are now using these same powerful AI tools to innovate and scale their attacks.

MCP Security: The Exploit Playbook (And How to Stop Them)

Vitor, co-founder of Runlayer and former tech lead for Zapier Agents, provides a deep dive into the security vulnerabilities of the rapidly adopted MCP standard for AI agents. He outlines the primary attack vectors, including sophisticated prompt injections, supply chain attacks like 'rug-pulls', and tool schema manipulation, using real-world exploits as examples. The talk concludes with a multi-layered defensive strategy for users, developers, and enterprises to secure their AI agent deployments.
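To make the "rug-pull" attack concrete: a tool's schema is approved once, then silently changed later to smuggle malicious instructions into the agent's context. One common mitigation is to pin a fingerprint of each tool's schema at approval time and refuse the tool if it later drifts. The sketch below is illustrative only (not Runlayer's or any vendor's actual implementation; the class and tool names are hypothetical):

```python
import hashlib
import json


def schema_fingerprint(tool_schema: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the tool schema."""
    canonical = json.dumps(tool_schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


class PinnedToolRegistry:
    """Pin tool schemas when a human approves them; flag any later drift."""

    def __init__(self) -> None:
        self._pins: dict[str, str] = {}

    def approve(self, name: str, schema: dict) -> None:
        """Record the fingerprint of the schema as reviewed."""
        self._pins[name] = schema_fingerprint(schema)

    def verify(self, name: str, schema: dict) -> bool:
        """True only if the tool was approved and its schema is unchanged."""
        pinned = self._pins.get(name)
        return pinned is not None and pinned == schema_fingerprint(schema)
```

A host would call `verify` on every session before exposing the tool to the model, so a post-approval edit to a tool description (where injected instructions often hide) causes the tool to be blocked rather than silently trusted.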

Guide to Architect Secure AI Agents: Best Practices for Safety

AI agents offer immense power but come with significant security risks. This guide outlines a comprehensive architecture for securing AI agents using DevSecOps, robust access controls, threat monitoring, and a principle-of-least-privilege approach to mitigate dangers like prompt injection and data leaks.

AI Privilege Escalation: Agentic Identity & Prompt Injection Risks

Grant Miller explains how malicious actors exploit AI systems through privilege escalation, using techniques like prompt injection to compromise over-permissioned AI agents. He covers key mitigation strategies, including the principle of least privilege, robust access governance, dynamic context-based access, and continuous monitoring to secure agentic systems.
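The least-privilege and continuous-monitoring ideas from this talk can be sketched as a simple per-role allowlist that is checked (and logged) on every tool call an agent attempts. This is a minimal illustration under assumed role and tool names, not the speaker's implementation:

```python
# Hypothetical policy: each agent role may only invoke an explicit tool allowlist.
ROLE_TOOL_POLICY: dict[str, set[str]] = {
    "support-agent": {"search_tickets", "read_ticket"},
    "billing-agent": {"read_invoice"},
}


def authorize_tool_call(role: str, tool: str, audit_log: list) -> bool:
    """Allow the call only if the role's allowlist includes the tool.

    Every decision, allowed or denied, is appended to the audit log so
    unexpected privilege-escalation attempts show up in monitoring.
    """
    allowed = tool in ROLE_TOOL_POLICY.get(role, set())
    audit_log.append({"role": role, "tool": tool, "allowed": allowed})
    return allowed
```

Even a compromised prompt cannot make a "support-agent" invoke a billing tool here, because authorization is enforced outside the model rather than inside its instructions.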