Prompt injection

Zero Trust for Multi-Agent Systems // Surendra Narang | Venkata Gopi Kolla

Explore the security challenges of Multi-Agent Systems (MAS) and learn how to apply Zero Trust principles to mitigate risks like prompt injection, privilege escalation, and data leakage. This summary details a reference architecture and practical strategies for building secure, autonomous systems.
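The core Zero Trust idea the summary mentions can be sketched as explicit per-request authorization between agents: no agent-to-agent call is trusted implicitly, and every tool invocation is checked against a deny-by-default policy. The names below (`AgentPolicy`, `authorize`) are illustrative, not taken from the talk's reference architecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    # Illustrative zero-trust policy: an explicit allowlist of
    # (agent_id, tool) pairs; anything not listed is denied.
    allowed_tools: frozenset

def authorize(policy: AgentPolicy, agent_id: str, tool: str) -> bool:
    # Deny by default: each call is checked individually,
    # with no session-level or transitive trust between agents.
    return (agent_id, tool) in policy.allowed_tools

policy = AgentPolicy(allowed_tools=frozenset({
    ("planner", "search"),
    ("coder", "run_tests"),
}))

assert authorize(policy, "planner", "search")
assert not authorize(policy, "planner", "run_tests")  # privilege escalation blocked
assert not authorize(policy, "unknown", "search")     # unregistered agent blocked
```

Checking every call at the point of use, rather than granting broad capabilities at session start, is what limits the blast radius of prompt injection or a compromised agent.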

Security & AI Governance: Reducing Risks in AI Systems

The video explains the distinct but complementary roles of AI governance and security in mitigating AI risks. It contrasts their focuses: governance addresses self-inflicted policy violations, while security defends against intentional external attacks. The talk proposes a layered framework combining both for comprehensive protection.

Safety and security for code executing agents — Fouad Matin, OpenAI (Codex, Agent Robustness)

Fouad Matin from OpenAI's Agent Robustness and Control team discusses the critical safety and security challenges of code-executing AI agents. He explores the shift from models that *can* execute code to defining what they *should* be allowed to do, presenting practical safeguards like sandboxing, network control, and human review, drawing from OpenAI's experience building Codex and the open-source Codex CLI.
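The sandboxing safeguard mentioned above can be sketched, in a deliberately minimal form, as running generated code in a separate interpreter process with a wall-clock timeout. This is not how Codex implements isolation; it only illustrates the principle of bounding what and for how long untrusted code may run. Real sandboxes add filesystem and network isolation on top.

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout_s: float = 2.0) -> str:
    # Illustrative sketch: execute untrusted code in a child interpreter.
    # "-I" runs Python in isolated mode (ignores environment variables
    # and user site-packages); the timeout bounds runaway code.
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout

print(run_sandboxed("print(2 + 2)"))  # prints "4"
```

A runaway loop such as `run_sandboxed("while True: pass", timeout_s=1.0)` raises `subprocess.TimeoutExpired` instead of hanging the host, which is the property the talk's "should be allowed to do" framing is after.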