Safety and security for code-executing agents — Fouad Matin, OpenAI (Codex, Agent Robustness)
Fouad Matin from OpenAI's Agent Robustness and Control team discusses the safety and security challenges of code-executing AI agents. He explores the shift from asking whether models *can* execute code to defining what they *should* be allowed to do, presenting practical safeguards such as sandboxing, network control, and human review, and drawing on OpenAI's experience building Code Interpreter and the open-source Codex CLI.
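To make those safeguards concrete, here is a minimal sketch (not from the talk, and not how Codex CLI is implemented) of the pattern they describe: a command allowlist with human-review escalation in front of execution, and bubblewrap (`bwrap`) on Linux providing a read-only filesystem, a single writable workspace, and no network access. The `SAFE_COMMANDS` set, the `/tmp/agent-workspace` path, and the function names are illustrative assumptions.

```python
import os
import shlex
import subprocess

# Illustrative allowlist; a real policy would also inspect arguments and paths.
SAFE_COMMANDS = {"ls", "cat", "grep", "pytest"}

def human_approves(command: str) -> bool:
    """Human review: escalate anything outside the allowlist to the operator."""
    answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_sandboxed(command: str, workdir: str = "/tmp/agent-workspace") -> str:
    """Run an agent-proposed command with no network and one writable directory.

    Assumes a Linux host with bubblewrap (bwrap) installed.
    """
    argv = shlex.split(command)
    if argv[0] not in SAFE_COMMANDS and not human_approves(command):
        return "[command blocked]"
    os.makedirs(workdir, exist_ok=True)
    sandboxed = [
        "bwrap",
        "--ro-bind", "/", "/",            # host filesystem mounted read-only
        "--dev", "/dev",                  # fresh /dev, no host devices
        "--proc", "/proc",
        "--bind", workdir, "/workspace",  # the only writable path
        "--chdir", "/workspace",
        "--unshare-net",                  # network control: no outbound access
        "--die-with-parent",              # kill the sandbox if the agent dies
        *argv,
    ]
    try:
        result = subprocess.run(sandboxed, capture_output=True, text=True, timeout=60)
    except subprocess.TimeoutExpired:
        return "[command timed out]"
    return result.stdout + result.stderr
```

A real deployment would likely layer more on top, for example seccomp filters or an egress proxy for selectively allowed network access, but the shape is the same: policy check, human gate, then OS-level isolation around whatever actually runs.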