AI Security

Handling AI-Generated Code: Challenges & Best Practices • Roman Zhukov & Damian Brady

Roman Zhukov (Red Hat) and Damian Brady (GitHub) explore the evolving landscape of AI-assisted software development, discussing its impact on developer workflows, code quality, security, and the future of developer roles. They emphasize that while AI tools are powerful amplifiers, human oversight remains essential for quality, security, and legal compliance.

Securing AI Agents with Zero Trust

This post explores how to secure modern agentic AI systems by applying the core principles of Zero Trust. It details the unique attack surfaces of AI agents, such as prompt injection and model poisoning, and outlines a comprehensive security architecture including non-human identity management, AI firewalls, and the critical role of human oversight.
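The Zero Trust principles described above — non-human identity management, least privilege, and auditable human oversight — can be sketched as a gateway that re-verifies an agent's credential on every tool call. This is a minimal illustration, not an architecture from the post; the names `AgentIdentity` and `ToolGateway` are hypothetical.

```python
# Minimal Zero Trust sketch for an AI agent's tool calls.
# All names here (AgentIdentity, ToolGateway) are illustrative assumptions.
import time
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    agent_id: str
    scopes: frozenset   # permissions granted to this non-human identity
    expires_at: float   # short-lived credential, forcing re-verification


class ToolGateway:
    """Mediates every tool call: never trust, always verify."""

    def __init__(self, audit_log):
        self.audit_log = audit_log

    def call(self, identity: AgentIdentity, tool: str, args: dict):
        # 1. Reject expired credentials -- no standing trust between calls.
        if time.time() >= identity.expires_at:
            self.audit_log.append((identity.agent_id, tool, "DENY: expired"))
            raise PermissionError("credential expired")
        # 2. Enforce least privilege per tool, not per session.
        if tool not in identity.scopes:
            self.audit_log.append((identity.agent_id, tool, "DENY: out of scope"))
            raise PermissionError(f"{tool} not in scopes")
        # 3. Record the decision so humans can review agent behavior.
        self.audit_log.append((identity.agent_id, tool, "ALLOW"))
        return f"executed {tool}"
```

The key design choice is that the check runs per call, so a compromised or prompt-injected agent cannot escalate beyond its narrow, expiring scope.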

Codex launch & OpenClaw/Moltbook chaos: This week in AI agents

The panel discusses OpenAI's new Codex application, framing it as a necessary 'table stakes' move in the competitive AI coding agent market rather than a game-changer. The conversation pivots to the importance of agent orchestration as the next frontier for value creation and monetization. They also explore the Moltbook (OpenClaw) phenomenon—a social network for AI agents—debating whether it's a valuable sociological experiment or a mere novelty, while highlighting the significant security vulnerabilities and practical hurdles it exposes.

Securing & Governing Autonomous AI Agents: Risks & Safeguards

Experts Jeff Crume and Josh Spurgin explore the critical security and governance challenges posed by autonomous AI agents. They detail common threats like prompt injection, data poisoning, and model theft, and discuss governance issues such as bias, transparency, and accountability, providing a set of actionable safeguards to build secure, trustworthy, and compliant AI systems.

MCP Security: What Happens When Your Agents Talk to Everything?

A deep dive into the security vulnerabilities of the Model Context Protocol (MCP) for AI agents. The talk explores how identity loss, "all-or-nothing" permissions, and disappearing audit trails create significant attack surfaces, and presents solutions like identity chain tracking, context-aware permissions, and intelligent auditing to secure agent-to-tool communication.
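Two of the mitigations named above — identity chain tracking and context-aware permissions — can be sketched together: each tool call carries the full delegation path so the originating principal is never lost, and the grant depends on context rather than being all-or-nothing. This is a hypothetical illustration; `authorize` and the `ALLOWED` table are assumptions, not MCP APIs.

```python
# Hypothetical sketch: identity chain tracking plus context-aware permissions
# for agent-to-tool calls. None of these names come from the MCP spec.

ALLOWED = {
    # (originating principal, tool) -> contexts in which the call is permitted
    ("alice", "read_file"): {"workspace"},
    ("alice", "send_email"): set(),   # granted to alice directly, never via agents
}


def authorize(chain: list, tool: str, context: str, audit: list) -> bool:
    """chain is the full delegation path, e.g. ["alice", "planner", "coder"]."""
    origin = chain[0]  # identity is preserved down the chain, not lost
    contexts = ALLOWED.get((origin, tool))
    decision = contexts is not None and context in contexts
    # Every decision is logged with the full chain, so the audit trail
    # shows which agent acted on whose behalf, and why.
    audit.append({"chain": chain, "tool": tool,
                  "context": context, "allowed": decision})
    return decision
```

Because the check keys on the chain's origin and the call's context, a downstream agent cannot silently inherit broader permissions than the human who started the chain intended.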

Hacking AI Systems: How to (Still) Trick Artificial Intelligence • Katharine Jarmul • GOTO 2025

To build secure AI systems, we must first learn to break them. Katharine Jarmul explores the landscape of adversarial AI, detailing how attackers exploit fundamental weaknesses in deep learning models—from poisoned training data and overparameterization to the attention mechanism itself. This talk provides a practical taxonomy of attacks and a primer on building robust defenses.
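Training-data poisoning, one of the attack classes Jarmul covers, can be shown in miniature: the "model" below is a trivial mean-threshold classifier, and the point is only that a single attacker-controlled label flip shifts its decision boundary — not that real deep-learning attacks work this way. The whole example is an illustrative assumption.

```python
# Toy label-flipping poisoning attack against a trivial 1-D classifier.
def fit_threshold(samples):
    """Decision boundary = midpoint between the two class means."""
    zeros = [x for x, y in samples if y == 0]
    ones = [x for x, y in samples if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2


clean = [(x, 0) for x in (1.0, 2.0, 3.0)] + [(x, 1) for x in (7.0, 8.0, 9.0)]
# The attacker flips the label of one boundary point they control.
poisoned = clean[:-1] + [(9.0, 0)]

t_clean = fit_threshold(clean)      # boundary from clean data: 5.0
t_poisoned = fit_threshold(poisoned)  # boundary drifts upward: 5.625
```

Inputs between the two thresholds are now classified differently than before the attack — the attacker moved the boundary without ever touching the training code.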