Security

Six Years of Rowhammer: Breakthroughs and Future Directions


Stefan Saroiu of Microsoft Research recounts Project STEMA's six-year effort against Rowhammer, the DRAM security flaw. He discusses how academic research kept the industry honest about DDR4 vulnerabilities, the development of Panopticon, the team's in-DRAM defense, and its evolution into PRAC, the industry-standard mitigation for DDR5, while noting that significant challenges and research opportunities remain.

Building Secure ReactJS Apps: Mastering Advanced Security Techniques • Jim Manico • GOTO 2024


A deep dive into ReactJS security, this presentation reframes the discussion around leveraging AI for secure code generation. It argues that by writing detailed, specific security prompts, developers can steer an AI assistant to act as an expert security coder, turning it from a flawed tool into a powerful ally for building robust, secure applications.

The Unofficial Guide to Apple’s Private Cloud Compute - Jonathan Mortensen, CONFSEC


A technical deep dive into Apple's Private Cloud Compute (PCC), exploring its novel architecture for running sensitive AI workloads with cryptographic privacy guarantees. The talk covers the core requirements, key components like remote attestation and transparency logs, and how these concepts can be applied by developers today.

How we hacked YC Spring 2025 batch’s AI agents — Rene Brandel, Casco


A security analysis of YC AI agents reveals that the most critical vulnerabilities lie not in the LLM itself but in the surrounding infrastructure. This breakdown of a red-teaming exercise, in which 7 of 16 agents were compromised, highlights three common and severe flaws: cross-user data access (IDOR), remote code execution via insecure sandboxes, and server-side request forgery (SSRF).

Safety and security for code executing agents — Fouad Matin, OpenAI (Codex, Agent Robustness)


Fouad Matin from OpenAI's Agent Robustness and Control team discusses the critical safety and security challenges of code-executing AI agents. He explores the shift from models that *can* execute code to defining what they *should* be allowed to do, presenting practical safeguards such as sandboxing, network controls, and human review, drawing on OpenAI's experience building Code Interpreter and the open-source Codex CLI.