AI security

Mainframe modernization explained: COBOL and AI

Experts from IBM discuss the nuanced role of AI in mainframe modernization, the immense infrastructural and product challenges behind global AI adoption, and the critical need for a multi-layered, security-by-design framework for the safe deployment of AI agents.

The #1 AI Agent on GitHub Was Never Read by Its Creator

Jason Martin of HiddenLayer discusses the significant security vulnerabilities of OpenClaw, a viral open-source AI personal assistant. The analysis covers critical flaws like prompt injection, insecure defaults, and the potential for creating sophisticated botnets, offering key lessons for securing the next generation of autonomous AI agents.

Exploits of public-facing apps are surging. Why?

A deep dive into the 2026 IBM X-Force Threat Intelligence Index, exploring the shift to exploiting public-facing applications, the rise of AI agent-related threats, critical AI infrastructure flaws, and the need for a more human-centric approach to threat intelligence.

India's $200B AI hub & Claude builds C compiler

Experts from IBM discuss Google's $200B AI investment in India, Claude's autonomous C compiler creation, the significant security risks in AI agent skills, and the looming AI ROI problem facing IT leaders, debating the shift from per-token to value-based pricing.

Guide to Architect Secure AI Agents: Best Practices for Safety

AI agents offer immense power but carry significant security risks. This guide outlines a comprehensive architecture for securing AI agents using DevSecOps practices, robust access controls, threat monitoring, and the principle of least privilege to mitigate dangers such as prompt injection and data leaks.

AI Privilege Escalation: Agentic Identity & Prompt Injection Risks

Grant Miller explains how malicious actors exploit AI systems through privilege escalation, using techniques like prompt injection to compromise over-permissioned AI agents. The discussion covers key mitigation strategies for securing agentic systems: the principle of least privilege, robust access governance, dynamic context-based access, and continuous monitoring.