News

A critical flaw in the Cursor AI editor let attackers execute remote code via Slack and GitHub; it was fixed in the v1.3 update.
EchoLeak shows that enterprise-grade AI isn’t immune to silent compromise, and securing it isn’t just about patching layers. “AI agents demand a new protection paradigm,” Garg said.
The bug was assigned the identifier CVE-2025-32711 and given a severity score of 9.3 out of 10 (critical). It was fixed server-side in May, so users don’t need to take any action. Microsoft also ...
The vulnerability, dubbed EchoLeak and assigned the identifier CVE-2025-32711, could have allowed hackers to mount an attack without any action from the target user. EchoLeak represents the ...
EchoLeak should be viewed as a wake-up call for a society that is embracing AI integration wholeheartedly. In the rush to deploy agentic AI, security has not kept pace.
But EchoLeak, as detailed by Fortune, shows that trusting an AI with context is not the same as controlling it. The line between helpful and harmful isn’t always drawn in code; it’s drawn in ...
The security flaw, known as “EchoLeak,” is the first known AI security vulnerability that can be exploited without the user clicking a link.
But, as the report by Fortune suggests, the vulnerability had a name, EchoLeak, and behind it a sobering truth: hackers had figured out how to manipulate an AI assistant into leaking private data ...