“ShadowLeak” Vulnerability Exposes Gmail Data via ChatGPT Deep Research

ยอดเข้าชม: 199 views

356/68 Monday, September 22, 2025

Researchers from Radware have disclosed a zero-click vulnerability in ChatGPT’s Deep Research function, named ShadowLeak, which allowed attackers to extract data from Gmail inboxes simply by sending an email containing hidden malicious instructions – without requiring any clicks or interaction from the victim. The attack leveraged text that was almost invisible, such as very small fonts or white text on a white background, so that the instructions would still be parsed by the agent while remaining unnoticed by the user. The issue was reported to OpenAI on June 18, 2025, and was patched in early August.

The attack exploited an Indirect Prompt Injection mechanism triggered when users instructed Deep Research to analyze emails in their inbox. If an email contained hidden instructions, the agent would execute them and exfiltrate data to an external server via the browser.open() function. Sensitive data such as Personally Identifiable Information (PII) was Base64-encoded before being sent. This technique was more severe than previous prompt injection cases because the data leakage originated from OpenAI’s cloud infrastructure, bypassing detection by end users or organizations.

Although the vulnerability has been patched, researchers warn that ShadowLeak-style attacks could extend to other third-party integrations supported by ChatGPT, such as Google Drive, Dropbox, GitHub, Outlook, and SharePoint, significantly increasing the data exposure risk. Meanwhile, the SPLX platform demonstrated how prompt engineering and context poisoning could trick ChatGPT agents into solving CAPTCHAs if the conversational context was manipulated. The incident underscores the critical importance of maintaining context integrity and performing continuous red teaming to address these emerging AI-driven threats.

Source https://thehackernews.com/2025/09/shadowleak-zero-click-flaw-leaks-gmail.html