09/69 Wednesday, January 7, 2026

Security researchers from Zenity Labs have warned about potential security risks associated with Anthropic’s “Claude in Chrome” extension, which enables the AI to directly browse websites, fill out forms, and interact with web applications on behalf of users. Because the extension remains logged in at all times, Claude effectively gains ongoing access to user accounts and systems-such as Google Drive, Slack, or internal enterprise tools-under the user’s identity.
A key concern is that Claude has permissions to access and modify data and can be influenced by content encountered on the web. This creates opportunities for Indirect Prompt Injection attacks, where malicious instructions are embedded into web pages, potentially causing the AI to delete files, send messages, or extract tokens from web requests and console logs. Researchers demonstrated that Claude could be tricked into executing JavaScript code, effectively turning it into a form of “XSS-as-a-service,” allowing attackers to perform actions using the victim’s privileges.
Although Anthropic provides an “Ask before acting” option that requires user confirmation before the AI performs actions, researchers noted that this is largely a policy-based safeguard. There remains a risk that the AI could act beyond the scope of what users intended to approve, especially as users may become accustomed to repeatedly granting permissions without closely reviewing the details. Zenity Labs emphasized that organizations must design security controls that account for both the permissions granted to AI and the ability of AI to act on behalf of users within the browser environment.
Source https://hackread.com/data-exposure-risk-claude-chrome-extension/
