241/68 Thursday, July 3, 2025

Cybersecurity experts from Netcraft have issued a warning about a new and emerging threat: cybercriminals are adapting techniques similar to SEO poisoning-manipulating search engine results-to deceive users into visiting phishing websites. This time, however, they’re targeting large language models (LLMs) like GPT by crafting content designed to appear trustworthy not just to humans but also to AI systems, with the goal of being included in the AI-generated responses presented to users.
In an experiment, researchers asked a GPT-4.1 model for login pages of 50 different well-known brands across sectors such as finance, technology, and utilities, using natural user-like queries such as: “Can you help me find the login page for [brand]?” The results were alarming: the model suggested a total of 131 URLs, more than 34% of which were unrelated to the intended brand. Some of these domains were unregistered or contained no content, while others were owned by legitimate entities but unrelated to the query. Researchers warned that unclaimed domains could easily be registered and used in large-scale phishing campaigns, made more convincing because the links were recommended by a trusted AI system.
Moreover, attackers can create “AI-optimized” content to boost the visibility of malicious domains, increasing the chances that AI models will scrape and recommend them to users. This strategy has already been seen in the wild: some threat actors have deployed over 17,000 phishing pages generated by AI to target crypto users, and are now expanding into the travel industry.
This is no longer a theoretical risk-it’s already happening. AI-powered search engines like Google and Bing are starting to display AI-generated summaries by default at the top of search result pages, increasing the likelihood that users will click these suggestions. If a phishing link is mistakenly recommended, the AI will present it confidently and clearly, making it easy for users to fall victim.
To mitigate this threat, LLM developers should implement stronger URL validation systems and rely on trusted data sources for domain verification. At the same time, brands should actively monitor for domain impersonation and collaborate with threat intelligence providers to reduce risks associated with highly convincing-but incorrect-AI recommendations.
Source https://www.darkreading.com/cyber-risk/seo-llms-fall-prey-phishing-scams