Researchers Warn AI Can Clone Voices Within Minutes, Increasing Risk of Vishing Attacks


380/68 Thursday, October 2, 2025

Researchers from NCC Group have unveiled a new framework that uses AI to clone a person's voice in real time from only a few minutes of original audio. The advance significantly increases the credibility and realism of vishing (voice phishing) attacks, putting organizations, employees, and individuals at greater risk of losing personal, financial, and sensitive corporate information.

The study notes that traditional deepfake voice techniques, such as text-to-speech systems or pre-recorded voice samples, suffer from limitations in realism and response latency, making them less effective in live conversations. The framework demonstrated by NCC Group, by contrast, converts a speaker's voice into that of a target individual almost instantaneously, producing natural, convincing dialogue. Researchers trained the model on only a few hours of publicly available audio before testing it in live organizational settings, where it successfully tricked employees into disclosing confidential information.

NCC Group warns that AI-powered vishing will become harder to detect and more convincing, and could be used in large-scale fraud campaigns that impersonate high-profile individuals or target specific organizations. To mitigate the risk, organizations are advised to enforce multi-factor authentication (MFA), train staff to scrutinize unusual requests even when the voice sounds familiar, and use secondary verification methods such as security codes or alternative communication channels (a sketch of one such check follows below). In addition, limiting the public availability of executives' and employees' voice recordings can reduce the likelihood of those recordings being exploited for synthetic-voice attacks.
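The "security codes" recommendation could take many forms; the minimal Python sketch below illustrates one possible challenge-response check, where a random challenge is sent over a second channel (such as corporate chat) and the caller reads back a short derived code on the phone line. The shared secret, code lengths, and 60-second validity window are assumptions made for this example, not details from NCC Group's guidance.

```python
import hmac
import hashlib
import secrets
import time

# Hypothetical per-employee secret, provisioned out of band (e.g., at onboarding).
SHARED_SECRET = b"replace-with-a-per-employee-secret"


def issue_challenge() -> str:
    """Generate a random challenge to send over a second channel (e.g., chat)."""
    return secrets.token_hex(4)  # 8 hex characters, easy to read aloud


def response_for(challenge: str, window: int | None = None) -> str:
    """Derive the expected spoken response from the challenge and a time window."""
    if window is None:
        window = int(time.time()) // 60  # 60-second windows (illustrative choice)
    msg = f"{challenge}:{window}".encode()
    digest = hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()
    return digest[:6]  # short code the caller reads back on the call


def verify(challenge: str, spoken_code: str) -> bool:
    """Accept the current or previous window to tolerate a brief delay."""
    now = int(time.time()) // 60
    return any(
        hmac.compare_digest(response_for(challenge, w), spoken_code)
        for w in (now, now - 1)
    )


if __name__ == "__main__":
    chal = issue_challenge()   # sent via chat/email, never over the phone line
    code = response_for(chal)  # computed on the genuine caller's device
    print("challenge:", chal, "code:", code, "verified:", verify(chal, code))
```

In practice the expected response would be produced by an authenticator app or internal tool rather than by sharing the secret itself, and any mismatch should end the call and trigger re-verification through a known, trusted channel.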

Source: https://www.darkreading.com/cyberattacks-data-breaches/ai-voice-cloning-vishing-risks