Voice Cloning: A Growing Social Engineering Threat

Currently, when people think of social engineering attacks, they immediately think of email phishing. That is understandable: for years now, email phishing has been attackers' preferred method for gaining access to user computers and, through them, to private internal networks. But we should all remember that email phishing is only one of many social engineering techniques. Others include snail-mail spoofing, removable media spoofing, SMS spoofing, blackmail, intimidation, in-person impersonation and phone impersonation, which brings us to the subject of this blog: voice cloning.

Years ago, I wrote a blog about the dangers posed by digital recordings of images and sound: given enough computing power and expertise, perfect fakes could be generated at will. How, then, could we fully trust security cameras and voice recordings to reflect reality? The answer was, and is, that we can't.

Now, thanks to AI technology, convincing fake voices can be generated in real time. Given one little sample of a person's speech, sometimes only a few seconds of audio, the computer can immediately impersonate the voice like a parrot. The implications of this technology are staggering for information security management, especially when you consider its next stage: perfectly replicating both a person's voice and their moving image in real time.

Since networks began, we haven't been able to trust that users who sign into a network or service are really who they purport to be; now we can't even trust a phone call from somebody whose voice we know very well. This capability has not escaped the notice of cybercriminals. They are already using voice cloning, with great success, to convince people to reveal private information or to grant them access to private systems.

So how are we supposed to respond to this new threat? First, make personnel aware of it: include voice cloning in your regular information security awareness training, put up a warning on your security Slack channel and on posters, and add voice impersonation to your phishing training modules. Next, develop procedures for addressing the danger, for example requiring a callback to a known number or a shared challenge phrase before acting on any voice request, and write them into policy. You can also use AI to battle AI: employ AI-based software that monitors audio for digital noise, signs of repetition, or artifacts that are not present in a live voice (a rough sketch of that idea follows below). The worst thing you can do is ignore this threat and do nothing, so why not be proactive and get ahead of it now?
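
To make the detection idea concrete, here is a minimal Python sketch of the kind of screening such software might perform. It is an illustration under stated assumptions, not a production detector: real anti-spoofing systems use trained models, while this sketch merely flags frames whose spectra look suspiciously noise-like using a spectral-flatness heuristic. The file name, the thresholds, and the heuristic itself are assumptions made for the example.

```python
# Crude heuristic screen for noise-like artifacts in recorded speech.
# A minimal sketch only; real detectors use trained anti-spoofing models.
# Assumes a mono or stereo PCM WAV file; thresholds are illustrative.

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectral_flatness(power_spec, eps=1e-12):
    """Geometric mean / arithmetic mean of each frame's power spectrum.
    Values near 1.0 mean the frame is noise-like (suspiciously flat)."""
    geo_mean = np.exp(np.log(power_spec + eps).mean(axis=0))
    arith_mean = power_spec.mean(axis=0) + eps
    return geo_mean / arith_mean

def screen_audio(path, flat_threshold=0.5):
    """Return the fraction of frames whose spectra look noise-like."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                       # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    # 25 ms frames with a 10 ms hop, a common speech-analysis setup
    _, _, power = spectrogram(samples, fs=rate,
                              nperseg=int(0.025 * rate),
                              noverlap=int(0.015 * rate))
    flatness = spectral_flatness(power)
    return float((flatness > flat_threshold).mean())

ratio = screen_audio("incoming_call.wav")      # hypothetical recording
if ratio > 0.2:                                # illustrative cutoff
    print(f"WARNING: {ratio:.0%} of frames look synthetic; verify the caller out of band.")
else:
    print(f"Only {ratio:.0%} of frames flagged; no artifacts detected by this crude check.")
```

A screen like this belongs alongside, not in place of, the procedural controls above: a suspicious score should trigger an out-of-band callback, never an automatic pass or fail.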