Deepfakes and AI: Ready for Cybercrime Prime Time?

From intel471.com

The buzz around OpenAI’s chatbot, ChatGPT, along with advances in artificial intelligence (AI) and machine learning (ML), has prompted interest in how these technologies could be used maliciously. Intel 471 has seen increasing interest in “deepfake” production services advertised on underground forums. Deepfakes are images, audio and video clips that have been synthetically produced. They can cause serious harm, from misinformation to fraud to harassment, and threat actors see potential in them. To understand how attackers could weaponize deepfakes to exploit new attack vectors, security teams must first understand the underlying technology and its limitations. Intel 471 analyzed deepfake services to see what’s on offer, what threats the services may pose and what lies ahead.

What is a Deepfake?

The term “deepfake” is an amalgamation of “deep learning” and “fake.” Deepfakes are defined as realistic synthetic imagery or audio created using ML. The technology is used to augment or substitute the likeness of a human with realistic computer-generated content. Deepfakes are more convincing than traditional photo or video editing because they leverage sophisticated ML techniques such as generative adversarial networks (GANs). GANs work by pitting one AI application against another: the first is the generative network, which creates a deepfake image based on a set of parameters, and the second is the discriminative network, which compares that image to a real-life image and tries to identify which is the fake. The generative network then tries to improve the fake until the discriminative network accepts it as real. The quality of video deepfakes varies widely, but some of the best, such as the well-known Tom Cruise videos, point to a future in which it becomes exceedingly difficult for the human eye to tell truth from AI-generated fiction.
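To make the generator-versus-discriminator dynamic concrete, here is a minimal GAN training loop sketched in PyTorch. This is not from the original article: the architectures, dimensions and hyperparameters are illustrative assumptions, and random noise stands in for real training data. It shows only the adversarial structure, not anything close to a production deepfake pipeline.

```python
# Minimal GAN sketch (PyTorch). All sizes, layers and hyperparameters are
# illustrative assumptions; random noise stands in for real training data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # assumed sizes for illustration

# Generative network: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# Discriminative network: scores how "real" a sample looks (0 = fake, 1 = real).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for a real-data batch
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: improve fakes until the discriminator scores them as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each iteration alternates the two objectives: the discriminator sharpens its ability to separate real from fake, and the generator is rewarded only when its output fools the freshly updated discriminator, which is the feedback loop described above.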
