The Limited Future Of Deepfakes
The last time I reviewed the state of the art in AI VFX, deepfake technology was barely a year old. The phenomenon of AI-powered face-swapping seemed set to corrupt, deceive and threaten society in terrible ways, across multiple sectors. It was predicted, even at state level, that deepfake output would eventually become indistinguishable from reality.
At a few years’ sober distance, it could now be argued that deepfake videos – at least as far as the term refers to the porn-centric DeepFaceLab [1] and the slightly more sober FaceSwap project [2] – are set for as truncated an entry in the history of image synthesis as the fax machine holds in the history of communications: they’re showing the way, but they’re not fit to finish the journey. Hyper-realistic face swapping may well be coming – but it’s probably coming from elsewhere.
That’s not to say that the consumer-facing architectures of DFL and FaceSwap couldn’t be deconstructed, adapted and re-tooled for professional VFX purposes, or their core technologies re-imagined in more capable workflows. Rather, these distributions (both forks of the original, now-abandoned 2017 GitHub code) are off-the-shelf solutions aimed at hobbyists with access to gaming-level, rather than production-level, GPU and pipeline resources; and for these and other reasons (as we’ll see), the core code cannot meet the general public’s expectation that deepfake quality will improve exponentially over time.
Later, we’ll hear from a leading VFX industry researcher about some of the technical bottlenecks and architectural constraints that prevent current deepfake approaches from achieving greater realism. We’ll also look at some initiatives that seek to extend current methods or re-imagine face simulation altogether.
First, to understand the limitations of deepfake technology, we need a basic understanding of the process itself – so let’s take a look at that.