On December 25th, 2020, the United Kingdom’s Channel 4 televised a deepfake video of Queen Elizabeth II delivering her annual Christmas message. Although this might initially appear to be a humorous form of entertainment, Channel 4 stated that its intention in releasing the fake was to provide a “stark warning” of the power of fake news. Whether a video is a sophisticated deepfake or a crudely edited shallowfake, viewers skimming quickly are often unable to detect what is real and what is not. As we enter 2021, we are likely to see more misinformation in the mainstream and a surge in the use of manipulated media.
In July 2018, Senator Marco Rubio claimed that fake videos could “throw our country into tremendous crisis internally and weaken us deeply.” While Tim Hwang, director of the Ethics and Governance of AI Initiative, disagrees with Mr. Rubio’s alarmist tone, Hwang acknowledged that deepfake videos “raise a lot of questions.”
Professor Lilian Edwards, an expert in internet law, agrees that deepfakes’ biggest risk is not that they can create a fake version of reality, but rather that reality itself may now become deniable. Donald Trump has already relied on this tactic, suggesting that actual events never occurred and that the videos documenting them were manufactured by his political enemies. This risk of distorting reality not only affects government officials, but also actively encourages the general public to mistrust both mainstream media and government institutions. As recently as January 7th, 2021, there was internet speculation that President Trump’s address to the nation was in fact a deepfake video. Because a virtual background of the Oval Office may have been used, suggesting digital manipulation, viewers were able to claim that the official video was concocted and digitally generated, fuelling mistrust in traditional institutions.
Digital companies such as Microsoft, Facebook, and Amazon have sought to combat deepfake videos by launching the “Deepfake Detection Challenge,” aimed at attracting global researchers who specialize in countering media manipulation. Microsoft has also introduced online tools to spot deepfakes, with the hope of combating misinformation-spreading software. However, although these steps are commendable, rapid advances in deepfake technology quickly render such digital tools outdated.
While deepfakes are still unlikely to pose an immediate risk to government institutions or individuals, we must nonetheless recognize the threat they pose. We must continue to invest in countering digital misinformation, encouraging tech companies to develop the tools to detect deepfakes and prevent them from entering the mainstream.