As generative Artificial Intelligence (AI) tools become more sophisticated and accessible, the threat that deepfake technology poses to privacy, security, and truth continues to grow. Hyper-realistic falsifications created with advanced AI are steadily improving, but our toolkit for identifying these deepfakes is currently lacking. Blockchain, known for its decentralized and immutable nature, could offer a solution by helping to verify and even incentivize truth. This article delves into the deepfake dilemma, explores the mechanics of blockchain, and examines how this technology could help authenticate digital content.
Deepfakes and Blockchain: A Threat and A Possible Solution
Deepfakes have emerged as an unfortunate byproduct of advanced artificial intelligence technologies. By leveraging deep learning algorithms, seemingly realistic media can be created that often depicts public figures saying or doing things they never did. This technological capability undermines trust in digital communications, fuels misinformation, and facilitates cybercrimes such as identity theft and fraud. Recently, generative AI image technology was used to create fake images of Donald Trump being arrested, a falsehood that quickly went viral on Twitter. Another recent deepfaked image depicted an explosion near the Pentagon, which caused a temporary dip in the stock market.
As deepfake media tools become increasingly accessible, the urgency to find effective countermeasures intensifies. Here is where the unique properties of blockchain technology may come into play. The use of distributed ledger models in blockchain, combined with its inherent immutability, ensures that recorded data cannot be altered retroactively. Additionally, its transparency enables users to trace and verify the authenticity of digital assets, providing a potential means to authenticate digital media.
Imagine if every piece of digital content came with its own blockchain record: a tamper-proof history detailing its creation and any subsequent alterations. This could provide a powerful tool for distinguishing authentic digital content from deepfaked media.
Putting Blockchain to Work Against Deepfakes
To effectively employ blockchain technology against deepfake videos and other forms of media, it is essential to consider the entire lifecycle of a piece of media, from capture and editing to public release and beyond. When applied correctly, blockchain technology can authenticate media at each stage of its lifecycle, providing a robust tool for identifying potential deepfakes.
1. Attested Hardware
Attested hardware employs cryptography to sign audio/visual signals as they’re recorded, leaving an immutable digital mark. Media companies would log these devices onto a blockchain, providing a record of ownership. The public key belonging to the hardware can then be used to authenticate the signatures tied to media generated by the device. Through this combination of cryptographic signatures, keys, and blockchain technology, the authenticity and origin of a media piece can be established. As a result, artificially created ‘deepfake’ content derived from the original media can be detected and disproven.
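The sign-at-capture, verify-with-public-key flow can be sketched in a few lines. The snippet below is a minimal illustration, not a real attestation scheme: it uses textbook RSA with tiny hardcoded primes (completely insecure, chosen only so the math is visible), and a real device would use a hardened key pair whose public half is registered on-chain.

```python
import hashlib

# Toy "attested hardware" demo: the capture device signs each
# recording's hash with its private key; anyone holding the public
# key (logged on a blockchain at device registration) can verify.
# Textbook RSA with tiny primes -- insecure, illustration only.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def media_digest(media: bytes) -> int:
    # Hash the raw signal, reduced mod n so the toy RSA can sign it.
    return int.from_bytes(hashlib.sha256(media).digest(), "big") % n

def device_sign(media: bytes) -> int:
    # Runs inside the attested camera/microphone, holding the private key.
    return pow(media_digest(media), d, n)

def verify(media: bytes, signature: int) -> bool:
    # Anyone can check the signature against the public key (e, n)
    # retrieved from the device's blockchain registration.
    return pow(signature, e, n) == media_digest(media)

clip = b"raw audio/video bytes from the sensor"
sig = device_sign(clip)
print(verify(clip, sig))  # True: the untampered original checks out
```

Altered bytes would hash to a different digest, so the stored signature would no longer verify, which is exactly how derivative deepfake content gets flagged.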
2. Editing Media
In the media production lifecycle, editing is an essential phase. This is where raw audio and imagery are polished into a final product. But because the content carries digital signatures tied to the original recording, editing it while retaining its authenticity becomes tricky.
The advantage of attested hardware turns into a disadvantage during the editing process, as any alteration, even one made for legitimate reasons, would undermine the authenticity of the content. For example, improving the quality of a recording might require removing background noise or editing out irrelevant scenes. Likewise, it may be necessary to eliminate certain content that could breach the creator’s privacy. In these instances, regardless of the intent, the edits would trigger flags marking the content as modified.
To remedy the situation, we look to cryptographic protocols such as zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) as a potential solution. These are proofs that allow one party to prove to another that they know a value x, without conveying any information apart from the fact that they know the value x.
In the context of media editing, zk-SNARKs can be utilized to authenticate edits made to the original content, thus extending the chain of trust. The edited content, accompanied by a zk-SNARK, would retain the proof of authenticity. For instance, if an editor removed background noise or irrelevant content, a zk-SNARK could validate that these were the only changes made, ensuring the content’s core authenticity while permitting necessary enhancements.
3. Public Release
Up to this point, the capture and editing of media can be validated by clever implementations of cryptography and blockchain technology. Once media is ready for public release, blockchain can once again help establish its authenticity. The finalized content, complete with its chain of cryptographic proofs, can be hashed and written onto a blockchain. This creates an immutable record of the media as it was at the time of release.
Furthermore, the distributed nature of blockchain makes this record accessible to anyone who wishes to verify the authenticity of the media piece. Any future alterations to the media would result in a different hash value, making alterations detectable when compared with the original hash stored on the blockchain.
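The release-time check reduces to a hash comparison. The sketch below stands in an ordinary dictionary for the on-chain record (the identifiers are made up for illustration); the point is only that re-hashing the file and comparing against the anchored digest detects any alteration.

```python
import hashlib

# Minimal sketch of release-time anchoring: the final media's hash is
# written to a ledger (a dict stands in for the on-chain record), and
# anyone can later re-hash the file and compare.
ledger = {}  # content_id -> sha256 hex digest, "written on-chain"

def publish(content_id: str, media: bytes) -> None:
    ledger[content_id] = hashlib.sha256(media).hexdigest()

def is_authentic(content_id: str, media: bytes) -> bool:
    # Any alteration changes the hash, so a mismatch flags tampering.
    return ledger.get(content_id) == hashlib.sha256(media).hexdigest()

release = b"final broadcast video bytes"
publish("newsroom/clip-1", release)
print(is_authentic("newsroom/clip-1", release))         # True
print(is_authentic("newsroom/clip-1", release + b"!"))  # False
```

Because anyone can read the ledger, verification needs no trusted intermediary: the viewer, the outlet, and a fact-checker all run the same comparison.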
Blockchain Isn’t a One-Size-Fits-All Solution
While blockchain presents a promising proactive tool against deepfakes, it’s important to acknowledge its limitations. The proactive approach detailed so far, relying on attested hardware, cryptographic protocols, and public blockchain records, can be used by media outlets, creators, and distributors aiming to verify their works’ authenticity. Yet it doesn’t address deepfakes created entirely from scratch, or from pre-existing media that was never authenticated in the first place.
Even as blockchain helps provide authenticated media, the increasing sophistication of deepfakes necessitates additional reactive solutions. Just as bacteria adapt to resist antibiotics, deepfake technologies are bound to evolve, potentially bypassing even the most rigorous authenticity checks. Herein lie further challenges.
Advancements in artificial intelligence might aid in managing this evolving threat. AI, initially exploited to create deepfakes, can ironically be trained to spot them. For example, Eleven Labs is responding to the misuse of their speech synthesis technology by releasing an AI speech classifier capable of detecting speech synthesized by their tool. By learning the intricate nuances that differentiate genuine speech from synthesized speech, this AI can effectively expose audio deepfakes created with their tool.
The landscape of deepfake generation and detection is vast and ever-changing, requiring an ecosystem of solutions. Collaboration between AI researchers, policymakers, tech companies, and the public is necessary to combat the complex issue. Policies and regulations can deter malicious use of deepfakes while tech literacy can empower the public to discern trustworthy sources from potentially deceptive ones.
What Does the Future Hold?
Artificial intelligence is revolutionizing all aspects of our digital lives, enhancing productivity and efficiency. Yet it can also be weaponized for nefarious purposes, as evidenced by the rise of deepfakes. While blockchain offers a powerful proactive method to authenticate digital media, its implementation is just one piece of the puzzle. A comprehensive strategy, combining proactive blockchain solutions, reactive AI-based detection tools, legislative action, and public education, is essential to counter the rising threat of deepfakes and preserve trust in our increasingly digital world. The battle against deepfakes isn’t about finding a one-size-fits-all solution; it’s about creating a resilient, multi-layered defense system ready to adapt and respond to the ever-evolving landscape of digital deception.