Can you believe what you see?
Advances in faster computing and machine learning have brought us into a time when video evidence is no longer a medium safe from invisible manipulation. Just as Photoshop changed how we think about photographs, deepfake technology is now available to the entire world at a consumer level.
So, can you believe what you see at all anymore?
The answer is a little complicated.
All of these images are captured from a single deepfaked video of the same individual. If you had never seen any of these men before, could you tell which one was real?
Could you identify the fake if you were only shown one of the images?
Propaganda
Image: Deepfake of Zelenskyy Tells Ukrainian Troops to ‘Surrender’
The image on the left is captured from a deepfaked video of Ukrainian President Volodymyr Zelenskyy calling on Ukrainian troops to surrender. Following the release of this video, there was panic among some Ukrainian citizens and military members. The president was forced to issue an immediate statement addressing the deepfake, reassuring the public that surrender was not an option and that any future claims to the contrary would be fake as well.
This was one of the first examples of deepfake technology being used effectively in an attempt to influence the outcome of a major event. Had the fake been more believable, or had it not been addressed promptly, it could have caused significant damage.
Beyond Ukraine
The use of deepfake technology as an instrument of propaganda has implications that extend far beyond the borders of Ukraine. The Ukrainian population has widespread access to the internet, smartphones, and computers; it was only because of that access that this incident could be addressed so quickly and effectively.
What would happen if a deepfake were distributed in a community or country with limited digital access? What if it showed their leader calling them to arms? What if it claimed the government had been overthrown or dissolved? What if it impersonated an extremist group's leader to push its followers into committing acts of violence?
The possibilities are as endless as they are concerning.
The Future
Is legislation the answer to the difficult questions that have arisen?
Unfortunately, the time for creating legal policies around deepfake technology has passed. Machine learning is now in the hands of anyone with a smartphone and internet access. While legislation may slow progress or reduce the number of deepfakes being released, entities and individuals with bad intentions will have no regard for the United States penal code.
Regardless of your opinion on the implications of deepfake technology, there is no finish line for machine learning. Recent examples are simply the first fakes good enough to be entirely believable. They will only continue to improve and become even harder to distinguish from reality; so, where might the end be? Will recorded video eventually become indistinguishable from actual events? If so, will people be able to simply dismiss anything they have said on camera as a deepfake? What implications would that have for society and public opinion?
...how long do we have to answer these questions?