Carole Lyn Zeleny
cz812071@ohio.edu
Image by Atelier
The Future of Truth and Misinformation
Since 2016, the digital battlefield has become more complex and has spread worldwide. False information about major events, from the Covid-19 outbreak to the 2020 US election, threatens public health and safety. False narratives often overwhelm factual ones, spurring beliefs and actions that have become increasingly violent. Pundits are divided on whether the next decade will see a drop in false and misleading narratives online. Those who predict improvement put their hopes in technological and social solutions. Others believe that the dark side of human nature is aided more than stifled by technology.
Image by: The Guardian
The Technologies that are Freeing Us are also Caging Us
New technologies have always presented some level of threat, whether real or imagined. In recent years, fake content has fueled the virality of biased inaccuracy to the extent that it has contributed directly to everything from measles outbreaks to market manipulation in cryptocurrencies, the rise of the alt-right, and the mainstreaming of conspiracy theories. Most significant of all were its effects on the democratic outcomes of the 2016 US Presidential Election and the Brexit Referendum in the United Kingdom. Our society and the opinions we hold are increasingly affected, and even shaped, by anonymous malicious actors who seek results or actions that may not be in our best interests. This is a distinctly contemporary threat.
Image by: The Guardian
Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or GAN. Ian Goodfellow, then a graduate student, introduced GANs in 2014 as a way to algorithmically generate new data from existing data sets. A GAN scans thousands of photos in order to produce a new photo that resembles the originals without duplicating any of them, a photo that was never actually taken. GANs can also be used to generate new audio from existing audio, or new text from existing text. This machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user began posting digitally altered pornographic videos: using Google’s free, open-source machine learning software, he superimposed celebrities’ faces onto the bodies of women in pornographic movies. You can read more in this article posted by the Guardian.
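To make the adversarial idea concrete, here is a minimal sketch, not the systems described above, of a toy GAN on one-dimensional data: the "generator" is a simple linear map of random noise, the "discriminator" is logistic regression, and the two are trained against each other with hand-derived gradient steps. All the variable names and hyperparameters here are illustrative choices, not anything from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator never sees directly.
REAL_MU, REAL_SIGMA = 4.0, 1.25

def sample_real(n):
    return rng.normal(REAL_MU, REAL_SIGMA, n)

# Generator G(z) = w_g * z + b_g, with noise z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator D(x) = sigmoid(w_d * x + b_d), a probability that x is real.
w_d, b_d = 0.1, 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr, batch, steps = 0.02, 64, 4000
for _ in range(steps):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    xr = sample_real(batch)
    z = rng.normal(size=batch)
    xf = w_g * z + b_g
    dr, df = sigmoid(w_d * xr + b_d), sigmoid(w_d * xf + b_d)
    grad_w_d = np.mean((dr - 1.0) * xr) + np.mean(df * xf)
    grad_b_d = np.mean(dr - 1.0) + np.mean(df)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- generator step (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(size=batch)
    xf = w_g * z + b_g
    df = sigmoid(w_d * xf + b_d)
    # d(-log D(xf)) / d xf = (D(xf) - 1) * w_d, then chain rule through G.
    g = (df - 1.0) * w_d
    w_g -= lr * np.mean(g * z)
    b_g -= lr * np.mean(g)

# After training, the generator's fakes should cluster near the real mean.
fake = w_g * rng.normal(size=1000) + b_g
print(f"real mean ~ {REAL_MU}, generated mean ~ {fake.mean():.2f}")
```

The same adversarial loop, scaled up from two scalar parameters to deep networks and from numbers to pixels, is what produces photorealistic faces and the face-swapped videos described above: the generator keeps improving precisely because the discriminator keeps catching its mistakes.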
No Market for the Truth
It comes down to motivation: there is currently no market for the truth. The public isn’t motivated to seek out verified, vetted information; people are happy hearing whatever confirms their views. There is more to gain, both monetarily and in notoriety, from creating fake information than from preventing it. Avid users of social media systems like Facebook and Instagram are progressively creating ‘echo chambers’ of the like-minded: they unfriend those with different ideas and opinions, and dispense rumors and fake news that agree with their point of view. You can read more in the Pew Research Paper on "The Future of Truth and Misinformation."
The Malicious Use of Deepfakes
The malicious use of deepfakes can also cause serious harm to individuals, as well as to our social and democratic systems. Deepfakes may be misused to commit fraud, extortion, bullying and intimidation, as well as to falsify evidence, manipulate public debates and destabilize political processes. In the U.S., video recorded by bystanders and police body cameras has fueled a reckoning with police violence; the escalation of deepfake technology now threatens that record. Less than two years ago, when the public watched a video recording of an event such as an incident of police brutality, we generally trusted that the event happened as shown. With machine learning now able to create convincing deepfake videos, we can no longer be sure that what we see is what factually occurred. As deepfakes push society away from “seeing is believing,” that shift will disproportionately harm individuals whose stories society is already unlikely to believe. The burden of proof to verify a video’s authenticity may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools are being developed to increase confidence in videos, but these technologies, although well-intentioned, could eventually be used to discredit already marginalized voices.
Hi Carole, thank you for a thought-provoking blog. What really caught my eye was this statement: "New technologies have always presented some level of threat, whether real or imagined. In recent years, fake content has fueled the virality of biased inaccuracy to the extent that it has contributed directly to everything from measles outbreaks to market manipulation in cryptocurrencies, the rise of the alt-right, and the mainstreaming of conspiracy theories. Most significant of all were its effects on the democratic outcomes of the 2016 US Presidential Election and the Brexit Referendum in the United Kingdom. Our society and the opinions we hold are increasingly affected, and even shaped, by anonymous malicious actors who seek results or actions that may not be in our best interests. This is a distinctly contemporary threat."
I was forced to think of technological advances that have helped shape our world, such as the machinery that made mass production possible. With this came mass consumption, and the cycle began. Women were introduced into the workplace and families had more spending money. The cycle continues today: mass production, mass consumption. I also thought of the technology behind Oppenheimer and the atomic bomb. My mind also wanders to the advancements in medical technology: X-rays, ultrasounds, and laparoscopic surgeries. Personally, I think about our own ability to take this class. As a society, we are completely dependent on technology, the good, the bad, and the evil. But we also have the technology to counterbalance those whose intentions are not pure.