Saturday, May 28, 2022

Tony Zimmerman

tz996419@ohio.edu

Human beings are born with an innate ability to recognize faces. Newborns can recognize faces as early as six days old, and by four months, infants recognize faces almost as well as adults. Given this ability, it seems likely that we could distinguish a natural face from a fake one. After all, deepfakes are centered on faces and on faking what people say. The research, however, shows that we are not as safe from falling for manufactured deepfakes as we would hope: one study from MIT found that people could detect deepfakes about 70% of the time, while an algorithm caught about 80% of fakes.

One silver lining in this depressing reality is that we are much better at recognizing fake videos of famous people or of people we are familiar with. So when someone like President Volodymyr Zelenskyy of Ukraine appears in a video telling his armed forces to surrender to Russia, we can tell fairly quickly that it is a deepfake. People who are not famous are more vulnerable: a deepfake video can destroy their reputation, and even if the video is later debunked, the damage may be impossible to repair.

Even when debunked, these videos succeed at making people doubt the news they read. They may also foster a general apathy in the public, a sense that nothing can be trusted. This erosion of trust benefits those who want to mislead the public or to be the only trusted source of information for their followers and fans. The urgency of dealing with deepfakes is only growing: over 15,000 deepfake videos were reported in 2019, and that number is expected to keep climbing.

https://www.bbc.com/news/technology-60780142


It is such a concern for Meta (formerly Facebook) that in 2020 the company launched a competition, with a one-million-dollar prize, to develop an AI program that could detect deepfakes automatically. It would seem logical that such a program could be built; after all, many deepfake videos are made using AI programs themselves. In reality, though, the very best AI program in Meta's competition achieved a success rate of only 65%. Combining our own experience detecting faces with an AI algorithm improves deepfake detection beyond what either achieves individually. This combination is still not perfect, and sometimes a false AI reading can make us change a correct interpretation to an incorrect one. As a result, we need to check multiple sources and remain vigilant in hunting for real news.
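One simple way to picture the human-plus-AI combination described above is as a weighted average of two confidence scores. This is only an illustrative sketch, not how Meta's competition entries or any published detector actually work; the weights and threshold below are assumptions made up for the example.

```python
def combined_fake_score(human_score, ai_score, human_weight=0.5):
    """Blend a human's and an AI model's confidence that a video is fake.

    Both scores are probabilities in [0, 1]. The 50/50 weighting is an
    illustrative assumption, not a value from any study.
    """
    return human_weight * human_score + (1 - human_weight) * ai_score

def is_flagged_fake(human_score, ai_score, threshold=0.5):
    # Flag the video when the blended confidence crosses the threshold.
    return combined_fake_score(human_score, ai_score) >= threshold

# A fairly confident human (0.8) plus an uncertain model (0.4)
# blends to ~0.6, so the video is flagged as fake.
print(is_flagged_fake(0.8, 0.4))

# But a badly wrong AI reading (0.1) can drag a correct human
# judgment (0.8) below the threshold -- the flip described above.
print(is_flagged_fake(0.8, 0.1))
```

The second call shows the article's caveat in miniature: a false AI reading can pull the combined judgment from correct to incorrect, which is why the blend helps on average but is still not perfect.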

