Saturday, May 28, 2022

A Photo Finish

Gabriel T

gt824519


Photography and film are rare, immense innovations in the presentation of images. They can portray moments in history, or scenes from a day in the life, in great detail. Yet they are not always the full picture. While film and cameras can capture a moment in time, the context surrounding these photos can also be key when presenting events. What happens around the picture, or around the film, can be just as integral in telling the whole story.

An example that's not quite 'news' related, though it is in the same ballpark, is early photography from the 1800s and 1900s. Subjects are often characterized as stoic and unsmiling, giving off the appearance of a somber mood. It's interesting when you read more into it, though, as it illuminates a whole other truth that could be possible.

It's said that there are many reasons why people in early photographs weren't smiling, and it's not that the time period was particularly dreary. One potential reason is partly the technology of the time: to take a photo that was sharp and not blurry, one had to sit very still to let enough light into the camera for a clear exposure. There were even posing stands created to help people sitting for photos hold their positions and remain relatively still.

Library of Congress/ LC-USZ62-19393


Around the same period, though, there are actually a lot of photos in which people were smiling! There is even a whole Flickr gallery dedicated to preserving these smiling photos. In this particular series, for instance, taken just moments apart, a couple is seen sitting still, and then, further on, laughing. If the former kind of photo is used more prevalently than the latter, it can give off a very different impression. It's a misrepresentation, potentially a distortion of the truth.

www.vintag.es: Couple smiling in photo series


This shows how the background and context of an image can portray a very different picture altogether! :) When photos appear in news stories or galleries of photojournalism, it can be very important to describe, show, or otherwise mention the context in which the photo was taken, along with any surrounding information that may give more clarity to the time and place.

As The Washington Post's Video Checker guide states, altering a video can potentially change the narrative. That extends not only to the video (or photo) itself, but also to the image chosen for a story. For instance, I've sometimes seen a news story about, say, a political figure run with a certain photo, and then later seen the same story on the same site with a different photo. The outlet could be testing the story with different cover photos to see how readers respond to them, even changing the tone of the story. Choosing a picture of a candidate or figure looking angry might cast them in one light, while a photo in which they are smiling may make it seem as though the publication agrees with what they are doing.

The miscontextualization of photos or videos can make something seem different from what it is, even though the images themselves may be clear as day. In these cases, it can be important to look at the context of the photo, find several sources about the particular time or event, and consider how one photo fits into the larger picture.

Tony Zimmerman

tz996419@ohio.edu

Human beings are born with an innate ability to recognize faces. Newborns recognize faces as early as six days old, and at the ripe old age of four months, infants can recognize faces almost as well as adults! With this ability, it seems likely that we would be able to distinguish a real face from a fake one. After all, deepfakes are centered on faces and faking what people say. But the research shows that we are not as safe from falling for manufactured deepfakes as we would hope: one study from MIT found that people could detect deepfakes at around 70%, while an algorithm caught about 80% of fakes.

One silver lining in the depressing reality of our susceptibility to deepfakes is that we are much better at recognizing fake videos of famous people or people we are familiar with. So, when someone like President Volodymyr Zelenskyy of Ukraine appears in a video telling his armed forces to surrender to Russia, we can tell pretty quickly that it is a deepfake. People who are not famous are more susceptible to someone making a deepfake video that destroys their reputation; even if the video is debunked, the damage may never be fully repaired.

Where this type of video is successful, even when debunked, is in making people doubt the news they read. It may also lead to a feeling of general apathy among the public from not being able to trust anything. This erosion of trust can benefit people trying to mislead the public or who want to be the only source of trusted information for their followers and fans. The urgency of dealing with these deepfakes is only growing, with over 15,000 deepfake videos reported in 2019, and the number is expected to keep growing.

https://www.bbc.com/news/technology-60780142


It is such a concern for Meta (formerly Facebook) that in 2020 it launched a competition, with a prize of one million dollars, to develop an AI program that could detect deepfakes automatically. It would seem logical that an AI program could be built to detect deepfakes; after all, many deepfake videos are made using AI programs themselves. In reality, that is not the case: the very best AI program in Meta's competition achieved a success rate of only 65%. Combining our own experience detecting faces with an AI algorithm increases the detection of deepfakes beyond what either achieves individually. This combination is still not perfect, and sometimes a false AI reading can make us change a correct interpretation to an incorrect one. As a result, we need to check multiple sources and remain vigilant in hunting for real news.
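As a toy illustration of that human-plus-machine idea (my own sketch, not anything described in Meta's competition or the MIT study), the snippet below simply blends a detector's estimated probability that a clip is fake with a human reviewer's judgment. The weighting and the example numbers are placeholder assumptions.

```python
# A hedged sketch of combining a model's score with a human judgment.
# The weighting scheme and the example numbers are illustrative assumptions,
# not the method used in the study or competition described above.

def combined_fake_score(model_prob: float, human_prob: float,
                        model_weight: float = 0.5) -> float:
    """Blend a model's estimated probability that a video is fake
    with a human reviewer's estimate (both between 0 and 1)."""
    return model_weight * model_prob + (1 - model_weight) * human_prob

# Example: the model leans strongly toward "fake"; the reviewer is less sure.
score = combined_fake_score(model_prob=0.9, human_prob=0.6)
print(f"Combined fake score: {score:.2f}")  # prints 0.75 -> worth a closer look
```

Even a crude blend like this captures the point above: neither signal alone is reliable, but together they can tip a borderline clip into "check this more carefully" territory.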


Seeing no longer coincides with believing

 Olivia Orf

oo373220@ohio.edu


Throughout my 22 years of living to this point, I have always been told "to see is to believe" and have heard the phrase "I'll believe it when I see it" more frequently than I could count in 10 lifetimes.



 Photo courtesy of memegenerator

However, it seems that more and more often we find ourselves consuming information that we have neither seen nor believed in the past... and neglecting to question that. 

Deepfakes are an extraordinary example of this — we may have seen or heard of a CEO speaking at that conference in town, a keynote speaker's now viral message from a college graduation or a televised presidential debate. 

All of these have one thing in common that leads me to ask a question — would you be able to determine whether it was truly that person speaking in a recorded video of said speech unless you yourself witnessed the speech in person?

As tempting as it might be to say, "Yes, absolutely!" the answer is no. Deepfakes are becoming normalized at an increasing rate, and the number we encounter on a daily basis keeps growing.

This means that each person will have to, once again, realize that they must do their own research if they truly want unbiased answers. On top of that, we will have to dig much deeper to find accurate information, discerned only through reputable sources, journalists and news media.

Photo courtesy of The Journalist's Resource

A 2017 article from The Journalist's Resource speaks on fake news in multiple aspects and avenues, and addresses how we as consumers play a role in spreading incorrect information, whether it is intentional or not.

These deepfakes specifically, in a world full of fake news, possess the power to completely degrade the already waning faith consumers have in digital imaging, news, and social media. 


Photo courtesy of Pew Research Center

Social media leads us to Facebook, a giant of the social world.

23% of adults say they have shared fake news in some form, whether they knew it at the time or not. That is an overwhelming number considering how short a time we've been facing the next generation of fake news-ers and the technology they bring to the discussion.

Issues like deepfakes, fake news, and video and photo editing are no thing of the past — there are now bigger, better ways to alter content or create variations, "practical jokes" if you will, and to share them with the masses on the world wide web.

Learning to discern sources, utilize resources, and engage in challenging discussions will teach us all to listen a little bit more, argue a little less, and recognize things that are too good to be true, or too insane to seem rational.


Deepfakes in Pornography

 Regan Morello

Alert.reganmorello@gmail.com

With all of the new and fascinating technology emerging in our world today, it is important that we discern moral and immoral ways to use it. Although the term "deepfake" has only been around since 2017, when a popular Reddit page coined it, some forms of deepfake have been around since the 1990s. Deepfakes are a type of technology that alters appearance by replacing the face on a body with someone else's, most of the time in video format. This technology has been used to produce a multitude of creations, some good, and some extremely concerning.

Deepfake pornography began appearing in 2017, when the redditor who coined the term started releasing pornography he had altered using deepfake technology to replace the faces of porn actresses with those of popular female celebrities without their consent. In fact, Jennifer Walter stated in her 2020 article for Discover Magazine that over 96% of deepfakes today are pornographic. This rise in the nonconsensual use of famous women's likenesses is concerning in itself, but what about the rise it has created in revenge porn? Women who aren't celebrities are just as much victims of this, if not more so. This kind of revenge porn occurs when someone uses a person's likeness to create fake nude images and posts them online, ruining women's reputations and causing them unwarranted distress and embarrassment. Some people have no idea they have even been posted online; it could be me, you, or anyone you know. Stressful to think about, right? So, what are some potential solutions?

Deepfake - Before and After

First of all, and most obviously, do not indulge in the creation, viewing, or spread of these fake pornographic images and videos, no matter what. Second, train yourself on how to spot them, and it will be easier to report them. Deepfakes can be spotted in ways similar to the average Photoshop fails and mishaps. The MIT Media Lab provides an extensive list of ways deepfakes can be outed as fake. In short, they recommend paying attention to details in the face of the potential deepfake, such as eyebrows, moles, or wrinkles that may seem off.

Though one of the most immoral uses of deepfake technology is in the porn industry, there are many positive ways it can be applied as well. For example, Kendrick Lamar used deepfake technology in a highly creative way in his new music video "The Heart Part 5." In this video, Kendrick swaps his face with those of various famous men, celebrities who have been prominent figures in the Black community, whether for good or bad reasons. His creative use of deepfakes positively reinforces the urge to use deepfake technology in ways that benefit the world.

The Future of Truth and Misinformation

Carole Lyn Zeleny
cz812071@ohio.edu



Image by Atelier



Since 2016, the digital battlefield has become more complex and widespread around the world. False information about major events, from the COVID-19 outbreak to the 2020 US election, threatens public health and safety. False narratives overcome factual ones and spur beliefs and actions that have become increasingly violent. Pundits are divided on whether the next decade will see a drop in false and misleading narratives online. Those who predict improvement put their hopes in technological and social solutions. Others believe that the dark side of human nature is aided more than stifled by technology.



Image by: The Guardian



The Technologies that are Freeing Us are also Caging Us


New technologies have always presented some level of threat, either real or imaginary. In recent years, fake content has fueled the virality of biased inaccuracy to the extent that it has contributed directly to everything from measles outbreaks to market manipulation in cryptocurrencies, the rise of the alt-right, and the mainstreaming of conspiracy theories. Most significantly, it has been linked to the democratic outcomes of the 2016 US presidential election and the Brexit referendum in the United Kingdom. Our society and the opinions we hold are increasingly affected and even shaped by anonymous malicious actors who seek results or actions that may not be in our best interests. This is a distinctly contemporary threat.




Image by: The Guardian



Fake videos can now be created using a machine learning technique called a "generative adversarial network," or GAN. A graduate student introduced GANs in 2014 as a way to algorithmically generate new data from existing data sets. A GAN scans thousands of photos in order to produce a new photo that resembles the originals, a photo that was never actually taken in the first place. GANs can also be used to generate new audio from existing audio, or new text from existing text. This machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user began posting digitally altered pornographic videos, using Google's free open-source machine learning software to superimpose celebrities' faces onto the bodies of women in pornographic movies. You can read more in this article posted by The Guardian.
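To make the adversarial idea a bit more concrete, here is a minimal, hypothetical GAN sketch in Python with PyTorch. It trains a tiny generator and discriminator on toy one-dimensional numbers rather than photos; the network sizes, training data, and hyperparameters are assumptions for illustration only, not the setup used by any deepfake software mentioned above.

```python
# A minimal GAN sketch: a generator learns to produce samples the
# discriminator cannot tell apart from "real" data. Toy 1-D data stands in
# for the photos a real deepfake pipeline would use.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                    # outputs one fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),      # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # toy "real" data distribution
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to separate real samples from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Scaled up to images and trained on thousands of photos, this same two-network tug-of-war is the mechanism the paragraph above describes.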





Image by: Istockphoto



No Market for the Truth


It comes down to motivation. There is currently no market for the truth. The public isn't motivated to seek out verified, vetted information; they are happy hearing what confirms their views. People can gain more by creating fake information (both in money and in notoriety) than by keeping it from occurring. Avid users of social media systems like Facebook and Instagram are progressively creating 'echo chambers' of those who think alike. They unfriend those with different ideas and opinions, and spread rumors and fake news that agree with their point of view. You can read more in the Pew Research Center report "The Future of Truth and Misinformation."




Image by YouTube

The Malicious Use of Deepfakes


The malicious use of deepfakes can also cause serious harm to individuals, as well as to our social and democratic systems. Deepfakes may be misused to commit fraud, extortion, bullying and intimidation, as well as to falsify evidence, manipulate public debates and destabilize political processes. Meanwhile, the proliferation of video recorded by bystanders and police body cameras has fueled a reckoning with police violence in the U.S. Less than two years ago, when the public watched a video recording of an event such as an incident of police brutality, we generally trusted that the event happened as shown in the video. Now, with machine learning technology creating deepfake videos, we may not be seeing what factually occurred. As these deepfakes cause society to move away from "seeing is believing," that shift will negatively impact individuals whose stories society is already unlikely to believe. With compelling deepfakes in circulation, the burden of proof to verify the authenticity of videos may shift onto the videographer, a development that would further undermine attempts to seek justice for police violence. To counter deepfakes, high-tech tools are being developed to increase confidence in videos, but these technologies, although well-intentioned, could eventually be used to discredit already marginalized voices.






Image by: Malwarebytes


Video Killed the (Insert Facts Here)

 Elisabeth Warner

ew758821@ohio.edu

    In July 2015, an anti-abortion extremist group, the Center for Medical Progress (CMP), released a series of videos it claimed showed Planned Parenthood trafficking in fetal tissue obtained from abortions performed at its clinics. The videos, which investigations later showed to be deceptively edited, contained secretly recorded footage of CMP founder David Daleiden and other group members posing as representatives from a fictional bio-research company while meeting with Planned Parenthood's medical research director, Dr. Deborah Nucatola.

    The footage appears to show Nucatola discussing the illegal sale of fetal tissue for the profit of Planned Parenthood.

    More extensive footage released by CMP makes clear that Nucatola, rather than discussing selling the tissue, is describing reimbursement for expenses incurred donating the tissue to medical research, a service offered to patients at a few Planned Parenthood clinics. 

    These videos, though later discredited, were presented to the public by traditional news organizations as undisputed truth, the consequences of which extend to the present moment.

Image: House Committee on Oversight and Reform

The Power of the Pixel

    To be clear, there is no evidence that Planned Parenthood sold fetal tissue or profited from it in any way. Investigations initiated by state Republican leaders hoping to capitalize on the videos to support their anti-abortion policies in 12 states all concluded there was nothing to support the allegations made by CMP. There has never been proof of any kind that corroborated their claims. 

    There has been evidence that the videos first shared by outlets all over the country were edited in a way that did "not present a complete or accurate record of the events they purport to depict".

    In the longer footage CMP themselves released, shifting timestamps show the original video to be the product of recordings from multiple dates, manipulated to reflect their anti-abortion agenda. An analysis done by an independent transcription service that had no contact with Planned Parenthood reported that the videos had "substantive omissions". So while it's fair to argue that experts aren't omniscient, it's evident that the videos are, at minimum, misleading.

    When these videos were initially reported on, however, the overwhelming narrative was only that they were irrevocably devastating to not only Planned Parenthood but to the future of reproductive rights. Mainstream, arguably left-leaning news organizations such as The New York Times and The Washington Post seemed to spend little time vetting the videos, instead focusing on the partisan spectacle that was already unfolding. 

    Despite the questionable origins of CMP, a 2-year-old organization run by Daleiden, a 26-year-old with a history of harassment against healthcare providers, there was no real questioning of the veracity of the videos. These news organizations privileged the videos over a heavily regulated, century-old medical institution whose services were utilized by millions of Americans, simply because it didn't occur to them that what their eyes were telling them might not be true.

Image: Boldmatic

Consequences

    Because news organizations failed to perform the most basic of journalistic functions– vetting supposed evidence with a skeptical and thorough investigation– there were real world consequences that extended far beyond ideological back-and-forths between opposing sides. 

    To the surprise of no one, these videos were swiftly and predictably weaponized by conservative politicians and pundits in their long battle to restrict or eliminate reproductive health care.

    The Republican-led Senate introduced a bill to cut federal funding to Planned Parenthood, which provides birth control and preventive health services to millions of people in the United States (because of the 1976 Hyde Amendment, no federal dollars can go toward abortion; also, abortion only accounts for approximately 3% of Planned Parenthood's health care services)(this bill did not pass).

    The state legislature in Ohio voted to defund Planned Parenthood, citing the videos. Then-Governor John Kasich signed the bill into law in 2016 blocking funding for programs that included Healthy Moms, Healthy Babies, the largest infant mortality reduction program in the state. 

    Five months after CMP released the videos and the media circulated them without questioning their validity, a man named Robert Dear brought four rifles into the Colorado Springs Planned Parenthood and murdered two women and a police officer. As he was being arrested, he yelled about the "baby parts", and later told the police "he was upset that Planned Parenthood was performing abortions and selling baby parts".

    There was one reported death threat against abortion providers in 2014; in 2015, the year the videos were released, there were 94.

Drip, Drip

    To combat the damage done by the videos, Planned Parenthood has done a lot right. 

    They have been tirelessly vocal and transparent about the work they do and the public health risks that would result from their inability to do that work, pushing back in the media, on their own platforms, and in congressional hearings. The reports and investigations vindicating their denial of CMP's false allegations have been covered, if not as heavily as the videos initially were, and they continue to be supported by the public, with 62% saying they view Planned Parenthood favorably and oppose cutting its federal funding.

    In 2016, Planned Parenthood sued CMP for defamation, and a federal jury awarded them $2.2 million in damages. 

    Meanwhile, Daleiden, CMP's founder and the orchestrator of the videos, has been embroiled in legal battles and is currently facing multiple felony charges for his role in creating the videos.

    But to repair the destruction caused by a credulous media willing to run unexamined accusations is a bit like trying to salvage honey from a jar that's been smashed on the ground– you can get some back, but most of it is likely to be full of glass shards, making it both unusable and dangerous. And you get pretty sticky trying to sort it all out.

    By accepting content that was illegally obtained from an organization with a clear political agenda and no meaningful history of credibility because they were hypnotized by images from a video, news organizations betrayed their foundational principles of truth and accuracy above all else. The media traded its own integrity to promote content reasonably understood as explosive, further eroding political stability and putting the public at material risk.

    In a few weeks, the Supreme Court is likely to strike down established law that makes abortion a constitutional right. Texas is paying a bounty to those who are suspected of aiding a person trying to get an abortion. The gun violence seen in Colorado pales next to what we've seen this week, this month, and this year. 

    It would be an absurd overreach to say that the decision made by multiple news organizations to share CMP's video is responsible for all that. 

    But each drip that happens when respected news organizations neglect to treat any potential source, story, or evidence with the suspicion and caution it deserves adds up, and we never know which drip will make our collective bucket overflow. With the CMP videos, the news media went image-blind, seduced by what they thought was right in front of them, adding not a few drops to that bucket. 

    To my knowledge, no major news outlets have acknowledged that their initial reporting was sloppy. More drips. 


Recognizing fakes

It is harder than ever to trust that what we see or hear is actually real and factual information. With so many technologies and tools for manipulating audio, video, and text, it's a wonder that we ever take in real info. We go online today at our own risk. We never know what we're in for when logging onto our social media accounts, for example.

On Instagram I see videos just about every day that fall under the category of having been manipulated or being flat-out fake. Sometimes it's even from supposedly reputable news and information websites. Plenty of accounts on Instagram manipulate video from other sources to make it funnier or more dramatic. It's funny when I see a video of an interview with an athlete from years prior where someone posts it and removes part of it, or chops up the audio so that what someone says is taken out of context. Like others, I'll immediately call that page or account out and post that the video in question is fake.

There have been a couple of times when friends of mine have had their pictures stolen online, and someone cloned their social media account and pretended to be them. This is scary and is basically like identity theft. I've actually seen this happen, and if I come across the fake account I'll report it to that social media platform. It's scary to know that anyone can take your property, whether it be images, audio, or video, and post it for their own amusement and pleasure.

Fake election news stories go viral and outperform real news. Image: BuzzFeed News

Knowing what to look and listen for when it comes to fake news or video online is the first step in stopping false information from spreading across the web. There's an old saying that it takes a village to raise a child. We should all take the same approach to ridding the internet of fake news stories, videos and audio. Sometimes I even have to do a double take on certain videos or news stories, especially if I'm scrolling through my timeline. Sometimes stuff pops up and I'm on autopilot. So, always being aware and awake at the wheel is also key.

This phenomenon of fake news and videos has risen to epic proportions. Some platforms are worse than others, and we all must be careful. I've seen multiple times when reputable reporters got duped by a fake story and went ahead with it in an attempt to break the news first, only to find out the tip was fake. Then comes a retraction, which no journalist ever wants to be called upon to make.

Due Diligence

Shannon Limbach

SL668021@Ohio.edu


To See or To Read

In a series of detailed polling questions about trust in the media, news sources, and media personalities, YouGov's Trust in Media 2022: Where America Gets Their News and Who They Trust for Information makes it easy to see that most individuals lean toward viewing their news rather than reading printed pages. While print news offers more in-depth information, broadcast television news, streaming cable, and social media come in as the top three news source preferences.


YouGov

A Brief History

There are historical patterns that show the evolution toward viewership. Prior to 1920, print was our main source of news. Along came radio, which ignited an information war between the press and radio, aptly called the Press-Radio War. Radio kept advancing, and by 1933 President Franklin D. Roosevelt had begun his famous radio series, the Fireside Chats. But the Golden Age of Radio was soon overtaken by television. Fast forward 🚀 to June 1, 1980, and Ted Turner's Cable News Network launched the 24-hour news cycle.


The First Hour of CNN June 1, 1980


Mass Media

CNN was the birth of 24-hour mass media, and Americans were hooked on the news and the visual stimulation. Music Television followed suit 14 months later, on August 1, 1981. Unironically, MTV chose as its debut song "Video Killed the Radio Star" by The Buggles. MTV permanently changed the way we listen to and view music.


The First Video MTV August 1, 1981

Present Day

As consumers, we have a cornucopia of visual news and information at our disposal: YouTube, streaming news services, TikTok, Twitter, Facebook, Instagram Reels, Snapchat, and more. One need these services have in common is our viewership, as they compete for our attention. Why? Advertising revenue is the first component, but there is also a more nefarious reason: influence.

With a non-stop visual barrage, how do we, as consumers and budding journalists, decipher truth from fiction? It's not easy. We can all take a lesson from Daniel Dale. Dale is a Canadian journalist and a leading fact-checker in the industry. He began his career in Toronto, Canada, fact-checking politicians for the Toronto Star. In 2015, Dale became the Toronto Star's Washington correspondent, and he is now a contributor for CNN. He was exceptional in his rapid-response fact-checking of former U.S. President Donald Trump at the Republican National Convention.




Multi-Step Fake 

The above video, introduced by Daniel Dale, shows the novice eye the ways in which videos can be altered, misused, and still appear legitimate. A German-speaking reporter, Marvin Bergauer, was reporting on Austria's climate demonstration. The creator of the altered video took the original footage and reintroduced it in a different context: what was originally a demonstration soon turned into a video of the media supposedly trying to fool the public, with the claim that the people in body bags were dead Ukrainians. The fake took many steps to appear legitimate. The voice of an English-speaking reporter, CNN's Cal Perry, was dubbed over the original reporter, who was speaking German. (This was simple because the Austrian reporter was wearing a mask.) The creator removed the old banner* and replaced it with a legitimate-looking news headline**.

Original*
   

Fake*



The purpose of this fake is to embarrass and delegitimize the mass media. Another purpose is to influence the audience and create distrust of news reporters and programming regarding the Ukrainian invasion.


News Travels Fast

The average viewer sees up to 10,000 ad images a day. Active social media users are exposed to media ads, video, and pictures regularly. It can be very difficult to distinguish the good from the bad. We also have biases that make us willingly accept something as truth if it aligns with our ideology. However, we have an ethical obligation to call out misbehavior regardless.

The keys to reporting on false images or videos remain constant. The first is to verify that the perceived information is truly false.


1. Find the Source
2. Review the Source
3. Do your Research

Confirm the video matches the area where the event took place.

1. Landmarks
2. Sketch Area
3. Google Maps
4. Confirm

When was the video filmed?

1. Giveaways
2. Lighting
3. Weather 
4. Other Sources

The second step, according to Veronica Kwan, is to report altered images and manipulated videos; but there is a fine line to walk to achieve this without inadvertently amplifying the fake information. We as consumers also have an obligation to ensure we are not passing along discredited videos and altered images. Once a video or picture has been confirmed to be altered, First Draft provides important advice on how and when to cover manipulated information.



Social media, twenty-four-hour news channels, and abundant options to view content did not create disinformation. It has been around since humans could speak. What has changed are the multiple avenues to spread disinformation, the technology to perfect it, and the lightning speed at which it travels. So I want to leave you with this familiar phrase, inaccurately and ironically credited to Mark Twain.

 A lie travels around the globe while the truth is putting on its boots. 


Is a picture still worth a thousand words?

Lori Stem

loristemou@gmail.com (ls603219@ohio.edu)

Credit: Craftypuzzles.com

"Pictures don't lie."

"A picture is worth a thousand words."

"Photography is truth."

"Seeing is believing."

These are all phrases we've grown up with. Well, many of us who didn't grow up with technology at our fingertips, anyway. These phrases have been etched into our minds for a long time.

Too bad they are now the farthest thing from the truth in many cases. Editing software is extremely accessible, and anyone can master the skill of manipulating images. It's quite easy to edit an image to tell a narrative of interest to you or others, post it, and watch the shares and likes multiply.

It's scary that these false images are taken for the truth far too often. There are consumers who don't have the critical-thinking abilities to recognize that these images are not the truth (or who don't care). And some might lack the tech skills to even know how prevalent editing capabilities are.

Likewise, there are some journalists who don't vet sources before using and sharing images. They are rushing to post the next hot story before their competition does. Reputable news sources are less likely to do this.

So, how do you know whether an image you see on social media is manipulated? This BBC article gives some easy-to-spot clues, or watch this short video to learn how.

Why does it matter if manipulated images are shared?

Well for one, journalists' use of manipulated images is unethical and misleading. Reputable journalists adhere to high standards by informing citizens of events using facts and vetting their sources of information. 

Secondly, this article from The Arthur W. Page Center at Penn State says that consumers who see manipulated images form false connections. And we know false connections lead to believing narratives that are simply not true.

Sometimes, people just don't care

...and this is a major problem. Take my two sisters, for instance. They couldn't be more different on so many levels.

My younger sister is an interactive media specialist for a conservation non-profit. Much of her career has been time spent creating videos or taking photos for work. 

She understands the ins and outs of editing software. This has given her an edge when it comes to easily spotting manipulated images.

My older sister is not tech-savvy. Well, she uses social media, but she lacks experience and knowledge of current tech capabilities.

Unfortunately, she has also always had feelings of low self-worth and self-esteem. Her career is one that gives her a feeling of power over others.

To her, the news is what she sees scrolling on Facebook. Almost daily, she likes, comments on, and shares fake stories, images, and videos.

When her spouse points out she is sharing fake stories and images she shrugs her shoulders and goes about her merry way. 

In her case, she craves feeling like part of an online community and bolstering her beliefs and those of others who are like-minded. She wants to keep her social circle intact.

And what is the way she does this? Forming false connections to false narratives by believing in and sharing manipulated images. 

Check your sources, check your sources, check your sources

To combat the spread of misleading and manipulated images, please be responsible and check your sources before liking or sharing. Look at the clues photos give, use reverse image search to check out images, and be sure the content is from reputable sources and journalists.
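As one small, hypothetical example of what a technical check might look like (my own sketch; the advice above only mentions reverse image search), a perceptual hash can hint at whether a suspect image is just a lightly edited copy of an original you already have. The file names and the distance threshold below are placeholder assumptions, and the snippet relies on the third-party Pillow and ImageHash packages.

```python
# A hedged sketch: compare a suspect image against a known original using a
# perceptual hash (pip install Pillow ImageHash). File names are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
suspect = imagehash.phash(Image.open("suspect.jpg"))

# Subtracting two hashes gives a Hamming distance: a small distance suggests
# the same underlying photo (perhaps recompressed or lightly edited), while a
# large distance suggests a different or heavily altered image.
distance = original - suspect
print(f"Hash distance: {distance}")
if distance <= 8:  # threshold chosen only for illustration
    print("Likely the same underlying photo.")
```

None of this replaces checking the source and the surrounding context; it only helps with the narrow question of whether two images are the same picture.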

And remember...there are many types of misinformation. Manipulated images are just one of them. 

Credit: Groundviews.org

Leave a comment and tell me about a time you were not sure if an image was manipulated. Thanks!

Not Everything Meets the Eye

 Holly Friedel 

hf004717@ohio.edu

Visual Images Lie

We like to think that when we see something with our own eyes, it's the truth. Countless times I have looked at online drama between influencers and, because of one photo, been fully convinced that what I perceived in the image is what was happening at that time. Honestly, another good way to look at it is a family photo a mom posted on Facebook. Sure, everyone is smiling; little do you know they were all screaming at each other five seconds before the camera flashed. And I know, most of us do not care about what was happening behind the scenes of a family picture, but because of high-tech editing, artificial intelligence and more, the things that are important to us are falsified.

It is not just photos anymore that are insanely edited; an entire video can be changed and spread like wildfire. A perfect example is when the country said the Catholic school kids who wore their MAGA hats in front of Black Hebrew Israelites were racist. Later we learned from an unedited clip that the kids were trying to defuse the harassment the Hebrew Israelite group was pouring onto bystanders, and ultimately it was all a large misunderstanding. It makes one look toward film theory and the idea of form versus content. Ultimately, it focuses on how the meaning of moving images in film connects to what is inside the footage, when in reality the way the moving images are selected and edited alters how they are perceived.

That being said, I have just learned about deepfakes. What are they? Basically, deepfakes are videos in which a face, body and voice can be altered so they appear to belong to someone else. Usually deepfakes are used to spread false information or put someone in a negative light. In this YouTube video, Deep Fakes, we can see how easily it can be done, and how a lot of the time it is done with a political agenda. In the video, we see just how far it can go when they show how flowers in bloom and weather changes can be generated with the software that creates deepfakes. Honestly, it's scary. It makes me question any viral video that pushes aggressive views, and now, I may even sadly question a beautiful video of flowers in bloom.

Like I said previously, no one thinks something is falsified if they view it with their own eyes. In a study by Penn State, 58% of people who viewed a fake news video on their phone believed it to be true, versus 33% of the audience finding the story credible after reading it as an article. Taking the time to actually dive into how to identify fake news made me sad and disappointed, and at the end of the day it makes me wish technology did not exist. Artificial intelligence even creates fake faces for companies to use as models to increase diversity, when sadly it is not a real person. So either do not take the context of photos and videos too literally, or completely investigate what you are looking at if it is something important to you.

                                                                         Credit: Insider 

Is Everything Truly What It Seems?

 Aaron Liles

al508219@ohio.edu


Source: Washington Post

Photo Manipulation and the Dangers of Its Use

Photo manipulation has existed for a very long time; however, with the rise of software such as Photoshop, almost anyone has access to photo manipulation tools. Now, photo manipulation is not inherently bad. For example, someone taking a group selfie with their family might want to change the background of the photos they took, so they swap it for a background they like better. That in and of itself is not harmful.

However, a lot of harm can potentially be caused by using photo manipulation for unethical reasons. In an article posted by the GCF (Goodwill Community Foundation), they make a great point about how journalists often make small changes to a photo, such as a lighting adjustment, but that it is unethical to change a photo in a way that can mislead the public. Look at the image above for an example of this: on the right is a real photo of Ice Cube and 50 Cent wearing a Big 3 basketball hat and a New York Yankees baseball hat, respectively. On the left is a manipulated photo of them wearing Trump 2020 hats. This is where the danger comes in: with the rise of social media, the manipulated photo can spread quickly across the internet and cause people to be misled by what is in it.

The Rise Of Deepfakes 

Deepfakes are another manipulation technique that has seen a sharp uptick in usage in recent years. In essence, a deepfake is AI-generated media that manipulates audio and video. Like photo manipulation, this is not always used with harmful intent, but the majority of the time it is. The ORF (Observer Research Foundation) states that using deepfakes can create many intentional or unintentional consequences, not just for individuals, but for society at large as well.

Source: The Guardian

In the GIF above, Jimmy Fallon is dressed up as Donald Trump, and on the right is a deepfake alteration that makes Fallon look exactly like Trump. This shows just how convincing deepfakes can be; they can look like the real thing. This can easily mislead people and lead to consequences.

The Future Of Photo Manipulation And Deepfakes

Photo manipulation and deepfakes are here to stay in society, so it is important to know how to spot them in order to combat the spread of fake news. This also showcases how critical it is to double check sources when fact-checking information, as it can be easy to fall prey to a really effective photo manipulation or a deepfake.