With the rise in political tensions over the past few years, we have also seen a surge in social media communities that rally around radical ideas. From white supremacists to neo-Nazis, there is no shortage of hatred online.
Social media platforms are characterized by their ability to build communities. This can be beneficial for people seeking support and motivation not available in their immediate surroundings, but it also makes radicalized and even violent communities far more accessible.
With these divides becoming more prominent and controversy gripping our nation, we begin to question whether or not social media platforms are obligated to monitor and regulate such communities. Should social networks remain open forums for communication, or should they be censored? Are providers obligated to allow all sides to share their perspectives? Do the principles of free speech apply to social media?
So many questions come to mind when thinking about these communities, so the best place to start is with the platforms' own guidelines.
Graphic via https://www.dw.com/en/hate-speech-curb-should-look-beyond-facebook-twitter/a-39550114
Twitter's mission is to be a community where everyone can share diverse thoughts and perspectives, but its community standards make its code of conduct very clear when it comes to hate speech and violence. While a wide range of opinions is encouraged, Twitter draws its line when users promote violence or use hateful imagery in profile pictures or display names. In addition, Twitter bans the use of targeted slurs, hateful tropes and the intentional misgendering of transgender individuals.
According to Facebook's community standards, they do not allow hate speech because "it creates an environment of intimidation and exclusion and in some cases may promote real-world violence." In recent months, Facebook has shown a renewed commitment to protecting its users and communities from fake news, working to remove accounts that create and promote fake and sometimes even hateful content.
Like the previous platforms, Instagram's community guidelines take a strong stance against hate speech, though they include a clause allowing "stronger conversations" for people in the news. However, personal attacks based on religion, race, sexual orientation, gender, disability or disease are never allowed.
While it may seem like these platforms have a continued commitment to diversity and inclusion by banning hate speech, the question arises whether they are actually doing a good job of enforcing it. For example, 11,696 pieces of Instagram content contained anti-Semitic sentiments following the shooting at a Pittsburgh synagogue.
Do you think social media platforms are following their own guidelines? Or do they simply lack the manpower to monitor all of the content constantly being distributed around the world?