
What The Racist GIF Debacle Teaches Us About Online Hate Speech

Every social media outlet has faced the inevitable question of how to deal with toxic and harmful content.

By Kari Sonde

Giphy has returned to both Snapchat and Instagram, a month after its swift and unceremonious removal from both apps upon the discovery of an explicitly racist GIF, which depicted a gorilla cranking an ever-rising “N-word crime death” counter.

According to The Verge, Giphy said that the GIF slipped through because of “a bug in our content moderation filters specifically affecting GIF stickers,” allowing it to bypass the platform’s rules that prohibit users from posting such content. Giphy fixed the bug, and Instagram and Snapchat added it back.

Though the incident was resolved only after the public caught it, the response was notably different from past faux pas: Instagram and Snapchat pulled the app immediately, and publicly. There was no rationalizing and no invoking of the First Amendment, just a firm, concise apology and a correction.

Every social media outlet, from Facebook to Twitter to Reddit, has faced the inevitable question of how to deal with toxic and harmful content, mostly after public outcry. Facebook has been dealing with a slew of privacy issues of late, but it has yet to meaningfully contend with racism and hate speech on its platforms. The company recently ran a survey implying it was up to users to decide whether pedophilia should be condemned, while quietly allowing advertisers to target “Jew haters” and other hate groups, according to findings by ProPublica. The New York Times reports that this lax policy even stoked tensions in Myanmar, where Facebook accepted advertisements attacking the minority Rohingya population. Meanwhile, Facebook has banned women for talking about their personal experiences, according to Dazed.

Will Facebook do anything about hate speech on its platform? The appropriate artificial intelligence is not “there yet,” says CEO Mark Zuckerberg, who told the U.S. Congress in his April 10 testimony that it could take five to 10 years to build software that reliably identifies hate speech. But while Zuckerberg cites linguistic nuance as a reason for AI failure, Facebook has also allegedly suppressed free speech by helping the Vietnamese government remove dissident content. And, as Slate explained last year, “five to 10 years” tends to be the catchall timeline tech organizations offer when promising everything from the elimination of chemotherapy to flying cars.

Facebook is facing uncomfortable questions right now, but it’s not the only one. Reddit, an arguably shadier part of the internet, let the New Yorker into its offices to watch its detoxification process, which consists of a human staff combing through subreddits to determine how bad something has to be before it gets shut down. Reddit updated its policies last year around the time it found violent misogynistic content in the r/incels subreddit, according to Vice.

Twitter still must answer for why users spewing neo-Nazi-affiliated content are allowed to remain on the platform, while other users reportedly find that standing up to abuse can get them, rather than their abusers, banned. Last year, Twitter began removing the verified check mark from popular accounts associated with alt-right movements, but white supremacists like Richard Spencer had already used the platform to spread violent messages. Now, Twitter is looking into eliminating bot accounts that spread divisive misinformation.

Public frustration is palpable, and it is perhaps the only driving force behind any change. Unilever, one of the largest advertisers in the world, floated the possibility in February of pulling its ads from Facebook if the company doesn’t follow through on regulating hate speech. The Washington Post reported that the lack of federal regulation of the tech industry is pushing state governments to act. California, where many of the largest tech companies are headquartered, is drawing up legislation to tackle social media bots. Maryland, New York, and Washington are following close behind, pushing bills that would make online political ads more transparent to voters after Facebook’s Cambridge Analytica scandal.

Though Instagram is owned by Facebook, it tackled the racist GIF issue swiftly. Snapchat’s stock, meanwhile, has dropped sharply as celebrities voiced complaints about the app: Kylie Jenner asked whether anyone still used it, and Rihanna dropped it after an ad asked users whether they would rather slap her or punch Chris Brown.

The GIF incident marks a moment of reckoning for social media platforms that have previously used the First Amendment as a shield. The speed of the resolution signals a shift in power, especially as social media users are deleting accounts and starting to monitor their own privacy settings. As people pull back from the internet, it’s important to watch how large tech and media companies grapple with their place in society and what users demand from them. They’ve made money off of us for far too long. It’s time for accountability.

[Photo: Getty Images]