New Zealand shooting shows how YouTube and Facebook spread hate and violent images — yet again
Friday’s slaughter in two New Zealand mosques played out as a dystopian reality show delivered by some of America’s biggest technology companies. YouTube, Facebook, Reddit and Twitter all had roles in publicizing the violence and, by extension, the hate-filled ideology behind it.
These companies – some of the richest, most technologically advanced in the world – failed to rapidly quell the spread of troubling content as it metastasized across platforms, bringing horrific images to internet users worldwide.
The alleged shooter, a heavily armed man authorities have not yet named, also released a 74-page manifesto denouncing Muslims and immigrants that spread widely online. He left behind a social media trail on Twitter and Facebook that amounted to footnotes to his manifesto. In the two days before the shooting, he posted about 60 links, many of them repeats, across different platforms; nearly half pointed to YouTube videos that were still active late Friday.
The horror began Friday morning in New Zealand, as the alleged shooter used Facebook to live-stream his assault on Al Noor Mosque, one of two Christchurch mosques that he attacked and the scene of most of the 49 fatalities. Many hours later – long after the man and other suspects had been arrested – some internet users were still uploading and re-uploading the video to YouTube and other online services. A search of keywords related to the event, such as “New Zealand,” surfaced a long list of videos, many of which were lengthy and uncensored views of the massacre.
The almost instantaneous spread of online images from Friday’s shooting underscored how deeply entwined social media platforms have become, with savvy users moving content back and forth faster than the platforms themselves can react. It was also a reminder of the repeated inability of YouTube, the world’s biggest video site, to detect and remove some types of violent content, even though it has for years automatically flagged nudity and copyrighted music.
“The rapid and wide-scale dissemination of this hateful content – live-streamed on Facebook, uploaded on YouTube and amplified on Reddit – shows how easily the largest platforms can still be misused,” said Sen. Mark Warner, D-Va. “It is ever clearer that YouTube, in particular, has yet to grapple with the role it has played in facilitating radicalization and recruitment.”
Policy experts say U.S. regulators and Congress are ill-equipped to intervene in the problem, but technology companies have become uncharacteristically vulnerable in Washington. A growing number of policymakers from both parties have raised the prospect of new privacy laws, fines and even breakups of past tech mergers. How regulation would prevent the posting of such content was not immediately clear, however.
Public and political scrutiny is growing over YouTube in particular. It has sparked a succession of controversies in recent years for spreading hateful online conspiracies, violent terrorist recruiting videos and a wide range of inappropriate content reaching children, including suicide instructions spliced into kids’ videos.
Tech companies “have a content-moderation problem that is fundamentally beyond the scale that they know how to deal with,” said Becca Lewis, a researcher at Stanford University and the think tank Data & Society. “The financial incentives are in play to keep content first and monetization first.”
The New Zealand massacre video, which appeared to have been recorded with a GoPro helmet camera, was discussed on the fringe message board 8chan, an anonymous forum known for its politically extreme and often hateful commentary, even before the attack began. Users on the site followed the attack in real time, cheering or expressing horror.
They also traded links to the alleged shooter’s hate-filled postings and to copies of his videos on various sites, while encouraging one another to download the clips before they were taken offline. By Friday afternoon, short clips of the footage had been edited to superimpose images of YouTube personalities, as if they were live-streaming a video game.
The first-person shooting video spread particularly widely on YouTube as the people uploading it outraced website moderators’ ability to delete the clips. Some of the videos were named after quotes from the shooter, such as, “Let’s get this party started.”
YouTube tweeted Friday morning: “Our hearts are broken over today’s terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.” Twitter said it had suspended the account of one of the suspects and was working to remove the video from its network, adding that both violated its policies.
On message boards such as Reddit, people posted links to the videos, which would then sometimes get deleted, only for others to post new links to alternative “mirror” sites, beginning the cycle anew.
Reddit, one of America’s most popular websites, on Friday banned forums named “gore” and “watchpeopledie,” where the videos had been reposted for users to narrate and comment on in real time. A moderator on the “watchpeopledie” forum had defended keeping the video online because it offered “unfiltered reality.” The 7-year-old forum had more than 300,000 subscribers at the time of the New Zealand shooting.
Reddit said in a statement that it was “actively monitoring the situation in Christchurch, New Zealand. Any content containing links to the video stream are being removed in accordance with our site-wide policy.”
When a shooting video gets uploaded to social media sites, review teams often use that video to create a digital fingerprint, known as a hash, that they can use to build an automatic blacklist for when it gets posted again. The years-old algorithmic technique, first popularized as a tactic to combat the spread of child pornography, is now used to automatically flag copyrighted material, porn and other content that violates the social media sites’ rules.
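In rough terms, the matching step can be sketched as below. This is a minimal illustration, not any platform’s actual system: the function names are hypothetical, and the cryptographic hash stands in for the perceptual fingerprints production systems compute from the video content itself.

```python
import hashlib

# Illustrative sketch of hash-based blacklisting; all names here are
# hypothetical. Real systems use *perceptual* hashes derived from the
# video's frames, not the cryptographic hash shown for simplicity.

blacklist: set[str] = set()  # fingerprints of videos reviewers removed

def fingerprint(video_bytes: bytes) -> str:
    """Reduce an upload to a fixed-length fingerprint (a "hash")."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_removed_video(video_bytes: bytes) -> None:
    """Review teams add a removed video's hash to the blacklist."""
    blacklist.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """New uploads matching a blacklisted hash are flagged automatically."""
    return fingerprint(video_bytes) in blacklist
```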
But such algorithms remain limited, experts say. Those uploading videos can sidestep the rules by altering the clips in small ways, such as adding a watermark, distorting the music, or changing the video’s size, editing or speed. Several of the shooting videos reposted to YouTube appeared to have those alterations, though it was unclear whether those changes contributed to their remaining online.
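One way to see why small edits matter: perceptual fingerprints are compared by similarity rather than exact equality, so a minor alteration usually stays within a matching threshold while a heavier one does not. The sketch below is an assumption-laden illustration, using the third-party Python libraries Pillow and imagehash, hypothetical frame files, and a made-up threshold.

```python
from PIL import Image   # pip install pillow imagehash
import imagehash

# Hypothetical frames sampled from a known removed video and a new upload.
known_bad = imagehash.phash(Image.open("removed_video_frame.png"))
candidate = imagehash.phash(Image.open("uploaded_video_frame.png"))

# Subtracting two ImageHash objects yields the Hamming distance between
# the 64-bit fingerprints. A watermark or slight re-encode usually stays
# under a tuned threshold; cropping, re-speeding or heavy overlays can
# push the distance past it, letting the re-upload slip through.
THRESHOLD = 8  # illustrative value, not any platform's real setting
if known_bad - candidate <= THRESHOLD:
    print("flag as probable re-upload of removed footage")
```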