Fighting the Darkest Corners of the Web
New Zealand has worked hard to keep the name of the man charged with killing 51 Muslims in Christchurch out of the news, and to restrict the spread online of the hateful ideology he is accused of promoting, Jamie Tarabay writes in The New York Times. But the footage, games, memes and messages that still populate the dark corners of the global internet underline the immensity of the task, especially for a small country like New Zealand, Tarabay reports.
“The internet is a very complex and rough environment, and governments, especially small governments, don’t have as many cards as they would like to play,” said Ben Buchanan, a cybersecurity expert who teaches at Georgetown University.
Shortly after the 15 March attack, Prime Minister Jacinda Ardern declared that she would never utter the accused’s name, and that she would do whatever she could to deny him a platform for his views.
A few days later, the New Zealand government banned the sharing or viewing of a 74-page manifesto that the gunman is believed to have written. The country also declared it a crime to spread the video purporting to show the massacre; more than a dozen people have been officially warned or charged.
Ardern followed those actions with an effort, which she branded the Christchurch Call, to enlist tech companies like Facebook, Google, Twitter and YouTube to do more to curb violent and extremist content. In an op-ed, Ardern noted that her government could change gun laws and tackle racism and intelligence failures, but that “we can’t fix the proliferation of violent content online by ourselves”.
Seventeen countries and the European Commission, as well as eight large tech companies, have signed on to her call. And in late June, leaders at the Group of 20 summit in Osaka, Japan, issued their own appeal to tech companies, declaring in a statement that “the rule of law applies online as it does offline”.
But if anything, the appetite for material connected to the Christchurch attack continues to grow, said Ben Decker, the chief executive of Memetica, a digital investigations consultancy.
Alex Stamos, a former chief security officer at Facebook and now the director of the Stanford Internet Observatory, said at the hearing of the accused that there were several steps tech companies could take to address extremist content online, including being more transparent.
“While there is no single answer that will keep all parties happy, the platforms must do a much better job of elucidating their thinking processes and developing public criteria that bind them to the precedents they create with every decision,” Stamos said.
“There remain many kinds of speech that are objectionable to some in society, but not to the point where huge, democratically unaccountable corporations should completely prohibit such speech,” he added. “The decisions made in these gray areas create precedents that aim to serve public safety and democratic freedoms, but can also imperil both.”
Original article by Jamie Tarabay, The New York Times, July 5, 2019.
Photo by Adam Dean.