The recent deadly racist attack in Buffalo, New York, planned with the tactical advice of online discussion groups, is prompting calls in Canada and beyond for better monitoring of internet content. But civil liberties activists say it is logistically difficult to effectively cleanse the network of hateful or violent material.
The Tops supermarket massacre left 10 people dead and three injured. Officials believe the attack was a racially motivated hate crime.
An online cache of disturbing posts suggests the suspected Buffalo shooter was seeking advice from like-minded people on moderated online chat rooms.
The shooting has once again raised questions about how effectively social media platforms can respond to threatening content while safeguarding freedom of expression online.
The alleged Buffalo shooter also discussed details of the planned attack on the Discord online platform. Discord allows users to create private, invite-only channels, but the site also hosts public channels that anyone can join.
“What kind of bullets will defeat bulletproof vests?”
On Discord, the suspect posted a diary spanning nearly two years that included a racist manifesto heavily inspired by the Christchurch shooter's manifesto, along with detailed plans to carry out an attack.
The suspected Buffalo shooter posted messages asking for advice on tactical gear, such as body armour, what weapon to use and where to obtain certain bullets. "Is there a Discord that mainly talks about tactical gear?" one post from August 2020 reads. "And what kind of bullets will defeat bulletproof vests?"
The alternative media outlet Unicorn Riot discovered web posts apparently linked to the Buffalo suspect and shared the content with CBC News. CBC News is not republishing the most disturbing and racist material contained in the posts.
‘Canada is not immune,’ say leading black voices in response to Buffalo mass shooting
The 'great replacement' conspiracy theory unified white supremacists long before the Buffalo, N.Y., shooting
In addition to asking for specific advice on conducting a mass shooting, the suspect live-streamed the attack on Twitch, an Amazon-owned platform often used to stream video games. Twitch deleted the video within two minutes of the violence starting.
But the post was re-uploaded online, going viral on platforms like Facebook and Twitter.
Amarnath Amarasingam, a professor at Queen's University and an expert on extremism and online communities, said diary entries uploaded by the suspect reveal that Discord flagged one of his posts when he attempted to upload the Christchurch shooter's manifesto, but the platform did nothing to follow up.
“If they had even bothered to look at his diary, it would have been immediately clear that he was planning an attack because he said so directly and openly from the very beginning,” Amarasingam said.
“In the long list of red flags that have been missed, you can also add this one.”
“Hate has no place on Discord”
In an email to CBC News, Discord provided a response to the attack. “Our deepest condolences go out to the victims and their families,” a company spokesperson wrote. “Hate has no place on Discord and we are committed to fighting violence and extremism.”
Discord said that, to the best of its knowledge, the alleged shooter maintained "a private, invite-only server…to serve as a personal chat log." But about 30 minutes before the attack, "a small group of people were invited and joined the server."
Efficiently and quickly moderating this kind of content is not easy. Last year, the Liberals proposed a bill that has been criticized for failing to strike the right balance between privacy rights and online safety.
"Regulations must be thoughtful and nuanced, recognizing how vital free speech is to a democratic society," Cara Zwibel of the Canadian Civil Liberties Association said in a statement to CBC. "A government that believes it can eradicate online hate or clean up the internet by imposing strict takedown requirements on platforms is engaged in a losing battle."
"Governments should work to require platforms to be more transparent about how they address these issues and, in particular, the tools and methods they use to amplify, promote and monetize certain types of online expression," Zwibel said.
WATCH: Black Canadians react to mass shooting in Buffalo:
During the 2021 federal election campaign, the Liberals promised to introduce new legislation within the first 100 days of their mandate “to address serious forms of harmful online content, particularly hate speech, terrorist content, content that incites violence, child sexual exploitation material and the non-consensual distribution of intimate images.”
They pledged to "ensure that social media platforms and other online services are held accountable for the content they host." The move was partly in response to the 2017 hate-motivated attack on a mosque in Quebec City and the deadly van attack in London, Ontario, in June 2021.
Canadians among the most active in right-wing extremism online, study finds
While the government missed the 100-day mark in early February, it has since set up a panel of experts to make recommendations to Heritage Minister Pablo Rodriguez. Their findings will inform policies regulating social media platforms.
“What happens online doesn’t stay online,” Rodriguez said. “Online violence is real violence and we need to tackle it.”
Amarasingam is part of this group of experts.
“All of this has to come under some sort of legislation that forces some of these platforms to think about the risks inherent in their service so they can think about how to prevent them,” Amarasingam said.
New Zealand’s response
New Zealand faced a similar challenge in 2019 when the Christchurch shooter live-streamed his attack and posted his manifesto online. Authorities moved quickly to ban the video from public circulation. The country's chief censor has also classified the Buffalo video, diary and manifesto as "objectionable," in part because the attack was inspired by the one in Christchurch, creating renewed trauma for people there.
Academics and others can request an exemption to use prohibited content in limited contexts for research purposes.
Rupert Ablett-Hampson, New Zealand’s acting chief censor, said removing content like that posted by the Christchurch shooter did not completely stop the spread of racist manifestos or misinformation.
“What we can’t categorize is the underlying misinformation and hatred…that is ultimately behind these actions,” Ablett-Hampson said.
“We really need to look to tech companies to be able to take responsible action when there is misinformation online.”