It didn’t take long for a terrorist to show how hard it is to prevent violent extremist content from being shared online.

Within six months of the attacks at two Christchurch mosques on March 15 last year, which were live streamed on Facebook, a far-right terrorist’s attack at a German synagogue was broadcast live on Amazon’s video-streaming platform Twitch.

In an echo of the Christchurch attack, it was users who reported the video to Twitch, which was up for about half an hour before being removed.

Last year, New Zealand Prime Minister Jacinda Ardern was commended for her leadership on the Christchurch Call, bringing together governments and tech companies with the aim of eliminating terrorist and violent extremist content online.

A year on, the Christchurch Call is still an important initiative. But one of the biggest challenges we face is to prevent far-right groups from simply moving to “the dark web”.

Three missing nations in the Christchurch Call

Since launching at a global summit in Paris last year, the Christchurch Call has generated some momentum – including the relaunch of the Global Internet Forum to Counter Terrorism as a staffed and funded independent legal entity, with an expanded mandate to counter extremism as well as terrorism.

A new crisis response protocol now encourages quick and effective cooperation between the tech sector and governments in responding to terrorist incidents.

And global support for the Call has grown: as Ardern has highlighted, it’s now backed by 48 countries, three international organisations and eight online service providers.

But there’s clearly a long way to go in building a truly inclusive, effective international framework, especially because of the three critical nations that are not involved: the US, Russia and China.

The Trump administration’s refusal to sign the Christchurch Call weakened it from the start. Some major US tech firms signalled their support – including Microsoft, Facebook and Google – but the absence of the world’s leading nation was a major blow.

One might be tempted to blame President Trump himself, but the US approach was grounded in concerns about the impact on the First Amendment to the US Constitution, which guarantees freedom of speech, and a broader historical and cultural reluctance to regulate the private sector.

The decision was also made against a political backdrop in the US, in which right-wing voices complained about being shut out of mainstream and new media. In a thinly veiled reference to the Christchurch Call, President Trump said:

A free society cannot allow social media giants to silence the voices of the people. And a free people must never, ever be enlisted in the cause of silencing, coercing, cancelling or blacklisting their own neighbours.

Russia and China are also notably absent. Without some of the world’s major non-western social media platforms, such as Weibo and WeChat, the initiative is unlikely to succeed.

Algorithms are not up to the task

A second more technical problem relates to the algorithms used to search for hate speech, violence and terrorist content.

Social media companies rely on these algorithms to funnel content to their users, but they are not yet effective at quickly identifying violent extremist content. Facebook has indicated that automated processes still struggle to distinguish real violence from other content, including footage of genuine military operations and movies that depict violence.

Reports suggest Facebook is using military footage to train its algorithms to identify terrorist violence online. But the technical capacity to monitor vast amounts of user-generated data is not there yet.

Last year, Facebook’s chief AI scientist Yann LeCun said artificial intelligence is years away from being able to moderate this type of content, particularly when it comes to screening live video.

A third problem is the ongoing growth of right-wing violence and hatred. If social media is a reflection of society, then it is no surprise that extremism continues to flourish online.

Dark social media

The good news is that globally, terrorist incidents have fallen by 52% since 2014, largely due to successes in fighting groups like ISIS and Boko Haram. But far-right violence continues to flourish, with a 320% increase over the past five years.

High-profile attacks inspired by extreme far-right ideology have also continued: a gunman killed 22 people in El Paso, Texas, in August 2019, and an attack in Hanau, Germany, killed nine people in February this year.

Social media companies are ill-equipped to counter far-right narratives that feed these attacks by distorting perception, sowing division and feeding confirmation bias.

The problem is compounded by the growth in “dark social” networks, including applications like WhatsApp and Snapchat, where users share content without any information provided about the source.

Recent research shows that 77.5% of content shares happen on dark social channels, compared with just 7.5% on Facebook.

The dark web continues to proliferate too, with the controversial 8Chan site, which was regularly used by hate groups, moving to a network of inaccessible and encrypted servers.

Countries shouldn’t shy away from advocacy on these issues. Small states can be successful advocates for responsible standards and social behaviours. But we’re only at the beginning of a long and complex process of change.

To measure progress, we need to develop clear metrics based on online patterns and trends to assess and sustain the Christchurch Call. This means including a wider range of tech providers and countries – and, just as importantly, dark social and dark web services.

Joe Burton, Senior Lecturer, New Zealand Institute for Security and Crime Science, University of Waikato

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Guest Post content does not necessarily reflect the views of the site or its editor. Guest Post content is offered for discussion and for alternative points of view.