Violent Extremism, Christchurch Call & El Paso

  May 19, 2020

What is violent extremism?

Violent extremism is becoming widespread. It refers to the beliefs and actions of people who support or use ideologically motivated violence to further radical ideological, religious, or political aims. Violent extremist views can manifest in connection with a range of issues, including politics, religion and gender relations. No society, religious community, or worldview is immune to violent extremism.

How is radicalization of people connected to it?

Radicalization is the process through which an individual or a group comes to regard violence as a legitimate and desirable means of achieving a political goal.

What is a fertile ground for radicalization?

Socio-economic, psychological, and institutional factors can all lead to violent extremism. These are commonly grouped into "push factors" and "pull factors" that contribute to radicalization.

“Push Factors” drive individuals towards violent extremism, such as: marginalization, inequality, discrimination, persecution or the perception thereof; limited access to quality and relevant education; the denial of rights and civil liberties; and other environmental, historical and socio-economic grievances.

“Pull Factors” nurture the appeal of violent extremism, for example: the existence of well-organized violent extremist groups with compelling discourses and effective programs that are providing services, revenue and/or employment in exchange for membership. Groups can also lure new members by providing outlets for grievances and promise of adventure and freedom. Furthermore, these groups appear to offer spiritual comfort, “a place to belong” and a supportive social network.

How is social media involved?

There is a fast-growing link between the Internet, social media, and violent radicalization: mainly through the dissemination of information and propaganda, as well as engagement with audiences interested in radical and violent messages.

Due to the convenience, affordability, and broad reach of social media platforms such as YouTube, Facebook and Twitter, terrorist groups and individuals have increasingly used social media to further their goals and spread their message. Attempts have been made by various governments and agencies to thwart the use of social media by terrorist organizations.

Terror groups take to social media because it is cheap, accessible, and allows them to reach large numbers of people quickly. Social media also lets them engage with their networks. In the past it was difficult for these groups to reach the people they wanted; social media now allows terrorists to deliver their messages directly to their intended audience and interact with them in real time.

ISIS uses these sites for radicalisation. Western domestic terrorists also use social media and technology to spread their ideas.

Terrorist organizations have used social media platforms such as Facebook, Instagram and Twitter for their propaganda campaigns, and to plan terrorist attacks against civilians. Far right groups, including anti-refugee extremists in US, Germany, New Zealand and other countries are also increasingly exploiting tech platforms to espouse anti-immigrant views and demonize minorities.

The Christchurch shootings in New Zealand in March 2019 and the El Paso massacre in the US in August 2019 need to be understood deeply to grasp the kinds of danger posed by terrorists using social media.

What are the big tech companies doing to counter this?

Due to the growing political will within Western countries to regulate social media companies, many tech titans are arguing they can self-regulate—and that artificial intelligence (AI) is one of the key tools to curtail online hate. Tech companies are painfully aware of the malicious use of their platforms.

In 2017, Facebook, Microsoft, Twitter and YouTube announced the formation of the Global Internet Forum to Counter Terrorism, which aims to disrupt extremist activities online. Twitter claims it used AI to take down more than 300,000 terrorist-related accounts in the first half of 2017.

Facebook itself acknowledges that it is struggling to make use of AI efficiently on issues surrounding hate speech.

How is the Christchurch Call expected to take the fight forward?

Amazon, Facebook, Google, Microsoft and Twitter adopted a nine-point action plan at a summit with world leaders in Paris in May 2019.

The "Christchurch Call to Action", spearheaded by New Zealand Prime Minister Jacinda Ardern came two months after the deadly attack on mosques in Christchurch, in which 51 people were killed.

The attack was live-streamed on Facebook - and the footage was widely shared - sparking wide-ranging condemnation of social media networks' ability to control the content shared on their platforms.

The Christchurch Call expands on the Global Internet Forum to Counter Terrorism (GIFCT) and builds on the companies' other initiatives with governments and civil society to prevent the dissemination of terrorist and violent extremist content.

Facebook announced it would set new rules for its live-streaming feature, but critics suggested Facebook could not be trusted to regulate itself: the inadequacy and lack of credibility of the self-regulatory approach adopted by the largest platforms, they argued, justify public intervention to make them more accountable.

The tech companies have committed to continued investment in technology that improves their capability to detect and remove this content from the internet, to updates to their individual terms of use, and to more transparency around content policies and removals.

Previous attempts did not succeed. What is different about this one?

While previous efforts to fight online "extremism" have not found great success, the Christchurch Call is different in two respects: it is more specific about removing content online, and it brings governments and tech companies to the table together.

The distribution of "extremist" content will now be expressly prohibited by the terms and conditions of use that users must accept before using the platforms. The companies also agreed to build new tools for users to report violent content.

The five firms have said they will for the first time work together on "crisis protocols", establishing with governments and other organisations a set of rules for responding to "active terror events".

What happened at El Paso?

In August 2019, a white supremacist walked into an El Paso Walmart and shot dead 22 people from both sides of the U.S.-Mexico border. Minutes before, a note written by the shooter that spoke of a “Hispanic invasion of Texas” was posted on a website called 8chan. Many such sites exist, and some big tech companies have not even removed this kind of content.

Big Tech needs to take more responsibility for limiting violence that begins online. Radicalization isn't happening only on fringe platforms like the one used by the El Paso shooter. Extremists of all kinds can easily exploit the reach, scale, and openness of even the most popular social media platforms such as Facebook and YouTube, using them as tools to recruit other extremists and spread hate.

India is also facing these challenges, with hate campaigns spread over WhatsApp having led to people being killed by mobs.