India recently signed the ‘Christchurch Call to Action’, along with 16 other countries and eight tech companies, an initiative aimed at eliminating terrorist and violent extremist content online.
The Christchurch Call was formulated following the March 2019 terrorist attacks on the Muslim community of Christchurch, New Zealand. It contains voluntary measures ranging from service providers prioritizing the moderation of violent extremist content to developing an effective complaints and appeals process for the removal of terrorist and violent extremist content.
The aftermath of the Christchurch shooting.
At the same time, the Call recognizes the importance of a free, open and secure Internet and respect for human rights such as freedom of speech and expression. Countries around the world are taking steps to prevent the spread of terrorist and extremist content on online platforms: in April this year, the European Parliament approved the Terrorist Content Online Regulation in the wake of recent terrorist attacks, and closer to home, MeitY published draft IT intermediary guidelines to replace the existing guidelines and impose greater obligations on Internet intermediaries with respect to illegal content.
Attack on the Christchurch Mosque
The shootings in Christchurch were especially gruesome because the perpetrator filmed the attack and streamed it live, with the video spreading rapidly on Facebook and YouTube. Facebook closed the shooter’s account within an hour of the attack and reported that it had removed 1.5 million videos of the attack from its platform within 24 hours of the live stream, blocking 1.2 million of them at upload. Facebook also launched a series of measures to address any such event in the future, including a “one strike” policy that prohibits any Facebook user who commits the most serious violations of Facebook’s policies from using Live (Facebook’s live video streaming feature) for a specified period; this restriction extends to users who share links to statements from terrorist groups without any context.
Facebook announced that in the future it would also prevent violators from creating ads on Facebook. New Zealand has officially classified the Christchurch mosque attack livestream as objectionable, making it illegal in New Zealand to view, possess or distribute the video in any form, including on social media platforms; the Office of Film and Literature Classification found that the video was intended to glorify the attacks and to encourage its audience to perpetrate mass murder. The classification decision also contains an analysis of freedom of expression and states that a ban on the video is justified in these circumstances.
Police stand guard after Friday’s mosque attacks outside Masjid Al Noor in Christchurch
The Christchurch Call-to-Action Commitments
Under the Christchurch Call, the signatories have made voluntary commitments. For governments, these commitments include supporting frameworks (such as industry standards) to ensure that reporting on terrorist attacks does not amplify violent extremist and terrorist content, without affecting responsible coverage of such attacks.
Governments, together with online service providers, also commit to supporting smaller platforms in removing violent extremist and terrorist content, for example by sharing relevant databases of hashes; one example is the database developed by the Global Internet Forum to Counter Terrorism (GIFCT).
GIFCT is an industry-led initiative launched in June 2017 by Google, Facebook, Twitter, and Microsoft. The GIFCT database contains hashes (digital fingerprints of content flagged by a member company of the GIFCT consortium as terrorist content); this allows other online platforms to match content uploaded to their platforms against the hash database and quickly remove matching content.
This has proven useful in cases where terrorists, on discovering that their content has been blocked by the website where it was originally uploaded, re-upload it to other websites. One of GIFCT’s goals is to work with smaller technology companies to share best practices for disrupting the spread of violent extremist content; this is particularly important in the context of “outlinking”, the practice of terrorists posting links to terrorist content hosted on smaller platforms that lack the expertise and resources of larger platforms to block such content.
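To illustrate the mechanics, here is a minimal Python sketch of hash-based matching. Everything in it (the database contents, function names, and the use of SHA-256) is an assumption for illustration; GIFCT’s actual database relies on perceptual hashing techniques that can also match re-encoded or slightly altered copies, which a cryptographic hash cannot.

    import hashlib

    # Hypothetical shared hash database: fingerprints of content already
    # flagged as terrorist material by consortium members. The single
    # entry below is the SHA-256 of b"test", used purely for the demo.
    SHARED_HASH_DB = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(content: bytes) -> str:
        """Return a SHA-256 fingerprint of uploaded content."""
        return hashlib.sha256(content).hexdigest()

    def should_block_upload(content: bytes) -> bool:
        """Check an upload against the shared database before publishing."""
        return fingerprint(content) in SHARED_HASH_DB

    # A platform checks an incoming upload against the shared database.
    upload = b"test"  # stands in for the raw bytes of an uploaded video
    if should_block_upload(upload):
        print("Upload matches a flagged fingerprint; blocking.")
    else:
        print("No match; upload proceeds to normal moderation.")

Because byte-identical matching is trivially defeated by re-encoding a video, production systems favor perceptual hashes, under which visually similar files produce similar fingerprints.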
In the Christchurch Call, online service providers commit to taking effective notice-and-takedown measures, prioritizing the moderation of violent extremist and terrorist content (including through real-time review), and providing complaints and appeals processes through which users can contest a platform’s decision to remove their content or reject its upload.
Risks to freedom of expression inherent in automated content filtering
Efforts by technology companies to combat the spread of violent extremist and terrorist content have focused on developing automated tools to filter such content.
In the past, there have been cases where Internet platforms inadvertently removed content intended to spread awareness about war and human rights violations, mistaking it for terrorist and extremist content.
In August 2017, YouTube was reported to have removed thousands of videos documenting atrocities in Syria in its efforts to crack down on violent extremist and terrorist content; this had implications for criminal investigations and the prosecution of war crimes. The removals were the result of YouTube deploying new technology to automatically flag and remove content that violated its community guidelines.
While YouTube restored some of these videos after being notified by their creators, the incident highlights that the removal of terrorist content by online platforms cannot be left entirely to machine learning algorithms. One way to avoid such false positives is for users to provide context for an upload by including information in the summary and metadata tags, and by explicitly stating the intention behind uploading the content.
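To make this concrete, here is a minimal, hypothetical sketch of how a moderation pipeline could use such contextual metadata to route a flagged upload to human review instead of automatic removal. The field names, keywords, and threshold are assumptions for illustration and do not describe any platform’s actual system.

    from dataclasses import dataclass, field

    # Keywords that, when present in a summary or tags, suggest documentary
    # or awareness-raising intent rather than glorification. Purely illustrative.
    CONTEXT_KEYWORDS = {"documentation", "human rights", "war crimes", "journalism", "evidence"}

    @dataclass
    class Upload:
        summary: str
        tags: list[str] = field(default_factory=list)
        classifier_score: float = 0.0  # assumed score in [0, 1] from an automated filter

    def route(upload: Upload) -> str:
        """Decide what happens to an upload scored by the automated filter."""
        if upload.classifier_score < 0.8:  # below the (assumed) removal threshold
            return "publish"
        text = upload.summary.lower() + " " + " ".join(t.lower() for t in upload.tags)
        if any(keyword in text for keyword in CONTEXT_KEYWORDS):
            # Contextual signals present: escalate to a human rather than auto-remove.
            return "human_review"
        return "auto_remove"

    # A flagged video whose metadata states a documentary purpose.
    video = Upload(
        summary="Documentation of attacks on civilians, uploaded as evidence of war crimes",
        tags=["human rights", "Syria"],
        classifier_score=0.93,
    )
    print(route(video))  # prints "human_review" instead of silently removing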
Notably, the United States refused to sign the Christchurch Call, citing concerns that it would undermine rights under the First Amendment to the United States Constitution, namely freedom of speech.
Laws that mandate the use of automated tools to filter terrorist content
In April 2019, the European Parliament approved the EU Terrorist Content Online Regulation, which obliges hosting service providers to remove terrorist content, or disable access to it, within one hour of receiving a removal order from a competent authority. An earlier draft of the Regulation also contained a clause requiring hosting service providers to use automated means to identify and remove terrorist content; that clause is absent from the present draft. The Regulation is at the first-reading stage in the European Parliament.
In December 2018, MeitY published draft intermediary guidelines to replace the existing guidelines. The intermediary guidelines give online intermediaries a safe harbor against liability for the actions of Internet users, conditional on the intermediaries fulfilling certain obligations. The draft guidelines require Internet intermediaries to deploy automated tools to proactively identify and disable public access to illegal content. The proposed clause has been criticized by civil society organizations for its implications for freedom of speech and expression, as it may result in excessive censorship by Internet intermediaries; additionally, smaller platforms may not have the capacity and resources to develop such tools to comply with the law.
In the case of violent extremist and terrorist content, it is imperative to remove such content expeditiously, before terrorists can use it to further their cause. Given the scale at which content is disseminated online, governments must act quickly to stop its spread, which makes automated means of rapidly identifying and removing terrorist content desirable. The commitments in the Christchurch Call, however, are not binding on governments.
However, to give effect to the Christchurch Call’s commitment to real-time review of flagged content, the government of India may consider mandating the use of automated tools to remove terrorist content. It is important that India’s law on intermediary liability safeguards freedom of speech and expression; the government should require Internet intermediaries, especially smaller platforms, to use automated tools to filter violent extremist and terrorist content only when it is technically and economically feasible for them to do so.
The author writes on technology policy and holds an LL.M. degree from Cambridge.