Netizen Report: ‘Terrorist Threat’ or Political Speech? States Target Social Media Post-Paris

Demonstrators in San Francisco, USA show solidarity with protesters in Egypt, February 2011. Photo by Steve Rhodes (CC BY-NC-ND 2.0)


Global Voices Advocacy's Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world.

While the specter of ISIS is nothing new for much of the Arab region, the November attacks on Paris and Beirut brought heightened urgency to ongoing global debates on violent extremism. As in the past, governments have targeted social media, seeking to extinguish online activities of extremist organizations, but also of non-violent political activists.

Saudi Arabian officials have threatened to sue anyone who likens the oil-rich kingdom's penal system to that of ISIS on social media. This comes on the heels of a death sentence handed to poet and artist Ashraf Fayadh last month. The sentence was criticized far and wide, with many taking to social media and comparing Saudi Arabia's penal code and punishments to those of ISIS. Using the hashtag #sosuemesaudi, critics pointed to various parallels between the two entities, including their issuance of death sentences for individuals convicted of adultery, treason, blasphemy, and acts of homosexuality.

Meanwhile, Russian social media user Oleg Novozhenin was sentenced to one year in a penal colony after a local municipal court found him guilty of distributing “extremist materials” on social networks. Local media reported that Novozhenin had posted files “promoting the activity” of Right Sector, a Ukrainian nationalist organization that is currently banned in Russia.

And in Bangladesh, authorities have continued the ban on Facebook, Viber, WhatsApp and various other social messaging applications that began on November 18. The ban was instituted on “security grounds” ahead of a Supreme Court ruling that upheld death sentences for two political figures convicted of genocide and rape during Bangladesh's 1971 war for independence from Pakistan. The defendants’ political supporters, some of whom belong to violent extremist groups, had threatened public unrest in response to the sentences. When citizens turned to VPNs, proxy networks and tools like the Tor browser to access social platforms, State Minister for Post and Telecommunications Tarana Halim publicly condemned these activities as “illegal.” Users have reported that their telecommunications providers are warning them of heightened monitoring of such tools.

Technology experts and cryptographers around the world are also being asked to change their systems in an effort to stop or monitor the online activities of violent extremist groups. In a recent blog post, Lebanese developer Nadim Kobeissi, best known as the lead developer of the encrypted chat program Cryptocat, reflected on such questions from state actors and media alike:

A simple mention of my encryption software in an Arabic-speaking forum is enough to put me on the receiving end of press inquiries such as “are you aware of any terrorists using your software? Do you feel it’s your responsibility to monitor terrorist activity?”

In this rush to blame a field that is largely unknowable to the public and therefore at once alluring and terrifying, little attention has been paid to facts: The Paris terrorists did not use encryption, but coordinated over SMS, one of the easiest to monitor methods of digital communication. They were still not caught, indicating a failure in human intelligence and not in a capacity for digital surveillance.

China cuts mobile phone service for ethnic minorities

Users of circumvention technologies in China’s Xinjiang region may have their mobile phone accounts shut down, with several affected users reporting receiving text messages saying “Due to police notice, we will shut down your cellphone number within the next two hours in accordance with the law.” People who use VPNs or foreign messaging software like WhatsApp or Telegram, or who have not linked their identities to their accounts, have been implicated, according to the New York Times. Xinjiang, a region in China's far west, has frequently served as a testing ground for state censorship and surveillance practices. In the past, the government has shut down Xinjiang’s Internet services following periods of violence between its Uighur ethnic minority and majority Han Chinese.

Indonesia struggles to define hate speech

Indonesian police issued new recommendations on the management of hate speech, linking it to the criminal code in a way that some worry could threaten free expression. According to Inspector General Anton Charliyan, head of the National Police’s Public Relations Division, “We cannot afford to let these new technologies and digital tools to be misused and abused.” Legal aid group LBH Pers responded: “The application of the law here could result in wrongful arrest…it is better to separate defamation from hate speech, so that police officers don’t end up abusing their authority.”

Russia tightens restrictions on mobile SIM cards

Russia may soon make buying a mobile SIM card even more difficult than it already is. It's currently illegal to purchase SIM cards anonymously. Citing a “growing terrorist threat,” authorities may limit the term of mobile service contracts for foreigners in Russia, making contract extensions contingent on paperwork confirming that the holder is permitted to extend their stay in the country. Law enforcement agencies are also seeking tighter restrictions on sales of pre-paid mobile services that do not require signing a contract with a mobile provider.

Google goes to bat for fair use

Google announced plans to provide legal support to the creators of several YouTube videos that were taken down due to DMCA copyright claims, in what the company is calling an attempt to “protect some of the best examples of fair use on YouTube.” Its hope is to develop a “demo reel” for the site to help users and copyright owners better understand what fair use looks like online. Currently, YouTube uses a digital fingerprinting system called Content ID to identify allegedly infringing material, scanning uploads against reference files submitted by content owners and automatically taking down matches, but it’s unclear whether this system would be affected by the move.

New Research

Ellery Roberts Biddle, Sam Kellogg, Weiping Li, Hae-in Lim, and Sarah Myers West contributed to this report.
