Russian Censors Launch Automatic Online Media Monitoring System to Spot ‘Extremist’ Content · Global Voices
Tanya Lokot

Akin to Foucault's panopticon, Russian censors are now monitoring online media for forbidden content around the clock using an automated system. Images mixed by Tetyana Lokot.
A system that automatically scans the content of online media outlets for forbidden and illegal material is now operating in 19 regions across Russia.
Roscomnadzor has launched the algorithm in beta testing mode, but Aleksandr Zharov, head of the Russian media watchdog, told the state news outlet Izvestia that the system would be rolled out full-scale across all of Russia before the end of 2016.
The new system will be primarily used to analyze Russian media websites to spot content that is considered illegal under Article 4 of the Russian mass media law, which covers pornography, calls to violence or terrorism, extremist materials, and profanity. Zharov told Izvestia that even in testing mode, the algorithm has enabled the censors to find “twice as many” violations as before.
“We won't discipline anyone yet, since the system is new. We do ask that violations be removed, as I think it will improve the quality of media all over Russian territory,” Zharov said.
The hardware and software used by Roscomnadzor analyze only the textual content of media websites. For images and other multimedia content, the algorithm examines the language describing the file, such as captions. If a caption suggests that a file might contain forbidden information, the system forwards it to the watchdog's human analysts, who examine the file manually.
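Roscomnadzor has not disclosed how its system works, but the workflow described above amounts to keyword-based triage: scan a file's caption text for "marker" terms, and push any matches into a queue for human review. A minimal, purely illustrative sketch of that idea might look like the following (the marker list, captions, and function names here are all hypothetical assumptions, not details from Roscomnadzor):

```python
# Hypothetical illustration of caption-based triage; the real
# Roscomnadzor algorithm is unknown and likely far more complex.

# Assumed "marker" keywords (placeholder list, not the real one).
FORBIDDEN_MARKERS = {"extremism", "terrorism"}

def flag_for_review(caption: str) -> bool:
    """Return True if a caption contains any marker keyword."""
    words = set(caption.lower().split())
    return bool(words & FORBIDDEN_MARKERS)

# Example captions (invented for illustration).
captions = [
    "A rally in central Moscow",
    "Video calling for terrorism",
]

# Only flagged items would go on to human analysts.
review_queue = [c for c in captions if flag_for_review(c)]
```

In this toy version, only the second caption lands in `review_queue`; everything else passes through untouched, which matches the reported division of labor between the algorithm and the watchdog's human analysts.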
Comments on media websites are also part of the automatic monitoring: according to Roscomnadzor, if the system finds comments containing illegal content, the watchdog informs the media outlet, which then has 24 hours to delete the offending comment, as prescribed by a ruling of the Russian Supreme Court. “And delete them they do,” Zharov said.
Previously, Roscomnadzor had ambitious plans to monitor all of the Russian Internet for extremist materials, but they simply didn't have enough funding. “We realized that given the current situation with available budget funds, we can't take on all of the Internet,” Roscomnadzor's head confessed to Izvestia. “So we analyze the online media and don't touch the rest of the (Internet) space.”
Little is known about the software and hardware behind the automatic monitoring system. According to Roscomnadzor, its “analytical hub” is based at Russia's General Radio Frequency Center, but the algorithm itself remains a mystery.
It was in 2011 that Roscomnadzor first issued a tender for developing an automated search and analysis system for use on the RuNet. A company called “DataCenter” won the tender, but Roscomnadzor was unhappy with its offering, finding it incompatible with the technical brief.
In the spring of 2014, Russian media again reported on the media regulator's intentions to automate its searches for profane content and extremist material on media websites and social networks. At the time, Roscomnadzor was using a third-party system from the “Safe Internet League” to monitor offensive content such as child pornography and content promoting suicide, but wanted to develop an in-house custom solution to also include monitoring of calls to extremism and violence.
In April 2014, Roscomnadzor head Zharov told the media their system would be “self-learning” and would use “key markers” to spot extremist materials. Zharov said the watchdog had had offers from Russian media analytics powerhouses “Medialogia” and “Ashmanov and partners,” but ended up picking a third (hitherto unknown) company to develop their system.
According to Russian law, Roscomnadzor issues a warning to a media outlet for a first-time offense of publishing forbidden content online. After a second warning to the same media outlet, the media watchdog can take the matter to court and demand that the media outlet's license be revoked.