So You Want to Conduct Open-Source Research

“Dreaming” by a4gpa. October 3, 2009. CC 2.0. Edited by Kevin Rothrock.

This article is part of a larger guidebook by RuNet Echo to help people learn how to conduct open-source research on the Russian Internet. Explore the complete guidebook at the special project page.

Before diving into the specifics of investigating open-source data on the RuNet (Russian-language Internet), it's useful to understand the general verification processes that are applicable to all sources of open-source intelligence (OSINT). For a comprehensive treatment of the verification of digital open-source intelligence, you may want to see the Global Voices social media verification guide or the Verification Handbook, written by journalists from a variety of traditional media outlets, such as the BBC, and emerging projects, like Storyful.

These general instructions address specific ways to assess the reliability of photographs, videos, and human sources. While most of the advice applies to any area, we focus specifically on Russian-language evidence and sources.


There are countless ways to verify an image, depending on the various elements of the source. One of the most common ways to verify a photograph is by geolocating the image, which involves confirming or rejecting a photograph's location by matching visible elements to outside sources, such as road signs, topographical features, and landmarks. Another way to verify claims about an image is to verify the time that it was taken, either through seasonal differences (for example, is snow visible in a photograph supposedly taken on the day after reports of heavy snow?) or specific times, by measuring observable shadows. These two verification methods—geolocation and “temporal” location—will be covered in extensive detail in a future installment of this series.

As detailed in the two broad verification guidebooks cited in our introduction, the quickest way to verify a photograph’s uniqueness is by conducting a reverse Google Image Search or a TinEye search. Let’s use a couple of Russia- and Ukraine-specific examples to verify media content through these reverse image searches.

As described by StopFake and other outlets, several pro-Russian and separatist media outlets have used decades-old photographs to depict what they claim are recent events in the war in eastern Ukraine. For example, in the article below on a pro-separatist website whose name means “Russian Spring,” a photograph of military equipment on fire is juxtaposed with a story about fighting in eastern Ukraine on June 25. The photograph is presented without context, and two logos are plastered onto it, including the site’s Web address, implying that the site’s photographers recorded the image themselves or hold the rights to the photograph.


This photograph, as many have pointed out, is actually from Tiananmen Square in 1989. A reverse Google Image Search of this image will reveal this fact, when the search dates are restricted to results published before the Ukrainian conflict.

How do you conduct such a search?

Go to Google Images and click the camera icon to search by image:


You may need to take a screenshot of the photograph you want to search. Either upload the photograph (a screenshot or the actual saved file will both work) or enter a URL that leads directly to the image file (ending in .jpg, .gif, .png, and so on—not the entire webpage that hosts the image). When conducting a reverse image search, be sure to find the highest-resolution copy available, as it will return the most results.
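The URL-based variant of this search can be scripted. The sketch below builds a search-by-image link for a directly hosted image file; note that the `searchbyimage` endpoint pattern is an assumption based on how such links commonly look, and the example image URL is a placeholder—the official route is simply pasting the URL into the camera-icon dialog by hand.

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search link for a directly hosted
    image file (a .jpg/.png URL, not the page embedding it).
    Endpoint pattern is an assumption, not an official API."""
    return "https://www.google.com/searchbyimage?" + urlencode(
        {"image_url": image_url}
    )

# Placeholder image URL, for illustration only.
link = reverse_image_search_url("https://example.com/photos/tank-fire.jpg")
print(link)
```

The `urlencode` call percent-escapes the image URL, so it survives being embedded as a query parameter.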



After modifying the search's date parameters, it becomes clear that this image likely comes from China and appeared online before the conflict in eastern Ukraine.


TinEye works in roughly the same way as a reverse Google Image Search, but will usually return fewer results. As with Google Image Search, you can either upload an image file or provide a URL.


There are various free tools that will analyze a photograph’s metadata and compression information, allowing further verification of an image’s veracity. The results are not always clear, however, and depend on the copy of the file uploaded. For example, a JPEG that has been resized, recompressed, and changed from the original file will yield much less reliable data than the original full-resolution image recorded by a camera. Still, if you would like to analyze an image through ELA (error-level analysis) or its metadata, visit a service such as Izitru.

Instead of digital forensic tools, you can also rely on the naked eye. The easiest way to detect a Photoshopped image is by examining an image's shadows and reflections. The Moscow Times, StopFake, and many others have noted that Russia’s Investigative Committee used a Photoshopped image of Donetsk in flames on the cover of a book entitled The Tragedy of Southeast Ukraine. When examining this image—which was also used in a news article on the site—it is clear that the reflections in the water are incorrect.


The incorrect reflections in the water are evident across from the rising flames, especially the leftmost column of fire and smoke. Note how a building is reflected in the water beneath the left-most flame, while the fire is not. As discovered by StopFake, the flames were artificially added to a normal picture of the Donetsk cityscape.

Just as with many hoaxes in the English-language corners of the Internet, a simple reverse Google Image Search or critical eye can stop many falsehoods in their tracks.


Unlike with images, there are no services available for reverse video lookup. This means that when you find a video, there is no fast way to verify its original source. However, there are still ways to carry out a reverse verification of a video to see whether supposedly new evidence is actually rehashed material.

The closest thing to a reverse video search is by capturing a screenshot of the video at a particular moment, and then conducting a reverse Google Image search of that screenshot. Additionally, you can take a screenshot of the video’s thumbnail and reverse search it, hoping to find a copy of the video elsewhere on YouTube or another video hosting service. A more detailed description of how YouTube generates its thumbnails can be found here.
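Because YouTube generates its thumbnails at predictable URLs, you can fetch them directly for reverse searching instead of screenshotting the player. This sketch assumes the well-known `img.youtube.com/vi/<ID>/<n>.jpg` pattern, where `0.jpg` is the default thumbnail and `1.jpg`–`3.jpg` are stills taken from the video itself; the video ID used is a placeholder.

```python
def youtube_thumbnail_urls(video_id: str) -> list:
    """Return the standard auto-generated thumbnail URLs for a video.
    URL pattern: 0.jpg is the default thumbnail; 1-3.jpg are
    frames pulled from the video itself."""
    base = "https://img.youtube.com/vi/" + video_id
    return [base + "/" + name + ".jpg" for name in ("0", "1", "2", "3")]

# Placeholder video ID, for illustration only.
for url in youtube_thumbnail_urls("dQw4w9WgXcQ"):
    print(url)
```

Each of these four images can then be dropped into a reverse Google Image Search or TinEye, exactly as with an ordinary photograph.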

This video on VKontakte, Russia's most popular social network, is titled “SHOCK!! July 19, 2014 at 3pm, a BRUTAL shelling from a GRAD of the city of Gukovo.” The video, posted on July 19, 2014, is apparently from Russian state television channel Rossiya 2, and shows soldiers loading missiles into a Grad multiple launch rocket system and firing them. Without any specific identifying features, it is hard to verify the time, location, and original source of this video based on context clues alone. However, by taking screenshots of particular moments in the video, we can find the original video source.

The best moments to screenshot are the very beginning of the video and any distinctive or unique moments. The first video frame does not return any helpful results, but a screenshot of the moment of the missile launch yields multiple search results:



Following the first link gives us a video uploaded in October 2013 of Russian soldiers conducting training exercises in 2009 with the Grad multiple-launch rocket system. Clearly, this video does not show Ukrainian soldiers shelling Russia on July 19, 2014, as the original user claimed.

Once you verify that a video is unique, the location and time of the video should be verified to the greatest extent possible. In April 2015, wildfires ravaged much of Siberia, including the region known as the Zabaykalsky Krai. A video emerged on YouTube and Russian social networks, including Odnoklassniki, showing a hellish scene of cars driving through thick smoke. Reverse image searches did not reveal any duplicates of this video after it was posted on April 14, meaning this video could not be verified or debunked using this method alone. The time stamp on the video itself checks out, showing April 13, 2015; however, this information could be modified or digitally added, and is thus not a reliable means of verification. An inspection of the license plates in the video showed that the code was 75 (top right, after НК):


Cross-referencing license plate codes in Russia shows that this code is indeed local to the Zabaykalsky Krai, lending additional credibility to the video. Additionally, the user who uploaded the video to YouTube has one other upload from months before the wildfire video—a local news broadcast segment for Khakassia, located not far from the Zabaykalsky Krai. With all of these factors combined, there is no open-source information available that leads us to think this video was fabricated or falsely represented.
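This cross-referencing step is just a table lookup. The sketch below uses a tiny excerpt of Russia's regional plate codes—enough to check the “75” seen in the wildfire video; the real table has dozens of entries, and many regions have multiple codes.

```python
# A small excerpt of Russia's regional license-plate codes; the full
# table is much longer, and some regions have several codes.
PLATE_REGION_CODES = {
    "75": "Zabaykalsky Krai",
    "19": "Republic of Khakassia",
    "77": "Moscow",
}

def plate_region(code: str) -> str:
    """Look up the region for a plate code; unknown codes fall through."""
    return PLATE_REGION_CODES.get(code, "unknown code")

print(plate_region("75"))  # → Zabaykalsky Krai
```

A code that matches the claimed location does not prove the video, but a mismatch is a strong red flag.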

Vetting Sources

Verifying the reliability of sources on social media is not too different on the RuNet than it is on the rest of the Internet, but there are some special considerations to keep in mind. While there are a number of “bots” that send out identical tweets in every language, there is an especially large concentration of them on Twitter writing in Russian. This trend is particularly pronounced around politicized events, as both pro-Ukrainian and pro-Russian headlines are spammed across hashtags and keywords, making open-source investigation more difficult.

In the screenshot below, we see that an identical tweet has been sent by dozens of Twitter bots at the same time. The tweet reads “A source says a bomb threat has been called in at two hotels in Moscow,” with a number of people with generic Russian names spreading the information.


This bomb threat did actually take place, but these bots are not reliable sources of information. So, how do you find out if a source is a real person, or just a bot? Other than the context clues of the account’s tweets, such as sending out thousands of tweets with only a handful of followers, you can reverse image search the user's avatar.
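The “thousands of tweets with only a handful of followers” context clue can be expressed as a crude ratio check. This is an illustrative heuristic only—not a real bot detector, which would weigh many more signals (account age, avatar reuse, posting cadence)—and the thresholds are arbitrary assumptions.

```python
def looks_like_bot(tweets: int, followers: int) -> bool:
    """Crude heuristic: flag accounts that tweet heavily while
    attracting almost no audience. Thresholds are illustrative
    assumptions, not validated values."""
    if followers == 0:
        return tweets > 100
    return tweets / followers > 50 and followers < 100

# Heavy output, tiny audience -> suspicious.
print(looks_like_bot(tweets=5000, followers=12))
```

An account flagged this way still deserves the avatar reverse-search described next before any conclusion is drawn.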

Most bots and other unreliable sources—both in Russia and elsewhere—will use stolen photographs of other people as their avatars. For the Twitter bot @PatnietecnAmwa (or “Mashka Ya,” indicating that the woman’s name is Mariya), a quick investigation of the account's avatar reveals that the Twitter account is likely not operated by a human.


After copying the URL of the avatar (or just clicking “Search Google for this image” in the Chrome Web browser), paste the link into a reverse Google image search to find similar pictures elsewhere online.


Here, we see that the avatar of “Mashka” is actually a woman named Dasha Makarova, and that the Twitter user’s avatar was almost certainly stolen from the woman’s VKontakte page.

To find the original source of this tweet, search the headline on Twitter and find the first person who tweeted those exact words on the day of the flood of bot tweets. The original source of this tweet comes from @_ainequinn, a more reliable user than the other bots, as he has over a thousand followers.


However, we still must verify the information in this tweet, even if the source seems to be reliable. The photograph shared by this user shows a scene that seems to fit the situation: emergency responders, a bomb-sniffing dog, and a truck with the word “MOSCOW” (Москва) on its side. A reverse image search reveals, however, that this photograph is commonly reused in reports about emergency responses in Moscow, appearing in various news stories as far back as 2011.


While the more human-seeming Twitter account appears to be a more reliable source of information than a horde of bot accounts, the photograph that accompanied the tweet is generically used and not truly from the scene of the bomb threat. Thus, as we see from these examples, it is always necessary to verify specific pieces of information using all available means.

The next installment in this guidebook will be an introduction to Russian social networks and the unique Russian and Ukrainian usage of Facebook and Twitter.
