Wars of the 21st century are often described as high-tech and increasingly dependent on Artificial Intelligence (AI): the US occupation of Afghanistan, Russia's invasion of Ukraine. Yet the latest Hamas attack on Israel also shows that over-reliance on drones, data, and digital tools can create devastating security loopholes.
To understand how AI shapes an asymmetric war in Ukraine against a much larger Russian aggressor, Global Voices talked to Anton Tarasyuk, a data and AI expert and the co-founder and Expertise Lead of Mantis Analytics, a Kyiv-based company. He is also a journalist who writes commentaries on current affairs.
The interview took place over email after an in-person meeting in Kyiv, and has been edited for style and brevity.
Filip Noubel (FN): How is AI used in disinformation warfare by Ukrainian actors? Who is using it — is it just the government or other players too?
Anton Tarasyuk (AT): Let's begin by acknowledging that the Russo-Ukrainian war stands as the most digitized and AI-dependent conflict in history. AI plays a significant role both on the battlefield and in the information domain. In fact, the full-scale war initially commenced not through kinetic warfare, but through information warfare. Before February 24, 2022, Russia was actively launching cyberattacks and hostile information campaigns in an attempt to sow confusion, fear, and distrust among Ukrainian citizens.
Were they successful? Strategically, the unfolding of the war indicates that they were not. However, especially in the early stages, they achieved tactical successes. Take, for instance, the “graffiti marks” campaign, which falsely claimed that supposed Russian agents were leaving specific graffiti marks on the streets to guide Russian artillery and air strikes. This may sound absurd, but people were indeed searching for these marks, and individuals could get into trouble if they were mistakenly associated with these imaginary “marks.” It has since become evident that it was a disinformation campaign, presumably of Russian origin, designed to spread chaos and panic.
The role of AI in such disinformation campaigns is substantial. With generative technologies, the cost of producing textual and visual content has been reduced to almost nothing, requiring only basic digital literacy. In such a context, only AI can effectively combat AI. No human mind can operate at the required speed. Therefore, there is no choice but to employ these technologies.
Regarding the users of our technology, we initially focused on the governmental sector, specifically the National Security and Defense Council of Ukraine. However, our primary aim is not limited to the government or security institutions. Information warfare is rapidly extending into the corporate sector. Billions of dollars are lost annually due to disinformation campaigns, fraud relying on deepfakes, and similar activities, all fueled by AI. The whole informational infrastructure that secures our informed decision-making in defense, politics, and business is at risk. This is why we are actively expanding our reach into the corporate sector.
FN: How is it used by Russia to impose its narratives not just in Ukraine, but elsewhere, including in the Global South?
AT: Many of the countries in the Global South are, with all due reservations, electoral democracies. This means that public opinion matters. It means that influencing public opinion is crucial. This, in turn, underscores the importance of informational campaigning. When you combine all of these factors with the historical skepticism toward the so-called “West,” it creates the perfect ground for Russians to disseminate their messaging.
Based on our assessment, Russia's informational campaigns generally revolve around fostering anti-Western sentiment, often framing Russia as the leader of an elusive “anti-Western coalition.” Remember Putin's sudden emergence with his “anti-colonial” rhetoric? It serves as a narrative tailored for the Global South.
To promote this messaging, Russia employs various tactics, resorting not only to malinformation (information that is true but harmful) but also to full-blown disinformation (information that is both false and harmful). Regarding Ukraine, the content they disseminate there is simply outrageous.
AI is poised to assume an increasingly pivotal role in this dissemination, especially as the traditional media infrastructure continues to erode. Is the West prepared? It had better be, before it's too late. Some hints at solutions lie here in Ukraine, since we were the first to confront the challenge of Russian aggression.
FN: What about face recognition? In which context is it used and do you see possible danger zones in its use?
AT: To be candid, in today's landscape, if you seek in-depth intelligence, you can uncover a wealth of insights by attentively monitoring what people are discussing, where they are doing so, and in what manner. This is precisely what we specialize in.
Visual content analysis will raise challenging questions, the kind you might categorize as "danger zones." Paris-based exiled Russian economist Sergei Guriev aptly terms certain countries "informational autocracies" or "spin dictatorships"; this has been the Russian model for manipulating the informational landscape.
The question for us is: what's the use of opposing these informational autocracies if we are simultaneously building a surveillance state? Hence, our commitment to democratic values. We are indeed very serious about it.
Nevertheless, and here's my futuristic prediction, some of our fundamental beliefs regarding transparency will need to be reimagined in the near future. Not because of the will of some company or institution, but because of what German philosopher Jürgen Habermas calls “The structural transformation of the public sphere,” caused by new technologies. Societies should prepare themselves for this shift.
FN: Is there up-to-date legislation in Ukraine about the use and control of AI?
AT: For a considerable period, Ukraine operated with minimal AI regulation. Currently, this subject is garnering increasing attention and debate. The Ministry of Digital Transformation recently unveiled a roadmap for AI regulation.
Given that many emerging technologies, particularly in the realm of defense tech, heavily rely on AI, the need to align our regulatory frameworks with those of the EU and the US is becoming progressively urgent. These considerations are far from abstract, as even fundamental business aspects such as securing venture capital may be contingent on this alignment.
What is clear is that we must chart a course that fosters AI innovation without stifling it through overregulation, which some experts argue characterizes the EU market. While the Silicon Valley mantra of “move fast and break things” may be too perilous for our context, as any disruption during times of conflict can carry life-threatening consequences, we also recognize the risk of China and Russia's centralized, top-down AI regulations.
Navigating a path forward for Ukraine presents a challenge. Will we rise to meet it? I am confident that we will, for we have no other choice.