Global Voices is proud to publish writing, translation, and illustrations created by people, for people, and we expect our contributors to uphold that standard.
As tools based on large language models (LLMs), such as ChatGPT and DeepSeek, image generators such as Midjourney, and other forms of generative “AI” become more common and more deeply integrated into existing platforms and operating systems, it is important for Global Voices, as a media organization, to lay out a clear policy on their use on our site.
We have two primary reasons for limiting the use of LLMs in our work, both tied to our core mission:
- Trust. Large language models are not programmed to be factual. While their probability models sometimes produce correct information, accuracy is not one of their objectives, and examples of entirely false responses abound. As a news organization that abides by journalistic principles, we strive to report facts. Relying on LLMs to do our writing or translation, or on programs that generate graphics or illustrations from a prompt, would risk violating those principles and eroding the trust that we ask of our readers.
- Amplifying unheard voices. LLMs use existing data — for example, the digital texts they have been trained on — to calculate likely responses. The results they produce are therefore biased towards the most popular and abundant data available online. This, in the long term, has the effect of pushing the internet towards homogeneity, and minimizing and erasing outliers, including the less-heard voices that we as an organization are committed to amplifying. We strive to publish ideas and knowledge even — indeed, especially — if they don’t quite fit with the standard language or the standard ways of thinking. That’s far more important for us than something that “sounds” right but doesn’t express a viewpoint built from the experiences and insights of an individual human.
Beyond these reasons are other important considerations: the environmental impact of LLMs; the copyright issues raised by both their training and their outputs, and respect for the authors and illustrators whose work was used without permission; and the erasure of the human workers who keep these systems running behind the scenes, often for low pay and under poor conditions.
Finding a balance
We understand that LLMs seem to offer an easy way to write, draw, or translate, and that they are especially convenient for people writing in what may not be their first language, or who need to conform to unfamiliar style rules or publication standards. We also acknowledge that LLMs and the other technologies grouped under the banner of “AI” do not form a single, easily distinguished category, but rather a spectrum ranging from spelling and grammar checkers to prompt-based generation, and that they are often bundled into other programs, making it difficult to work entirely without them.
Moreover, we recognize that writers, illustrators, and translators have long used various tools to support their work, such as dictionaries, thesauruses, style guides, reference drawings and photos, and so on. Some of the applications referred to as “AI” work in a similar way: translators might look up possible words in an online translator instead of reaching for a dictionary; writers might run a spell check or search for synonyms.
However, just as we would not accept an article copied from an encyclopedia or Wikipedia (even with a few of the words changed), or a translation that replaced each word with the result of a dictionary search, we will also not accept writing, translation, or illustration that is automatically generated.
As a decentralized organization, we must be able to place a high degree of trust in our contributors. No existing technology can reliably identify LLM-generated writing or translation, or prompt-generated illustrations, and the tools that purport to do so are often plagued by the same ethical and environmental problems as LLMs themselves. We therefore reserve the right to subject writing, translation, or illustrations that show hallmarks of dependence on these technologies to strict review, and to request rewrites, reject publication, or remove articles altogether, as appropriate, should we find that these tools have been used in ways that contradict our mission.
Finally, we know that the technology is changing rapidly; this policy will probably need to change as well, and we will update it as needed.