Verifying the 2020 presidential elections: An interview with the Taiwan Fact Check Center · Global Voices
Huang HungYu

Screen capture from Taiwan Fact Check Center.
Presidential election debates are usually occasions for candidates to explain their policy platforms and hone their presentation skills, but very few viewers actually check on the factual details of their speeches. However, this year's Taiwanese presidential election was different. For the first time, the Taiwan Fact Check Center fact-checked the live, televised 2020 presidential debates on December 29, 2019.
Fact-checking outlets are on the rise around the globe in response to misinformation, and Taiwan has joined the trend. The Taiwan Fact Check Center was established in July 2018, and since its inception it has conducted fact-checks on controversial topics, establishing its credibility among Taiwanese people. After a live debate among the three presidential candidates (current and newly re-elected Taiwan president Tsai Ing-wen, Kaohsiung Mayor Han Kuo-yu, and Chairman of the People First Party James Soong), the organization faced a real challenge: releasing a fact-checking report on the same night as the live debate.
Global Voices’ Chinese language editor Huang Hung Yu interviewed Summer Chen, the chief editor of the Fact Check Center, to learn more about the debate night.
Global Voices (GV): Why did the center decide to conduct simultaneous fact-checking on the presidential debate?
Summer Chen (SC): Public affairs are one of the major concerns for the Taiwan Fact Check Center. During the presidential election debate, candidates explain their stances and policy platforms on a number of social and political issues. All of these issues are within the arena of public affairs.
GV: Did you consult other overseas fact-checking organizations in preparation for the project?
SC: Last October, we contacted other fact-checking organizations at the 2019 forum on professional fact-checking in Asia. We found out that many overseas fact-checking organizations (in Indonesia, for example) collaborate with local political beat journalists to fact-check presidential debates, as those journalists are more familiar with political news and have ties with political analysts.
Originally, we also planned to work with other journalists, but we ended up doing the fact-checking on our own, as we needed to set a standard, a workflow and a list of topics for the team. It would have taken too much time to agree with other organizations on a common framework, which we also use to analyse election topics beyond the presidential debate.
GV: How did the center prepare to fact-check the presidential debate?
SC: We made a list of potential debate topics based on the candidates’ previous policy speeches: the 2020 presidential election policy presentations, the 2018 Kaohsiung mayoral election debate and previous presidential election debates. The topics covered economics, public security, transportation, culture, military affairs, cross-strait relations and so on. Then we divided the topics into content that could or could not be verified.
GV: How do you decide what can be verified?
SC: If the content is backed by official and authentic documents or has been reported by multiple media outlets, it can be verified. If it comes from an opinion report, we have to look for other media sources. For example, when we fact-checked a claim that “Taiwan legislators sing in Cross-Strait exchange trip in China”, we gathered news reports from the Chinese government-affiliated “China Review News” as well as reports from Taiwan's United Daily News, Liberty Times or Apple Daily as cross-references. And we only checked the factual aspects of the reports, not the opinion parts.
If the content is not backed by facts or historical events we cannot verify it. For example, the claim that “Han Kuo-yu is the biggest beneficiary of the internet army” cannot be verified, as there is no academic study or research about the exact definition of “internet army” (trolls). We cannot make up the definition.
In addition, we also analyzed the presentation style of the presidential candidates. Since Tsai Ing-wen is more careful in her presentation, much of her content can be checked. As for Han, his presentation is laced with strong opinions, and we could only verify specific topics. Hence, we cannot compare the credibility of the two candidates based on the number of their mistakes; mistakes cannot be measured like that.
GV: How did you conduct fact-checking simultaneously during the television debate?
SC: We had two fact-checkers taking turns listening to and transcribing the debate. Another verifier picked out the topics in the transcript, and the chief editor then decided if the content should go through the verification process. Once I made the decision to proceed, another reporter began the fact-checking process, first by looking into the reports and documents that we had prepared for the topics. If we did not have related information in our documentation, the reporter had to find another way to verify the content.
As our verification report was written by the team in a shared document, the chief editor did the first proofread. Then Hu Yuan-hui, our center's advisor and a professor at the Journalism and Communication Department of National Chung Cheng University, went through a second round of proofreading. In the process, if Professor Hu decided that the report needed more supporting information, he would work with the team to fill in the gaps. Finally, the report was published online.
GV: For the presidential election debate, the center used a different set of categories to flag or label verification results. Why did you make this adjustment?
SC: Usually we use “wrong”, “correct” and “partially wrong” to flag verified content. During our preparation, we felt that the “right” and “wrong” labels might create an anchoring bias for the audience. Hence, we replaced the flagging system with nine other labels, including “consistent”, “inconsistent”, “partially inconsistent”, “inconsistent and non-backed claim”, “impartial”, “non-backed claim”, “position shifted” and “correct but claim non-backed”. Our hope is that the audience will think more deeply about the content and make their own judgement.
GV: What challenges did you encounter during the verification process?
SC: During the elections, government bodies released a lot of data to back up certain claims. For example, the Kaohsiung city government released its achievement report to back up Han Kuo-yu’s speech. The Executive Yuan, the Ministry of the Interior and other administrative bodies also released data during the election. But in the past they did not release regular data reports, and hence we could not compare these figures with past records.
For example, when we checked Han Kuo-yu’s claim that 40,000 factories and companies had been shut down in 2019, the Ministry of Economic Affairs provided us with data as clarification. However, it did not provide previous data or explain the reason behind the shutdowns, so we flagged the content as “no previous data to back the comparison”.
GV: How did netizens react to the verification reports?
SC: After we published our report that night, netizens gave us a lot of feedback. In particular, audience members with professional backgrounds, including legal experts, left comments and pointed out some of our errors. We had to go through another round of verification in response to some of the comments. Some netizens also asked why we did not work on certain topics. After an internal discussion, we chose some of the suggested topics and published another round of verification reports on December 31.