Monitoring Online Discourse During An Election: A 5-Part Series

PART I: INTRODUCTION

Online interference in elections is a reality of 21st-century politics. Social media disinformation campaigns have targeted citizens of democracies across the globe, often dramatically shifting public perceptions and poll results.

Disinformation: False information spread with the intent to deceive.

Misinformation: Inaccurate information spread without an intent to deceive.

Political campaigns, some less committed to accuracy than in the past, pay for online ads microtargeting particular demographics. Fake news, deliberately fabricated, edges into Facebook feeds alongside legitimate sources and is shared by unsuspecting users. Fake Twitter accounts push extreme views, shifting and polarizing public discourse. And fake news travels fast: according to Elodie Vialle of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

“Domestic and international, state and non-state actors manipulate information online in order to shape voters’ choices or simply confuse and disorient citizens, paralyze democratic debate and undermine confidence in electoral processes.”[2]

This phenomenon has developed so rapidly, and is so pervasive, that legislators have struggled to work out how to regulate it. NGOs and governmental agencies have stepped into the gap. Their primary weapon is social media analytics, powered by AI: data science can locate trolls and fraudulent accounts, and programs can be trained to flag potential bots and anomalous political content.[3]
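
To make this concrete, here is a minimal sketch of how such a bot classifier might be trained, in Python with scikit-learn. The features, labels, and review threshold are illustrative assumptions, not any particular organization's method.

```python
# Minimal sketch of a bot-detection classifier. The features, labels,
# and threshold below are illustrative assumptions only.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [posts_per_day, followers_to_following_ratio, account_age_days, link_fraction]
X_train = [
    [480.0, 0.01,   12, 0.95],  # high-volume, new account: bot-like
    [  3.5, 1.20, 2100, 0.10],  # typical human posting pattern
    [250.0, 0.05,   30, 0.80],
    [  1.2, 0.90, 3650, 0.05],
]
y_train = [1, 0, 1, 0]  # 1 = suspected bot, 0 = human (human-labelled)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score unseen accounts; anything above the threshold is queued for
# human review rather than automatically labelled a bot.
candidates = [[300.0, 0.02, 20, 0.90]]
for prob in model.predict_proba(candidates)[:, 1]:
    print(f"bot probability {prob:.2f}, flag for review: {prob > 0.7}")
```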

Many of these initiatives are based in the European Union. They track disinformation produced by local and/or foreign actors. Here are a few such organizations, with a brief summary of their work:

Debunk.eu (Lithuania)

Uses AI to analyse 20,000 online articles a day, using variables such as keywords, social interaction, and publication across multiple domains. The 2% deemed most likely to be disinformation are then analysed by volunteer fact-checkers, and journalists write rebuttals of the most egregious items.
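
The triage logic itself is simple to picture: rank a day's articles by model score and keep the top two percent for human review. The Python sketch below assumes a stand-in scoring function (the keyword counter is purely hypothetical) and shows only the routing step, not Debunk.eu's actual model.

```python
# Sketch of Debunk.eu-style triage: of a day's scored articles, only the
# top 2% go to volunteer fact-checkers. `score_article` stands in for
# whatever model actually produces the ranking.

def triage(articles, score_article, fraction=0.02):
    """Return the top `fraction` of articles by disinformation score."""
    ranked = sorted(articles, key=score_article, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]

# Toy scorer: counts suspicious keywords (purely illustrative).
SUSPECT_KEYWORDS = ("hoax", "rigged", "banned video")

def toy_score(article):
    text = article["text"].lower()
    return sum(text.count(kw) for kw in SUSPECT_KEYWORDS)

day = [{"url": f"https://example.org/{i}", "text": "..."} for i in range(20_000)]
flagged = triage(day, toy_score)  # ~400 articles handed to fact-checkers
```
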
Prague Security Studies Institute (Czech Republic)

Uses the web crawler Versus to monitor suspicious sites, starting four to five weeks before an election; manual coders then analyse the content using variables such as message type, sentiment, and number of shares. The Institute produces weekly summaries of its findings, which are distributed to media outlets.
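
As a rough illustration of the manual-coding stage, the Python sketch below defines a coded record using the variables mentioned above and rolls a week's records into summary counts. The field names and values are assumptions, not the Institute's actual schema.

```python
# Hypothetical coded record for one crawled item, plus a weekly roll-up.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedItem:
    url: str
    message_type: str  # e.g. "conspiracy", "smear", "misleading-statistic"
    sentiment: str     # e.g. "negative", "neutral", "positive"
    shares: int

def weekly_summary(items):
    """Aggregate coded items into counts suitable for a media briefing."""
    return {
        "items_coded": len(items),
        "by_message_type": dict(Counter(i.message_type for i in items)),
        "by_sentiment": dict(Counter(i.sentiment for i in items)),
        "total_shares": sum(i.shares for i in items),
    }

week = [
    CodedItem("https://example.cz/a", "conspiracy", "negative", 1200),
    CodedItem("https://example.cz/b", "smear", "negative", 340),
]
print(weekly_summary(week))
```
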
Computational Propaganda Project at the Oxford Internet Institute (UK)

Focuses on disinformation campaigns and social media bots. Its in-house platform scrapes public posts, which are then classified by human coders familiar with the monitored state's political culture. A Junk News Aggregator also tracks fake stories spreading on Facebook.
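
Tracking the spread of junk stories can be approximated by counting how often links from a watchlist of known junk domains appear in public posts. The domains and post format in this Python sketch are invented; it is not the Aggregator's implementation.

```python
# Count shares of links from a (hypothetical) watchlist of junk domains.
from collections import Counter
from urllib.parse import urlparse

JUNK_DOMAINS = {"dailybuzz-news.example", "patriot-eagle.example"}

def junk_story_counts(posts):
    """Tally how often each junk-domain link appears across posts."""
    counts = Counter()
    for post in posts:
        for link in post.get("links", []):
            if urlparse(link).netloc in JUNK_DOMAINS:
                counts[link] += 1
    return counts.most_common()

posts = [
    {"links": ["https://dailybuzz-news.example/shock-story"]},
    {"links": ["https://dailybuzz-news.example/shock-story"]},
    {"links": ["https://reputable.example/report"]},
]
print(junk_story_counts(posts))
```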

Other agencies monitor social media around elections in Africa, Eastern Europe, and the Americas. Here's one example:

Getúlio Vargas Foundation – Digital Democracy Room (Brazil)

During the 2018 elections, the Digital Democracy Room (DDR) tracked data from Twitter, Facebook, and YouTube to analyse bot activity and international influence. Its Twitter analysis was facilitated by the ease of API access, but was hampered by a lack of access to data from WhatsApp, which is increasingly popular in Brazil.
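
For a sense of what that kind of Twitter collection looks like, here is a sketch using the open-source tweepy library against the v2 recent-search endpoint. The bearer token and query are placeholders, and Twitter's access terms have changed considerably since 2018.

```python
# Pull recent election-related tweets for downstream bot analysis.
# Requires: pip install tweepy (v4) and a valid API bearer token.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

response = client.search_recent_tweets(
    query="eleições -is:retweet lang:pt",  # illustrative query
    max_results=100,
    tweet_fields=["author_id", "created_at", "public_metrics"],
)

for tweet in response.data or []:
    # Bot scoring and influence mapping would start from records like this.
    print(tweet.author_id, tweet.created_at, tweet.public_metrics)
```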

Here in Canada, KI Design, a big data analytics and research firm, built analytics to detect and identify dis- and misinformation around the 2019 Canadian federal election.

KI Design (Canada)

KI Design utilized full firehose access to Twitter, as well as posts and comments on online news sites, blogs, forums, social networks, Facebook pages, Reddit, Tumblr, and YouTube. Using AI analytics and classification mining, we were able to identify disinformation and misinformation sources, discourses, content, and frequency of posting. We also mapped relationships between disinformation sources and within their content.
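
One way to picture that relationship mapping is as a directed amplification graph: an edge from A to B whenever account A reposts account B. The Python sketch below uses networkx and invented account names; it is a simplification for illustration, not KI Design's actual pipeline.

```python
# Build a who-amplifies-whom graph and rank accounts by how often
# their content is amplified. Edge data is invented for illustration.
import networkx as nx

# (amplifier, original_source) pairs, e.g. retweets or reposts.
amplifications = [
    ("acct_a", "acct_x"), ("acct_b", "acct_x"),
    ("acct_c", "acct_x"), ("acct_a", "acct_y"),
]

G = nx.DiGraph()
G.add_edges_from(amplifications)

# In-degree centrality highlights the most-amplified sources.
for account, score in sorted(nx.in_degree_centrality(G).items(),
                             key=lambda kv: -kv[1]):
    print(account, round(score, 3))
```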

The remainder of this series will dig deeper into how to monitor electoral disinformation, and the issues and challenges involved.

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational Issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues

[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019; online at: https://moderndiplomacy.eu/2019/03/08/artificial-intelligence-and-disinformation-examining-challenges-and-solutions/.

[2] Open Society, “Experiences of Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions,” May 2019; online at: https://www.opensocietyfoundations.org/publications/social-media-monitoring-during-elections-cases-and-best-practice-to-inform-electoral-observation-missions.

[3] European Parliamentary Research Service, “Regulating disinformation with artificial intelligence,” March 2019; online at: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf.