Incident Debrief ︱ August 3 Bot Activity on 𝕏 Related to Rally in Kirkland Lake

On July 31, 2024, Conservative Leader Pierre Poilievre held a rally in Kirkland Lake as part of a tour of Northern Ontario. Three days later, on August 3, hundreds of 𝕏 accounts posted about the event, claiming to have attended the rally and using language like “buzzing with energy” and “as a northern Ontarian.” The timing and character of the messages, along with features of the accounts, quickly prompted widespread speculation about the origin of the posts: whether a specific political party was responsible or, given the accounts’ descriptions, locations, and previous tweeting behaviour, whether they were produced by a foreign entity interested in meddling in Canadian politics. The Canadian Digital Media Research Network (CDMRN) responded to this event with an investigation that produced six rapid incident updates. We conclude that investigation with this debrief.


Incident Assessment 

This minor incident was likely caused by a single entity or actor using a set of newly created bot accounts, with posts composed by either a low-quality (cheap) or poorly prompted generative large language model (LLM, often described generically as AI) [Incident Update (IU) 6]. This network of bots consistently posts about recent news topics and is only incidentally interested in Canada or Canadian politics [IU6].

As very few Canadians saw the original posts, the direct impact of the bot activity was negligible [IU5, IU6]. The subsequent discussion of the incident, however, garnered millions of views on 𝕏 and likely millions more through amplification by traditional Canadian media [IU2]. Ultimately, we estimate that several million Canadians heard about the incident [IU5].

The millions of views on 𝕏 were of posts that largely used the event to attack the Conservative Party and Poilievre for attempting to mislead Canadians about his popularity. Nearly half of the Canadians who have heard of the event believe a political party or partisan individuals were responsible, and of those who believe it was a political party, the vast majority point to the Conservative Party [IU5]. Despite this significant speculation and the associated accusations, we find no evidence that a political party or foreign entity employed this bot network for political purposes.


Lessons Learned

The Kirkland Lake bot incident should serve as a wake-up call. The event is best thought of as a test case or capacity-building exercise by some entity interested in developing the ability to mass-produce posts on social media platforms using a semi- or fully-automated data pipeline that incorporates current events and news. While this incident was minor and never posed a major threat, the democratic implications of a more sophisticated effort are significant and should be carefully considered in ongoing prevention and mitigation efforts. The incident highlights three significant vulnerabilities.

1. Current technology supports rapidly scalable information operations.

A powerful combination of cheap bot accounts (in both the time to activate and the money to purchase them) and accessible generative AI allows information operations to scale rapidly and respond dynamically to political debates and events. The tools are now available to mass-produce persuasive, completely artificial content and circulate it widely, and there are few guardrails in place [IU3]. We assess that even a modestly capable and resourced actor could launch an effective operation with relative ease.

2. A lack of cooperation and transparency from platforms makes us far more vulnerable. 

Bots are highly prevalent across social media platforms, particularly 𝕏, and consistently engage in political conversation [IU4]. While this bot activity does not appear to have a major direct impact, it is vital that platforms act responsibly and that independent researchers be given access to continuously observe, analyze, and monitor bot activity. As of the publication of this debrief, 𝕏 has not provided information about any active or passive measures taken to limit the bot activity or identify the parties responsible, despite a request from several Members of Parliament. Despite their role as stewards of the de facto public square, social media companies appear to be largely unequipped to address, or simply unconcerned with, the threat of information incidents in a meaningful way.

Given that social media platforms have repeatedly demonstrated a reluctance or inability to respond quickly and effectively to bot operations, there is an urgent need for independent, high-quality data access so that researchers and journalists can better respond to incidents. The continued erosion of transparency and data access by 𝕏, Meta, and TikTok is enormously problematic. Despite this, few Canadians perceive data access as a necessary part of a policy response [IU5]. While the Research Network has been able to provide analysis in this instance, there are clear limits to what is possible without data transparency from platforms. With 98% of the 300 accounts captured in screenshots having since been deleted, suspended, or banned [IU6], independent external research can only go so far.

3. The way our media and politics talk about information operations makes the problem worse.  

The rapid instrumentalization of the Kirkland Lake bot incident for partisan politics highlights a persistent gamesmanship in Canadian political discourse that threatens to amplify the impact of information operations. In the week following the bot activity, the bulk of the online discourse focused on partisan blaming based on inferences and suppositions about motives [IU6]. Evidence was remarkably absent from these accusations, including those made by elected politicians and political parties and then amplified by media organizations.

We do not have the necessary relationships across parties and between watchdogs and social media giants to effectively and decisively intercede in events like these. We are not prepared to responsibly contextualize, discuss, and reflect upon incidents where bots, generative AI, or foreign influence may be involved. 


Threat Assessment

Taken together, these vulnerabilities could enable a more concerted effort to manipulate Canadians at scale. While a wide range of operations is possible, a more sophisticated version of the Kirkland Lake bot incident could:

  • Engage in targeted harassment: Use bots to flood politicians, journalists, or activists with abusive or threatening messages, aiming to intimidate, silence, or push them off platforms, similar to documented efforts in previous disinformation campaigns (e.g., the Spamouflage campaign of 2023, or the well-documented silencing of public figures more generally).

  • Discredit public figures: Generate and circulate deepfake videos, audio, or other forms of synthetic media that falsely portray politicians, journalists, or other public figures in compromising situations, aiming to damage their credibility and influence (such events have begun to occur across social media platforms in other Western democracies).

  • Hijack important democratic moments: Launch coordinated campaigns during key political moments, such as elections or major policy debates, to shift the narrative or distract the public from significant developments, steering attention toward less relevant but emotionally charged issues (e.g., as seen during #TrudeauMustGo).

  • Impersonate trusted sources: Create and deploy bots that mimic the language and style of trusted Canadian media outlets or public figures, spreading disinformation that appears credible but ultimately undermines public trust in individuals, institutions, or elections (e.g., impersonations of the CBC).

  • Sow distrust and detachment: Widespread use of bots and frequent information operations can give the sense that no information should be trusted, leading Canadians to tune out or to place even less trust in established democratic entities and processes. Future efforts with these kinds of bots need not be more technically sophisticated; deployed at scale, they could further erode trust in the information environment, with negative consequences for democratic participation and for confidence in democracy and elections more broadly.


Learn more

IU6: Bots and LLMs
Core quantitative analysis of the bot incident on 𝕏 by MEO and the Network Dynamics Lab. Found that as many as 7,000 recently created bots may have been involved in the Kirkland Lake incident, all posting messages likely generated by an LLM responding to recent news stories. Views of the bot content were negligible, with subsequent discussion and accusations amplifying the incident’s impact.

IU5: Survey Findings: Kirkland Lake Bot Incident
Findings from a survey on awareness of and opinion about the Kirkland Lake bot incident reveal that most Canadians are unaware of the incident but are generally concerned about foreign interference and generative AI. 

IU4: Spot the Bot: The Presence of Suspected Bots on Canadian Politician Accounts
Researchers from MEO identified potential bot activity on the Facebook and 𝕏 accounts of Pierre Poilievre and Justin Trudeau, underscoring the prevalence of bots across social media platforms and the difficulty of accurately identifying them.

IU3: Exploring incident replicability using commercial AI tools
Researchers from Concordia University and the University of Ottawa found that large, free, commercial AI tools can be used to launch bot campaigns similar to the Kirkland Lake incident, raising concerns about the lack of guardrails around generative AI.

IU2: More Bot than Bite: A Qualitative Analysis of the Conversation Online
Qualitative analysis of discussion about the incident across social media platforms revealed that news outlets and NDP MPs were the “superspreaders” of the story, while the incident appears to have had little impact on Pierre Poilievre’s online engagement. 

IU1: Bot Campaign most likely the work of an amateur, reports CDMRN partner The Social Media Lab
Based on an early manual review of the incident, the Social Media Lab at Toronto Metropolitan University found a small, amateurish, and unsophisticated operation.

For media inquiries, please contact Isabelle Corriveau at isabelle.corriveau2@mcgill.ca.
