Incident Update 4︱Spot the Bot: The Presence of Suspected Bots on Canadian Politician Accounts

Authors: Sara Parker, Chloe Kemeni, Julian Lam

While we continue to study the bot incident related to Pierre Poilievre’s rally in Kirkland Lake, we also aim to provide additional context about bot activity in the Canadian information ecosystem. Specifically, we wanted to know to what extent bots were present in commentary directed at other politicians, and how these bots tend to engage across politicians and platforms. To do so, we examine potential coordinated inauthentic activity on the Facebook and 𝕏 accounts of Leader of the Official Opposition Pierre Poilievre and Prime Minister Justin Trudeau to better understand what bot activity looks like around prominent Canadian politicians. 

Key takeaways: 

  • Bots are not a new problem: they are, and have been, active on the Facebook and 𝕏 posts of both Pierre Poilievre and Justin Trudeau. 

  • It is nearly impossible to determine with certainty if an account is a bot. 

  • Bots may be more common on 𝕏, but harder to identify on Facebook. 

  • Suspected bots do not appear to meaningfully influence the conversation around either politician. 


In light of the Kirkland Lake bot incident, we investigated the extent to which bots are consequential for the Canadian information ecosystem by evaluating the presence of suspected bots on the posts of high-profile Canadian politicians. We began by identifying posts on Facebook and 𝕏 by Pierre Poilievre and Justin Trudeau with an above-average number of comments, under the assumption that potential bot activity would concentrate on posts with high engagement and a high volume of comments. We were specifically interested in potential bot activity within comments and replies, in order to understand what kinds of bots engage with Poilievre and Trudeau and how their phony content may affect conversation among real users. In total, we evaluated hundreds of comments on 171 posts (53 on 𝕏, 118 on Facebook) from both leaders, published between March and July 2024. 

Bots on both Facebook and 𝕏 are difficult to spot, as they are designed to look like authentic users. It is usually impossible to reliably distinguish a bot from a real account unless you can identify coordinated activity among accounts, such as resharing nearly identical posts at the same time (as with the Kirkland Lake bots). For this reason, we can only suspect individual accounts of being bots. That being said, we found that many suspected bots shared similar characteristics on both platforms: 

  • The accounts may have been made very recently, such as within the past month or two. Some bots, however, are accounts that were created by authentic users years ago but have since been taken over. 

  • Potential bots will either not follow many accounts, or will follow thousands.

  • Potential bots may reply with comments irrelevant to the content of the original post. 

  • The accounts will not have a profile picture of a real person: the picture may be an AI-generated image or a drawing of a person, a meme, or a random image, or there may be no profile picture at all. 

  • The content they post may be highly inflammatory: it is often overtly racist, sexist, or extreme as a way of soliciting angry engagement. This is known as “ragebait.” 

  • Much of the content on bot accounts may be highly political but random – often resharing AI-generated content, appending strings of hashtags, repeating the same text across multiple posts, or seemingly resharing as many posts as possible. 
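The red flags above can be combined into a rough heuristic score. The sketch below is illustrative only: the `Account` fields, the thresholds (60 days; fewer than 10 or more than 5,000 follows), and the equal weighting are our assumptions, not any platform’s detection method, and a high score indicates suspicion, never certainty.

```python
from dataclasses import dataclass, field

# Hypothetical account record; field names are illustrative,
# not drawn from any platform's actual API.
@dataclass
class Account:
    days_since_creation: int
    following_count: int
    has_real_profile_photo: bool
    comments: list = field(default_factory=list)

def bot_suspicion_score(account: Account) -> int:
    """Count how many of the heuristic red flags an account trips.

    A higher score means more reasons for suspicion, never proof:
    real users can trip several of these flags too.
    """
    score = 0
    if account.days_since_creation < 60:     # made within the past month or two
        score += 1
    if account.following_count < 10 or account.following_count > 5000:
        score += 1                           # follows almost no one, or thousands
    if not account.has_real_profile_photo:   # meme, AI image, or blank avatar
        score += 1
    # Repeating the same comment across multiple posts hints at automation.
    if len(account.comments) != len(set(account.comments)):
        score += 1
    return score
```

In practice, a score like this would only triage accounts for manual review – exactly the kind of closer inspection described below – rather than label anything a bot outright.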

In addition to trying to assess the presence of bots, we explored the nature of their engagement – specifically how they engaged with the two politicians. While looking for potential bot activity on Facebook and 𝕏, we paid attention to comments that were particularly inflammatory or irrelevant, or that intuitively did not feel like they were written by a real person. We then inspected those accounts further to judge whether they might indeed be bots. 

We found evidence suggesting the presence of bots in the comments on 94 of the posts we reviewed (52 on 𝕏, 42 on Facebook) across both Poilievre’s and Trudeau’s accounts: 98% of 𝕏 posts and 35% of Facebook posts had potential bot activity in the comments. Facebook’s comparatively low rate of potential bot activity may be because Facebook is particularly good at removing bots, because it identifies potential spam comments and pushes them to the bottom of the comment section, or simply because Facebook has fewer bots. Meanwhile, the high rate of potential bots found on 𝕏 is likely due to the persistence of a tried-and-true fact of the platform (even when it was still called “Twitter”): bots are everywhere, and they are nearly impossible to prevent. 

The potential bots on 𝕏 that interacted with Poilievre and Trudeau were generally selling cryptocurrency or posing as online sex workers, while others were overtly political and posted very angry content. These posts were often nonsensical: either they were too short (e.g., simply the word “liar”), a repost of an otherwise irrelevant post (e.g., Jezuz Heist), or just absurd (e.g., RadioTown’s McDonald’s rant). In almost all cases, we found that the bots did not attract much engagement: perhaps most users intuitively recognized that the activity was not authentic, or they simply did not care. Either way, the suspected bots on the 𝕏 accounts of Poilievre and Trudeau do not appear to be successful in directly influencing the conversation on the platform.

We observed a different situation on Facebook. While we saw some “spam” comments that were unrelated to the content of the original post, we also found comments that were likely posted by bots but received many reactions. Notably, comments by potential bots that praised one of the leaders – e.g., “Pierre Poilievre is the next prime minister of Canada” – received hundreds of likes and “heart” reactions. Others that praised Trudeau – such as the example provided below, interestingly posted by a bot with the same name – received many “laughing” reactions. It is difficult to gauge how many of these reactions were authentic support for the potential bot’s comment, and how many of the “laugh” reactions were ironic – users recognizing that the comment was inauthentic and laughing at it. We often observed duplicates of these comments across multiple posts, but posted by different users, suggesting coordinated inauthentic activity on Facebook. While these comments received reactions, indicating that real users were interacting with the potential bots, they rarely received replies and so did not appear to influence the conversation about the politicians on Facebook.
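The duplicate-comment pattern described above – the same text appearing under multiple posts from different accounts – is the signal that comes closest to evidence of coordination. A minimal sketch of how such duplicates could be surfaced, assuming (purely for illustration) that comments are available as `(commenter, post_id, text)` tuples:

```python
from collections import defaultdict

def find_duplicate_comments(comments):
    """Surface comment texts posted under more than one post by more
    than one account -- the duplication pattern suggesting coordination.

    `comments` is an iterable of (commenter, post_id, text) tuples;
    this input shape is an assumption, not any platform's export format.
    """
    groups = defaultdict(set)
    for commenter, post_id, text in comments:
        # Normalize case and whitespace so near-identical copies match.
        key = " ".join(text.lower().split())
        groups[key].add((commenter, post_id))
    return {
        text: pairs
        for text, pairs in groups.items()
        if len({p for _, p in pairs}) > 1    # appears on multiple posts
        and len({c for c, _ in pairs}) > 1   # posted by multiple accounts
    }
```

A single user reposting their own comment, or many users independently writing a common slogan on one post, would not be flagged by this filter – only text repeated across both posts and accounts.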

In our exploration of the comment section of some of Pierre Poilievre and Justin Trudeau’s posts on Facebook and 𝕏, we identified many potential bots. While we cannot be certain that these accounts were truly bots, we do know that bots are active and pervasive across 𝕏 and Facebook, as well as other social media platforms that Canadians use to engage with politics and politicians. This is essential context through which to understand the Kirkland Lake bot incident. While bot campaigns are frightening to watch unfold, bots are a fact of life on social media platforms and cannot be easily avoided or prevented.
