AUGUST 2024: WHAT’S ALL THIS BUZZ ABOUT BOTS?

Each month we provide a visualization of the CIE that highlights key findings. In light of extensive research into the surge of bot activity on X surrounding Pierre Poilievre’s visit to Kirkland Lake in August, we decided to examine the larger role bots play in the Canadian information ecosystem. (You can read our full analysis of the Kirkland Lake bot incident here.) 

“Bot” simply refers to a software application that performs repetitive tasks online, while “social bots” are specifically built to communicate on social media platforms, often posing as authentic accounts and real users to widely distribute certain content. We can think of bots as the bees of an information ecosystem: they’re everywhere, but only some of them will actually bother you. Like bees in a hive, bots are usually part of a network (a “bot network”) activated by a central controller (like a queen bee). By posting similar content many times or engaging with the same posts, these networks can increase the visibility of a message, inflate engagement with topics or other accounts, or scam users into buying a fake product or service. The majority of bots you encounter likely fall into this last category (e.g., crypto vendors, porn bots), but you may also have encountered political social bots – and not known it. These bot accounts attempt to mislead social media users by engaging with political topics, repeating similar sentiments across multiple accounts to give a false impression of public opinion. Bot networks used for political purposes can quickly spread disinformation, making them a prominent vulnerability on social media and in the CIE. 

Most social platforms allow the creation of inauthentic, automated accounts for harmless purposes, as long as they are publicly labelled as bots and cannot be confused with real users. Like pollinator bees, many of these bots are good additions to the ecosystem: some are used by news organizations to automatically provide updates on key events, while others share snippets of poetry or cute cat videos. However, this also makes it easier for nefarious actors to create large bot networks without being flagged: since social bots are designed to behave like real users, it is increasingly difficult for social media companies to accurately identify and remove them. As with a swarm of bees, it’s hard to filter out the angry, stinging wasps. Facebook routinely takes down over one billion bot accounts each month, but another billion soon take their place, while bots on X are a notorious and growing problem for the platform. With the rise of generative artificial intelligence and the proliferation of cheap, user-friendly Large Language Model (LLM) tools like ChatGPT, it has only become easier to create large bot networks that behave like real users. Thanks to LLMs, it is much easier to give all the accounts in a bot network similar – but crucially, not identical – messages to spread, obscuring the coordination between accounts and making it harder to identify them as bots. This may also make it easier for foreign adversaries to launch bot campaigns, spread disinformation, and influence Canadian elections. 

So, how do Canadians perceive bots? Are they considered a threat to democracy? How does the fear of bots compare to fear of other digital threats? And who is responsible for controlling this bot problem? This month, we found that the majority of Canadians (60.2%) think bots are an effective tool to mislead the country and sway public opinion. When asked to rate threats to Canadian democracy out of 10, Canadians ranked bots (6.17) lower than generative AI (6.68) and foreign influence (7.16). Canadians also prefer that the government investigate bot incidents rather than compelling social media platforms to act: when asked how the country should respond to the Kirkland Lake bot incident, most Canadians wanted the RCMP (54.4%) or the Election Commissioner (57.8%) to investigate, while less than half (43.4%) thought that social media companies should publish transparency reports about bot events. 

Like bees in our natural ecosystem, bots are inherent to the information ecosystem. Different bots have different objectives, and it’s difficult at first glance to distinguish the harmless ones from the dangerous ones – until you get stung. As bots become more sophisticated and prevalent online, it’s important that Canadians remain vigilant and not mix up their pollinators. 

Important note for this month: 

This month’s Situation Report does not include data from Facebook due to Meta’s decision to shut down Crowdtangle, a research tool for monitoring the platform. All measures using social media data reflect month-to-month changes with all Facebook data excluded.

Key findings:

  • Canadians want the government to get involved in the fight against bots.

  • The ecosystem has become slightly more polarized.

  • Engagement with misinformation and concern about fake content have both increased.
