Weekly Update 3 | Friday, April 11

The weekly update on the Canadian election provides information on trends and dynamics observed in the information ecosystem (Snapshot), identifies and shares insights on information-related incidents and emerging threats (Incidents), and highlights educational content, research findings, and other relevant outputs from our Coalition on Information Ecosystem Resilience (Update). All facts and figures are taken from an original survey and social media data collection by the Media Ecosystem Observatory, and the analysis reflects the period from April 4 to April 10.


SNAPSHOT

GenAI and the Election: What’s Fake, What’s at Stake

In this year’s federal election, the Canadian information ecosystem has been confronted with an emerging challenge: generative Artificial Intelligence (GenAI). Generative AI is distinguished by its ability to generate new content by learning patterns from large datasets. GenAI tools, like OpenAI’s ChatGPT or Google’s Gemini, can quickly (and cheaply) produce synthetic text, images, video, or audio that appear authentic. While GenAI can be a useful tool, it can also be used to create photos of people in places they have never been, or videos and audio of people saying or doing things they have never said or done. The rapid proliferation of AI-generated content has made the information ecosystem more vulnerable to disinformation, as it can be much harder to distinguish what is real from what is fake. In this week's snapshot we highlight the extent to which this content is prevalent across the Canadian information ecosystem and provide guidance for Canadians struggling with this kind of content.

Since the start of the campaign, we have observed numerous cases across social media where genAI has been used to generate fake images, videos, and voiceovers, as well as fake news articles.

This AI content is increasingly difficult to avoid: it can be created easily and quickly, leading to a high volume of genAI posts, and malicious actors amplify its reach by paying platforms to circulate these posts further. While genAI content can be used to spread disinformation or sow confusion, our “on-the-ground” investigation of Facebook, Instagram, TikTok, and X revealed that most posts containing genAI that Canadians come across are timely memes, usually reacting to recent events and often mocking Canadian and American politicians across the political spectrum.

Despite the scale of GenAI content we’ve been capturing, Canadian concern about GenAI has not shifted during the election. Online, we observe very minimal discussion about genAI (and AI in general) by influential voices in the Canadian information ecosystem. What discussion there is concerns potential applications of genAI, like “deepfakes” (AI-generated videos), bots (artificial accounts posing as authentic users), trolls, and dis/misinformation. These topics are usually discussed by partisan influencers, largely to critique and accuse politicians of using them and to sow doubt regarding the truthfulness of certain content. We continue to monitor the spread of and engagement with genAI content on social media, and will provide updates as the election continues.

Our survey data shows Canadians remain moderately to very concerned about misleading information influencing the outcome of the election. When asked specifically about GenAI’s ability to mislead fellow Canadians, concern grows, particularly among older Canadians.

We caution Canadians to be aware of genAI content: always try to find additional sources before trusting information you find online, and pay attention to potential anomalies or things that don’t look right in videos. If you see something, say something – you can report it to our tipline, and we will investigate it further.


INFORMATION INCIDENTS

During the election, we report on information incidents that could mislead the public and disrupt democratic processes. Generally, we are concerned with covert information manipulation and foreign interference efforts, as opposed to instances of influence, which are definitionally overt and public. This week, we have identified four new minor incidents, escalated one previously closed incident to moderate, and report back on three minor incidents from last week, closing two of them. Click here to learn more about our incident response thresholds.

NEW INCIDENTS

WeChat posts from state-directed media focused on the Canadian election 

Classification: Level 1 - Minor Incident

Earlier this week, and for the first time ever during an election, SITE flagged a case of foreign interference, highlighting several concerning posts on WeChat that “intended to influence Canadian-Chinese communities in Canada”. SITE flagged posts on the Youli-Youmian account dated March 10 and March 25. There has been notable concern regarding this incident, including from many Members of Parliament and candidates. Conservative candidate Michael Chong, previously targeted by Chinese foreign interference, responded with a statement alleging China is spreading information to support the Liberal Party. Other responses claim Carney is financially beholden to, and compromised by, the government of China. These claims have received widespread attention in both traditional and social media, keeping foreign interference in the campaign in the spotlight. First, we note that the intelligence flagged by SITE linking the ownership and editorial focus of the channel to the People’s Republic of China (PRC) Chinese Communist Party’s Central Political and Legal Affairs Commission (CPLAC) should be integrated into international lists of state-linked entities (e.g. EEAS SG.Strat).

We have conducted an investigation into this channel and these posts and, based on the data available to us using open-source methods, conclude that the content and engagement with the Canada-focused material is typical of this channel. We find no cause for alarm or sign that China has materially interfered in the Canadian election using this channel. 

For our investigation we reviewed all content posted by the channel in the month of March. We extracted engagement metrics and country of focus for each post, and evaluated the content and editorial position of the channel. We ask:

  1. Did the content of the channel disproportionately focus on Canada?

  2. Was the content inconsistent with the existing editorial approach of the channel?

  3. Did the content focused on Canada receive disproportionate engagement?

  4. Is there any evidence of impact?

Did the content of the channel disproportionately focus on Canada?

An analysis of the Youli-Youmian channel’s content shows that Canada-related coverage was minimal: of the 93 posts on the channel in March 2025, only 4 (4.3%) focused on Canada. This compares to 58.1% focused on the United States, along with substantial coverage of Ukraine (15.1%), Korea (7.5%), Taiwan (6.5%), and the EU (5.4%). Canada was not a central focus for the channel or its audience during March 2025. The Canadian election was likely covered due to its significance as a major international event, rather than as the result of a targeted influence campaign.
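For readers curious how these shares are derived, the minimal sketch below reproduces the breakdown from country-of-focus labels. The counts are back-calculated from the percentages reported above, and the labelling itself is done manually; the code is purely illustrative, not our production pipeline.

```python
from collections import Counter

# Country-of-focus labels for the 93 March posts. The counts below are
# back-calculated from the percentages reported above (illustrative only;
# the real labels come from manual coding of each post).
labels = (["United States"] * 54 + ["Ukraine"] * 14 + ["Korea"] * 7 +
          ["Taiwan"] * 6 + ["EU"] * 5 + ["Canada"] * 4 + ["Other"] * 3)

total = len(labels)
for country, n in Counter(labels).most_common():
    print(f"{country}: {n} posts ({n / total:.1%})")
```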

Was the content inconsistent with the existing editorial approach of the channel?

The channel focuses on providing information on international affairs with a strong and critical focus on the United States. In the articles focused on Canada, there is no clear evidence of disinformation, misleading narratives, or manipulative framing. The language used appears factual and does not clearly favor any particular party or candidate. The comments allowed to be posted under the articles do not favour one party or another. Based solely on the textual content, we find little evidence to suggest an intent to influence political outcomes in Canada.  

Did the content focused on Canada receive disproportionate engagement?

Of particular concern was the volume of engagement on these posts, reported to be higher than the engagement received by posts from major state media outlets such as the People’s Daily. The figure below shows the four Canadian stories along the distribution of engagement for all March posts on the channel. When compared with other posts published during the same period, the Canadian content did not attract unusually high engagement, and the overall level of engagement was consistent with the channel’s popularity. The post with the most engagement (March 10) was in the 77th percentile for engagement: it received more engagement than 71 other posts but less than 21 others. Engagement with the Canadian stories was not, on average, higher than engagement with stories about the other countries covered by the channel, and was significantly lower than engagement with stories about Korea, Ukraine, and the United States. There is no indication that these posts stood out in ways that would suggest targeted amplification or exceptional audience interest.
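The percentile check described above can be illustrated with a short sketch. The engagement numbers here are placeholders rather than the channel’s actual metrics, and the variable names are assumptions for illustration only.

```python
import random

# Placeholder engagement totals for the 93 March posts (toy data, not the
# channel's real metrics); random.sample guarantees distinct values.
random.seed(0)
march_engagement = sorted(random.sample(range(1_000, 50_001), 93))

def percentile_rank(value: int, distribution: list[int]) -> float:
    """Fraction of the *other* posts that a given post out-performs."""
    below = sum(1 for x in distribution if x < value)
    return below / (len(distribution) - 1)

# A post that out-performs 71 of the other 92 posts lands around the
# 77th percentile (71 / 92 ≈ 0.77), as with the March 10 post above.
example_post = march_engagement[71]  # 72nd-ranked post in the toy data
print(f"{percentile_rank(example_post, march_engagement):.0%}")  # ~77%
```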

We were unable to evaluate whether the post on March 25 was “amplified in a coordinated and inauthentic way by a group of 30 smaller WeChat accounts that boosted the discoverability of the posts”, but we assess that such amplification is possible. It may be routine for the channel or may indicate an attempt at foreign interference. That post, however, received only average engagement on the channel overall and was likely seen by few people in Canada.

Is there any evidence of impact?

The actual influence of these articles on Canadian voters, particularly Chinese Canadian voters, is uncertain. The channel primarily targets domestic Chinese readers, and there is limited evidence that these posts were widely circulated within Canadian social media ecosystems. Even if some Canadian users encountered the content, we believe it is speculative to assume that such exposure would significantly influence their vote decisions, especially given broader factors like personal experiences, social networks, and domestic Canadian news sources.

We will continue to monitor official, state-controlled, and state-linked news channels throughout the election and report back on any unusual behaviour.


Claims of election fraud

Classification: Level 1 - Minor Incident

There has been a rise in claims of election fraud directed at Elections Canada. Narratives include: allegations that there was an organized campaign by Elections Canada to allow a long list of independent candidates in Pierre Poilievre’s riding to “split the vote”; claims that the election will be rigged; and claims that Dominion voting machines are not secure (Elections Canada does not use automated ballot tabulators or electronic voting). While unfounded allegations against Elections Canada have been present in small doses since the campaign began, there is an increase in posts questioning the independence and integrity of the electoral process, as well as increased engagement with that content. These types of allegations have become routine in recent elections. We will continue to investigate the circulation and amplification of narratives regarding the integrity of the election to identify if and how this conversation escalates.

Deepfakes of Canadian politicians 

Classification: Level 1 - Minor Incident

Deepfake videos of Canadian politicians have started to emerge more frequently and are proliferating throughout the ecosystem. In response, we are actively monitoring a list of accounts that have shared AI-generated content. One example falsely presents Mark Carney saying that he does not speak French and that language does not matter; the video’s background and audio are actually taken from Chandra Arya’s interview when he launched his campaign for the Liberal Party leadership. While some of this content is clearly AI-generated (e.g. the videos appear fake or edited, or the people in them show visual inconsistencies), this incident is concerning given that these posts continue to garner attention and generate engagement (some have more than a hundred thousand views). We will continue to track the origins and spread of genAI content featuring Canadian politicians.

American-Canadian advocacy for 51st statehood

Classification: Level 1 - Minor Incident

While we closed our investigation into 51st state Facebook groups last week, this week we flag a related incident. A newly launched website, “51st State Protectors of the Oath,” has intensified online debate about Alberta’s potential secession from Canada and annexation by the US. The initiative urges Albertans to become politically involved by voting in a statehood referendum and participating in a “G7 Convoy” from Edmonton to Calgary in June. This campaign notably intersects with efforts by US leadership and influencers to normalize the idea of Canadian annexation and statehood.

The meshing of domestic tensions with strategic American digital campaigns is impacting political discourse at a critical moment in Canadian politics. These initiatives may amplify existing divisions in Canada and challenge the broader sense of Canadian sovereignty. We continue to monitor these campaigns.

ONGOING INCIDENTS

AI-generated fake news 

Classification: Escalated to Level 2 - Moderate Incident

AI-generated content masquerading as legitimate news continues to spread across Facebook and Instagram. We had closed this incident last week because the ads appeared to be driven by financial rather than political aims. This week we are reopening and escalating the incident after seeing an increase in the volume of posts and a shift in their content. The nexus of this activity is Facebook, where a growing number of Pages (more than 25 different pages since the beginning of the campaign, including Canada Votes, National Digest, The Broad Maple, The Maple Current, Canada Insider, Parliament & Perspectives, College Enter, 25k Podcast, among others) push multiple ads daily, styled as news articles, that promote scams. While previous headlines were explicitly financially focused, the content is becoming more political. For example, headlines are considerably more partisan (e.g. “Breaking: Carney Destroys Poilievre’s Argument in Seconds”). The AI-generated content is also becoming more sophisticated: we now see minutes-long deepfake videos with AI-generated CBC journalists and fake press conferences featuring Mark Carney.

Beyond the number of pages and sometimes high levels of engagement (some posts have more than a thousand reactions), our survey also suggests that this content is widespread: 25% of Canadians report having encountered “a social media post or webpage that falsely presented itself as a legitimate news source (e.g., CBC, Toronto Sun, La Presse) by imitating its name, logo, or design” in the past month. 59% of exposed Canadians said that they immediately recognized the content as fake, something we can also observe in the comments on the sponsored posts. We urge Canadians to be aware of the proliferation of AI-generated content on social media, particularly content masquerading as legitimate news. Note that users in Canada cannot see content from official news outlets on Facebook and Instagram, as Meta continues to block news links in the country.

False Carney-Epstein connection promoted by potential bots 

Classification: Level 1 - Minor Incident

Allegations of a connection between Carney and Epstein have been discussed by prominent voices in the information ecosystem since January. In the incident opened last week we identified suspicious activity amplifying these claims. The two most suspicious accounts pushed the claim that Mark Carney had ties to Jeffrey Epstein by posting the same comment 564 and 44 times, respectively, within 48 hours (March 7 and 8). The two accounts were responsible for 1,028 and 873 posts over the month of March, mostly replies repeating the same message; however, they received few views.

This week we identified 195 accounts involved in amplifying two news articles and one Twitter post, all accusing Carney of having a relationship with Epstein. These accounts posted the post text or article headline with little to no variation. Of these 195 accounts, 5 overlapped with accounts that our coalition partners at ResetTech identified as bots, and many others looked suspicious. These amplification efforts had little impact on the overall visibility of the content.
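The check behind this finding can be approximated by grouping posts by lightly normalised text and flagging messages that many distinct accounts repeat with little or no variation. The sketch below is illustrative only; the field names (`author`, `text`) and the threshold are assumptions, not our actual data schema or criteria.

```python
import re
from collections import defaultdict

def normalise(text: str) -> str:
    """Lower-case, strip URLs, and collapse whitespace so trivial edits still match."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def flag_amplified_messages(posts: list[dict], min_accounts: int = 20) -> dict:
    """Return messages repeated (after normalisation) by at least `min_accounts` accounts."""
    accounts_by_message = defaultdict(set)
    for post in posts:
        accounts_by_message[normalise(post["text"])].add(post["author"])
    return {msg: accounts for msg, accounts in accounts_by_message.items()
            if len(accounts) >= min_accounts}

# Usage with toy records:
# posts = [{"author": "acct_001", "text": "Headline text https://example.com/story"}, ...]
# flagged = flag_amplified_messages(posts)
```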

We further analyzed the entire discussion about the allegations against Carney on X. We found that original posts about the topic usually cited questionable sources (like YouTube, thegatewaypundit, or Rumble), while some users further spread the narrative by replying or quote-tweeting actual news outlets like the National Post, the Toronto Star, and CBC News. We also noticed that the story first appeared on X and Instagram in January, and was discussed by political influencers and commentators. 

We assess that this incident is more likely a case of influential, authentic accounts pushing a particular narrative and suspicious, potentially inauthentic accounts having a small role in spreading it, rather than a large-scale coordinated bot campaign spreading disinformation. We will continue to monitor these accounts and claims.

CLOSED INCIDENTS

Suspicious accounts on X 

Classification: Level 1 - Minor Incident

While we continue to see suspected bot accounts on X, we are closing this specific incident. Early in the election we observed many suspicious accounts; going forward, we will flag cases where such accounts engage in information manipulation in a coordinated and impactful way. We continue to monitor upticks in engagement with specific narratives and topics and to investigate their origins, but we have found no conclusive evidence of a coordinated campaign targeting the Canadian election. In the meantime, we encourage Canadians to be vigilant, consult educational resources on bots, and report suspicious accounts using our tipline.

 

If you see something, say something: if you see suspicious content online related to the Canadian election that you think is indicative of someone attempting to manipulate or mislead Canadians, take a screenshot and send us a tip via our tipline. A researcher will review each one.

 

COALITION RESOURCES

DFRLAB

Haven’t had time to read the final report released by the Public Inquiry into Foreign Interference? The DFRLab has you covered with its report “An existential threat: Disinformation ‘single biggest risk’ to Canadian democracy”. You can read it here.

DISINFOWATCH & CANADA’S DIGITAL PUBLIC SQUARE

DisinfoWatch and Canada’s Digital Public Square have released “Democracy and You: A Handbook for Detecting and Preventing Foreign Interference in Canadian Elections.” This practical guide empowers Canadians—voters, journalists, activists, and political parties alike—to recognize and counter foreign interference, both during elections and in between. You can find it here.

POLCOMMTECH LAB

“Influencers and Elections: The many roles content creators play” is a new report by the PolCommTech Lab that explores how social media creators shape narratives, interpret news, and impact voter behaviour—while also sometimes spreading mis- and disinformation that is incorporated into foreign interference campaigns and used to evade existing laws and regulations governing elections. You can read it in English here and in French here.

SAMARA CENTRE FOR DEMOCRACY

A week ago, the Samara Centre for Democracy launched Verified, a project that helps Canadians navigate online election content across different platforms. Each week, they deliver a snapshot of the online election conversation; you can sign up to receive it here.


COALITION MEMBERS IN THE CONVERSATION

AI is making elections weird: Lessons from a simulated war-game exercise

  • Read the article by the Applied AI Institute in The Conversation here.

Foreign interference threats in Canada’s federal election are both old and new

  • Read the article by the Centre for the Study of Democratic Institutions in The Conversation here.


NEW EXPLAINER VIDEO

As the spread of information manipulation increases, protecting the integrity of our information ecosystem is more important than ever. This is why the Coalition (made up of the Media Ecosystem Observatory, the Canadian Digital Media Research Network (CDMRN), and broader academic and civil society partners) is actively monitoring and responding to information manipulation during the 45th federal election. Watch this video to get a better sense of how we do it.

THIS WEEK FROM THE TIPLINE:

  • We have received a total of 155 submissions during the election period, including 33 new submissions this week.

  • 66% of the new submissions involved posts on Facebook, 24% involved posts from TikTok and YouTube, and approximately 10% were website submissions.

  • The submissions from this week focused on these dominant themes*:

    • Political misinformation or disinformation.

    • Election misinformation websites and scams.

    • AI-generated deep fakes featuring political figures (e.g., Mark Carney).

    • Concerns about public manipulation via social media.

    • Misinformation about Mark Carney and Pierre Poilievre.

    • Misrepresentation of legislation (e.g., Bill C69).

    • Foreign influence (e.g., China).
      *Note: The topics submitted reflect common user concerns; while some may lead to potential incidents within the media ecosystem, not all necessarily indicate imminent threats or events.

 

See something online? Say something! Let us know via our Tipline.

 