Social media platforms host and profit from scams using AI and fake news websites during Canada’s 2025 federal election
Authors: Mathieu Lavigne, Senior Analyst, Alexei Abrahams, Mahnan Omar, and Aengus Bridgman, Media Ecosystem Observatory
Key takeaways:
Fake news websites are increasingly sharing political content to deceive and scam Canadians: since the start of the election campaign, we have identified over 40 Facebook pages sharing (AI-generated) ads masquerading as legitimate news sources and linked to cryptocurrency scams. While similar schemes circulated earlier this year, the ads have shifted in nature during the campaign—becoming more sophisticated and more politically polarizing, and increasingly relying on videos designed to look like real news coverage.
Canadians are seeing this content at scale: about a quarter of Canadians (24%) indicate that they have encountered “a social media post or webpage that falsely presented itself as a legitimate news source (e.g., CBC, Toronto Sun, La Presse) by imitating its name, logo, or design” over the past month.
Canadians are skeptical but vulnerable: About three quarters of the comments on the ads flag that the content is fake and about 60% of Canadians who believe they have seen them say they immediately recognized them as false. The remaining 40% admit that they only recognized this deception later and may have believed or uncritically engaged with the content.
Immediate action is required: Social media platforms need to dramatically scale up their ad moderation and transparency practices to detect and remove deepfakes and imposter news content. Profiting from deceiving and scamming Canadians during an election is simply unacceptable, and regulators and lawmakers need to respond immediately in the absence of dramatic action by the platforms.
Introduction
Since the beginning of the 2025 Canadian election campaign, we have observed a large number of Facebook ads impersonating legitimate news sources such as CBC, CTV, and the Toronto Sun while promoting fraudulent investment schemes. These ads, which often feature fake news articles or deepfake videos styled as news segments, are being bought in a context where news content is banned from Meta’s platforms in Canada. While similar scams have been reported in Canada and around the world in recent months, the scale and political nature of these ads have intensified, and they appear even more problematic in the context of an election campaign, as they could influence public perceptions of the parties and leaders.
This report evaluates the origin, characteristics, dissemination, effects, and response related to these ads by addressing the following questions:
How does this scam differ from others that have circulated in recent months? Is the problem getting worse? How much are Canadians seeing and interacting with the content?
Could this content have any influence on the upcoming election?
Are Canadians generally able to detect that the content is fake?
What techniques are being used to spread the content and defraud Canadians?
What actions are being taken by Meta to address these fake ads?
Since many of these ads do not self-disclose as political, they often do not appear in the Ad Library, which hampers our ability to assess the scope of this trend. We identified the ads and pages based on our team’s platform monitoring, tips from the public (using our tipline), and keyword searches on social media platforms (many Canadians are alerting others about these scams). We primarily focus on Facebook, which, based on these three sources, is currently the platform where the ads appear most prominent. However, similar ads have been circulating on other platforms, including YouTube, Instagram, and X. This report builds on a qualitative investigation of the ads, the fake websites masquerading as news content, and the pages inviting Canadians to fill out a contact form to start investing. It also includes a systematic analysis of 770 comments on the ads, as well as findings from a nationally representative survey of Canadians.
This report is not exhaustive and our investigation is ongoing. Notably, we will continue to evaluate who is behind these fake ads and websites and assess their impact, legality, and evolution.
How different is this scam from others that have circulated in recent months? Is the problem getting worse? How much are Canadians seeing and interacting with the content?
This scam appears to be a continuation of a trend documented earlier this year in Canada and closely resembles schemes reported in other countries. In that earlier wave, similar ads displaying sensational fake political headlines were promoted on X using newly created accounts impersonating various small businesses, along with dormant accounts that may have been hijacked. The ads’ links led users to websites with fake Canadian media articles. Comparable methods are now being used, primarily on Meta’s platforms, in the context of the 2025 Canadian election.
Since the beginning of the election, a wide range of names have been used for the fake investment platform (e.g., CanFirst, Capstone, TrueNorth, TokenTact). One name that continues to appear is Quantum AI, a platform that has been used to defraud individuals in multiple countries over the past year through similar deceptive tactics. For example, a December 2, 2024 warning from the Central Bank of Ireland states: “Quantum AI uses AI-generated deepfake videos, fake newspaper articles and photos of high-profile people through social media to deceive the public into believing that this fraudulent entity is legitimate and to promote the activity of Quantum AI.”
Image 1. Fake CBC article introducing the Quantum AI platform
While the scheme is similar, we have reason to believe that the supply of these imposter ads has increased since the beginning of the election. By combining tips from the general public, keyword searches on social media, and our team’s platform monitoring, we were able to identify more than 40 Facebook pages promoting these fraudulent ads, with new pages being created and identified every day. The regular stream of screenshots we receive from members of the public, combined with how easily we can find Canadian users publicly denouncing the ads via basic keyword searches across social media platforms, suggests that a large number of Canadians are being exposed to this content. In addition to fake news articles impersonating mainstream outlets, we identified seven deepfake videos of Mark Carney promoting the fraudulent investment platforms, featured directly in the ads. These deepfakes typically mimic a news broadcast from CBC or CTV (two of the top news outlets in Canada).
It is difficult to measure the exact reach of these fake ads. The Meta Ad Library (a tool provided by Meta to the public to evaluate ads being shared on its platforms) only offers impression data for ads about social issues, elections or politics, and most of the ads discussed in this report were not identified as political; moreover, non-political ads are removed from the Ad Library when taken down by Meta. One of the Facebook pages we identified does have ads labeled as political and, while offering only a partial picture, it provides some insight into the scale. Money Mindset, which uses the logo of CBC/Radio-Canada, bought five French-language Facebook ads that were active for one to four hours between April 4 and April 9. One of the ads, featuring a deepfake video of Mark Carney, cost $300 - $399 USD (about $500 CAD) and received between five and six thousand impressions. In total, the five ads represent an investment of approximately $1,000 CAD and received around 10,000 impressions. Ads from this page – one among more than forty – received a relatively low level of engagement compared to English-language ads from other pages. For example, multiple ads from College Enter, a page that is still active and posting new ads, received more than a thousand reactions (up to four thousand) and five hundred comments (up to 1,800), which is indicative of a substantially greater investment and number of impressions (a rough estimate of approximately 100,000 views for this post alone can be inferred from two recent publications that had privileged data access).
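As a rough illustration of how the total spend and impressions above can be approximated from Ad Library data, the minimal sketch below aggregates range-based figures of the kind the Ad Library reports. All numbers are hypothetical placeholders; only the first entry approximates the deepfake ad cited above, and the exchange rate is an assumed approximation.

```python
# Illustrative sketch only: the Ad Library reports spend and impressions as ranges,
# so any aggregate is itself a range. All values below are hypothetical placeholders;
# only the first entry approximates the Carney deepfake ad described above.

ads = [
    {"spend_usd": (300, 399), "impressions": (5000, 6000)},  # deepfake video ad (approximate)
    {"spend_usd": (100, 199), "impressions": (1000, 2000)},  # placeholder ranges
    {"spend_usd": (100, 199), "impressions": (1000, 2000)},
    {"spend_usd": (100, 199), "impressions": (1000, 2000)},
    {"spend_usd": (0, 99), "impressions": (0, 1000)},
]

USD_TO_CAD = 1.4  # assumed approximate exchange rate

spend_low_cad = sum(a["spend_usd"][0] for a in ads) * USD_TO_CAD
spend_high_cad = sum(a["spend_usd"][1] for a in ads) * USD_TO_CAD
impressions_low = sum(a["impressions"][0] for a in ads)
impressions_high = sum(a["impressions"][1] for a in ads)

print(f"Estimated total spend: {spend_low_cad:,.0f} to {spend_high_cad:,.0f} CAD")
print(f"Estimated total impressions: {impressions_low:,} to {impressions_high:,}")
```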
Beyond the potentially larger scale and reach of these ads, another difference from impostor ads published in the previous months is the political nature of the content in the context of an election, as detailed in the next section.
Could this content have any influence on the election?
The main objective of the ads appears to be financial rather than political. The deepfakes and fake news articles generally invite Canadians to provide their contact information and invest an initial sum (typically around $350) on a fraudulent investment platform falsely presented as endorsed by the Canadian government, Mark Carney, or another party leader at the federal or provincial level. These imposter ads, fake news articles, and deepfake videos can undermine the credibility of both the targeted party leaders featured in the content and the news brands and journalists whose names, logos, or visual designs are being impersonated.
While the aim is primarily financial, the content of the ads and fake articles is often (AI-)generated based on the Canadian news cycle. This may explain why the content has (intentionally or unintentionally) become more political since the second week of the election campaign.
First, the fake news articles often feature criticisms of or attacks on party leaders that have circulated since the beginning of the election, including heated debates about Mark Carney’s financial assets or Carney’s and Poilievre’s ability to face Donald Trump. For example, one of the fake articles includes quotes in which Mark Carney is presented as being “an out-of-touch elite who merely represents a continuation of Trudeau’s failed policies” and is accused of having “sold out Canada.” The same article includes a quote attributed to Mark Carney stating that “Pierre Poilievre’s plan will leave us divided and ready to be conquered, because a person who worships at the altar of Donald Trump will kneel before him, not stand up to him.”
Image 2. Example of polarizing content included in the fake news articles
Second, some of the Facebook ads feature headlines and text that are directly related to the election (e.g., “WHO DO YOU TRUST in Election 2025?”) or have a strong pro-Liberal stance: “Breaking: Carney Destroys Poilievre’s Argument in Seconds!”, “Canadians were left speechless as Mark Carney exposed Poilievre in a fiery debate. Watch what happened!”, “Poilievre Exposed - Carney Drops the Facts”, “This wasn’t just a debate - it was a reckoning. Carney’s takedown of Poilievre has the nation buzzing”, “Official: Mark Carney exposes the real agenda behind Poilievre’s plan.” Some ads have also called Carney “Canada's next PM,” which provoked a strong response given that the election has not yet occurred.
Third, we have identified pages using the Liberal logo, and links included in some of the ads could be falsely associated with the Liberal Party (e.g., liberal.ca) or Mark Carney (e.g., markcarney.ca, carney-gov.com).
Image 3. Examples of use of the Liberal Party’s logo and name
Finally, many ads across platforms feature Canadian party leaders being arrested, shot, or beaten up for revealing the truth, a strategy that was also used in ads on X in January.
Image 4. Examples of Canadian party leaders being arrested, shot or beaten up in ads on Facebook, Instagram, and YouTube
Although the content is political in nature, we have no evidence yet that it will impact the election outcome, but we will continue to monitor the situation. Most Canadians recognize the content as fake (see next section). Even when people perceive the content as legitimate, research shows that exposure to a small number of ads or pieces of media content has a minimal impact on vote choice, given that pre-existing political views shape how the content is interpreted.
Nevertheless, the content could influence broader attitudes, such as trust in party leaders, media organizations, or election processes (for example, comments on the ads indicate that some Canadians perceive them and the content they promote as foreign interference). The potential increase in content impersonating news-outlet reporting is likely to undermine public trust in those outlets’ brands and to increase news avoidance. Past studies have found that readers “check out” in low-trust news environments, a clear risk if stronger efforts to remove these clear violations of fair advertising practices are not put in place.
Are Canadians generally able to detect that the content is fake?
A majority of Canadians seem to identify the content as fake upon exposure. In the survey we are fielding every day of the campaign, we asked a nationally representative sample of Canadians (2,656 respondents) whether they had encountered “a social media post or webpage that falsely presented itself as a legitimate news source (e.g., CBC, Toronto Sun, La Presse) by imitating its name, logo, or design” over the past month. About one in four surveyed Canadians (24%) indicated that they had encountered such content, 51% reported not having encountered it, and 24% were unsure, suggesting possible unrecognized exposure. Of the 24% who reported exposure, 60% immediately recognized that the content was fake, while 40% only recognized it later. Levels of exposure and the ability to recognize the content as fake have remained consistent since this question was added to the survey (April 3).
To better understand Canadians’ reactions to this content, we analyzed 770 public comments posted on the ads masquerading as legitimate news sources. We classified each comment as: 1) believing the falsehood; 2) flagging that the content is fake; or 3) not possible to determine. The findings indicate that about three quarters of the comments flagged the content as fake. It is important to interpret these results with caution, given that users who recognize the content as fake may be more likely to comment and that comments may not be representative of the broader audience that viewed the ads but decided not to engage. Additional analyses suggest that a large proportion of the remaining comments expressed negative views towards Mark Carney (generally the main actor in the ads and fake articles analyzed), the Liberal Party, or the CBC.
What techniques are being used to spread the content and defraud Canadians?
We identified several questionable practices that enable the actors behind the scam to cheaply produce and disseminate large volumes of fake content that appears to respond to shifts in the Canadian political landscape. One such practice is laundering or plagiarizing content from legitimate news sources and repurposing it to promote the fake investment platform. This can increase the perceived legitimacy of the fake content, given that Canadians searching online can find the same quotes, or news segments similar to those presented in the fake articles. We provide three examples below.
The first example shows the content of a real CBC article being laundered and repurposed to promote the cryptocurrency scam.
Image 5. Example of news content repurposing: CBC article
The second example shows a video segment from the same CBC article, published on TikTok, that was repurposed as a deepfake in the fraudulent article to promote the investment scam.
Image 6. Example of news content repurposing: CBC TikTok video
Finally, the third example shows that the title, image style, and quotes in the fake CBC article shown in Image 2 are heavily borrowed from an existing column published in the Calgary Herald.
Image 7. Calgary Herald column repackaged as fake CBC article
Beyond plagiarizing existing content, the source code of the fake CBC articles reveals that the visuals and design, as well as some of the text, are imported from the same core URLs (e.g., blessadsteam.com, a domain registered via a U.S.-based third party named PrivacyGuardian), allowing for easy duplication of the fake news articles across multiple domains.
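One way to surface this kind of shared infrastructure is to list the external domains from which a page loads its scripts, stylesheets, and images: pages built from the same template tend to pull assets from the same handful of third-party hosts. The sketch below, using only the Python standard library, illustrates the idea; the commented example URL is a placeholder, and this is not the exact procedure used in our investigation.

```python
# Minimal sketch: list the external domains a page loads assets from.
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse
import urllib.request


class AssetDomainParser(HTMLParser):
    """Collect the hostnames referenced by src/href attributes."""

    def __init__(self):
        super().__init__()
        self.domains = Counter()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                host = urlparse(value).netloc
                if host:
                    self.domains[host] += 1


def external_asset_domains(url: str) -> Counter:
    """Return a count of asset domains other than the page's own domain."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    parser = AssetDomainParser()
    parser.feed(html)
    page_host = urlparse(url).netloc
    return Counter({d: n for d, n in parser.domains.items() if d != page_host})


# Example (placeholder URL): asset domains recurring across several suspect pages
# point to the common infrastructure behind the template.
# print(external_asset_domains("https://example.com/fake-article"))
```

Comparing the resulting lists across suspect pages can make shared template infrastructure, such as the blessadsteam.com domain noted above, easier to spot.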
What actions are being taken by Meta to address these fake ads?
The ads discussed in this report are against Meta’s policies, which prohibit the dissemination of advertisements aimed at defrauding or impersonating individuals or brands. Meta recognizes that “this is an ongoing industry-wide challenge” and indicates that they have opened an investigation into this matter. So far, the response appears to have been inconsistent and insufficient for preventing these ads from spreading.
Investigating the scale of this problem and how it is being addressed poses a number of challenges. First, the Ad Library only provides information about the number of impressions and targeted audiences for ads about social issues, elections or politics, and most of the ads discussed in this report were not identified as political in the Ad Library. Second, not all of the ads were visible in the Ad Library when we encountered them on the platform, even though some had been live for more than a day. Finally, non-political ads are removed from the Ad Library when taken down by Meta, which prevents further investigation.
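For reference, the Ad Library can also be queried programmatically through the Ad Library API, but it returns only ads classified as being about social issues, elections or politics, which is precisely the limitation noted above: most of the scam ads never appear there. The sketch below is a minimal illustration; it assumes a valid Ad Library API access token, and the API version, parameters, and field names are indicative only and may change, so Meta's documentation should be consulted.

```python
# Sketch: querying the Meta Ad Library API for political/issue ads that reached Canada.
# Only ads classified as being about social issues, elections or politics are returned.
import json
import urllib.parse
import urllib.request

ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"  # placeholder; requires Ad Library API access

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["CA"]',
    "search_terms": "Carney investment",  # illustrative query
    "fields": "page_name,ad_delivery_start_time,spend,impressions,currency,ad_creative_bodies",
    "limit": "100",
}

url = "https://graph.facebook.com/v19.0/ads_archive?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as response:
    data = json.load(response)

for ad in data.get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```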
Multiple ads were removed from Facebook and the Ad Library while we were investigating them, but others remained active for multiple days. One tactic used by several pages is advertising shoes alongside the fake news content, likely to appear more like legitimate vendors and avoid detection by the platform.
Image 8. Fake news pages also advertise shoes
Of the roughly 40 pages we identified since the start of the election for running ads that mimic legitimate news and promote fake investment platforms, only about 40% had been disabled as of April 16. While removing these ads and accounts is necessary, a reactive approach hasn’t stopped Canadians from being exposed—new Facebook pages continue to appear daily, posting similarly deceptive content.
Recommendations:
Recommendations for the general public: Canadians should be aware that Canadian news content has been banned from Facebook. We invite Canadians to report ads and pages masquerading as legitimate news sources, or featuring deepfakes of politicians or other public figures, to the social media platform every time they encounter them, in order to accelerate their removal. Canadians should generally avoid clicking on the links in these ads, given that they could be unsafe.
The easiest ways to evaluate whether content is fake are to check the URL to see whether it matches the name of the media organization, to search whether other credible sources have reported on the same event, or to examine the social media page of the post’s creator to evaluate whether it is reliable (Is the source known? Does the source have many followers? Was the page created very recently?).
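For more technically inclined readers, the first of these checks can be illustrated in a few lines of code: compare the link’s domain against the outlet’s official domain. The outlet-to-domain mapping below is a small, illustrative list rather than an exhaustive registry, and the second example URL is a hypothetical imposter.

```python
# Illustration of the URL check: does the link point to the outlet's official domain?
from urllib.parse import urlparse

# Small, illustrative mapping; not an exhaustive registry of official domains.
OFFICIAL_DOMAINS = {
    "CBC": {"cbc.ca", "radio-canada.ca"},
    "CTV": {"ctvnews.ca"},
    "Toronto Sun": {"torontosun.com"},
    "La Presse": {"lapresse.ca"},
}


def matches_outlet(url: str, outlet: str) -> bool:
    """Return True if the URL's host is (a subdomain of) the outlet's official domain."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    official = OFFICIAL_DOMAINS.get(outlet, set())
    return any(host == d or host.endswith("." + d) for d in official)


print(matches_outlet("https://www.cbc.ca/news/politics/some-story", "CBC"))   # True
print(matches_outlet("https://cbc-news-update.example.com/article", "CBC"))  # False (hypothetical imposter)
```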
Social media platforms must adopt a dramatically more proactive and transparent approach to curbing fraudulent advertising — especially during critical periods such as elections. While some offending content has been removed, the persistent reappearance of deceptive ads indicates a systemic and unacceptable failure in both detection and enforcement mechanisms. We recommend:
Implementing stricter ad review policies for new advertisers, especially when political figures or news brands are referenced.
Requiring identity verification for political advertisers and those referencing news organizations.
Providing transparency into the volume and nature of removed content, to allow for independent auditing and accountability.
Dramatically strengthening automated and human content moderation to detect AI-generated deepfakes and mimicry of legitimate media outlets in real time. At a minimum, implement flags for ads masquerading as news outlets (e.g., ads using their name or logo that link to another page).
In the interest of transparency, particularly during elections, we recommend that Meta clearly disclose—directly within Canadian users’ feeds—that official news content is restricted on its platforms.
The Canadian government should strengthen and enforce legislation to better protect the public from coordinated digital deception, especially when driven by financial or political motives. At a minimum, we recommend that a newly elected government require full disclosure for all political and financial ads online at all times, not only during elections, including clear source attribution and verified advertiser identity, and empower regulators to audit platforms for compliance and issue penalties for repeated violations.
For media inquiries, please contact Isabelle Corriveau at isabelle.corriveau2@mcgill.ca.