As AI ‘very quickly’ blurs truth and fiction, experts warn of U.S. threat

Saturday, February 7

Less than two years ago, a federal government report warned Canada should prepare for a future where, thanks to artificial intelligence, it is “almost impossible to know what is fake or real.”

Now, researchers are warning that moment may already be here, and senior officials in Ottawa this week said the government is “very concerned” about increasingly sophisticated AI-generated content like deepfakes impacting elections.

“We are approaching that place very quickly,” said Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict.

He added that the United States could quickly become a top source of such content, a threat that could accelerate amid future independence battles in Quebec and particularly Alberta, whose independence movement has already been seized on by some U.S. government and media figures.

“We are 100 per cent guaranteed to be getting deepfakes originating from the U.S. administration and its proxies, without question,” said McQuinn. “We already have, and it’s just the question of the volume that’s coming.”

During a House of Commons committee hearing on foreign election interference on Tuesday, Prime Minister Mark Carney’s national security and intelligence advisor Nathalie Drouin said Canada expects the U.S., like all other foreign nations, to stay out of its domestic political affairs.

That came in response to the lone question from MPs about the possibility of the U.S. becoming a foreign interference threat on par with Russia, China or India.

The rest of the two-hour hearing focused on the previous federal election and whether Ottawa is prepared for future threats, including AI and disinformation.

“I do know that the government is very concerned about AI and the potentially pernicious effects,” said deputy foreign affairs minister David Morrison, who, like Drouin, is a member of the Critical Election Incident Public Protocol Panel tasked with warning Canadians about interference.

Asked if Canada should seek to label AI-generated content online, Morrison said: “I don’t know whether there’s an appetite for labelling specifically,” noting that’s a decision for platforms to make.

“It is not easy to put the government in the position of saying what is true and what is not true,” he added.

Ottawa is currently considering legislation that would address online harms and privacy concerns related to AI, but it’s not yet clear whether the bill will seek to crack down on disinformation.

“Canada is working on the safety of that new technology. We’re developing standards for AI,” said Drouin, who also serves as deputy clerk of the Privy Council.


She noted that Justice Marie-Josée Hogue, who led the public inquiry into foreign interference, concluded in her final report last year that disinformation is the greatest threat to Canadian democracy — thanks in part to the rise of generative AI.

Addressing and combating that threat is “an endless, ongoing job,” Drouin said. “It never ends.”

The Privy Council Office told Global News it provided an “initial information session relating to deepfakes” to MPs on Wednesday, and would offer additional sessions to “all interested parliamentarians as well as to political parties over the coming weeks.”

Experts like McQuinn say such a briefing is long overdue, and that government, academia and the media must also step up efforts to educate an already skeptical Canadian public on how to discern truth from fiction.

“There should be annual training (for politicians and their staffs), not just on deepfakes and disinformation, but foreign interference altogether,” said Marcus Kolga, a senior fellow at the Macdonald-Laurier Institute and founder of DisinfoWatch.

“This needs leadership. Right now, I’m not seeing that leadership, but we desperately need it because all of us can see what is coming.”

Kolga also agreed there is “no doubt” that official U.S. government channels, and U.S. President Donald Trump himself, are becoming a major source of that content.

“The trajectory is rather clear,” he said. “So I think that we need to anticipate that that’s going to happen. Reacting to it after it happens isn’t all that helpful — we need to be preparing at this time.”

Threat growing from the U.S., researchers say

Morrison noted Tuesday that the elections panel, as well as the Security and Intelligence Threats to Elections (SITE) task force, did not observe any significant use of AI to interfere in last year’s federal election.

However, he added that “our adversaries in this space are continually evolving their tactics, so it’s only a matter of time, and we do need to be very vigilant.”

The Communications Security Establishment and the Canadian Centre for Cyber Security have issued similar warnings recently about hostile foreign actors further harnessing AI over the next two years against “voters, politicians, public figures, and electoral institutions.”

Researchers now say the U.S. is quickly becoming a part of that threat landscape.

McQuinn said part of the issue is that the online disinformation Canadians see is spread primarily on American-owned social media platforms like X and Facebook, with TikTok now under U.S. ownership as well.

That has posed challenges for foreign countries trying to regulate content on those platforms, with European and British laws facing resistance and hostility from the companies and from the Trump administration, which has threatened severe penalties, including tariffs and even sanctions.

Digital services taxes, which seek to claw back revenue from companies operating in foreign countries, have been identified by the U.S. as trade irritants, with Canada’s tax nearly scuttling negotiations last year before it was rescinded.

Kolga noted the spread of disinformation by U.S. content creators and platforms is not new, whether it originates from America or from elsewhere in the world. Other countries, including Russia, India and China, are known to use disinformation campaigns and have been identified in Canadian security reports as significant sources of foreign interference efforts.

Russia has also been accused of covertly funding right-wing influencers in the U.S. and Canada to push pro-Russian talking points and disrupt domestic affairs.

What is new, McQuinn said, is the involvement of Trump and his administration in pushing that disinformation, including AI deepfakes.

While much of the content is clearly fake or designed to elicit a reaction (a White House image showing Trump and a penguin walking through an Arctic landscape suggested to be Greenland, or Trump sharing third-party AI content depicting him flying a feces-spraying fighter jet over protesters), there have been more subtle examples.

The White House was accused last month of using AI to alter a photo of a protester arrested during a federal immigration crackdown in Minnesota, making the woman appear as though she were crying.

In response to criticism over the altered image, White House deputy communications director Kaelan Dorr wrote on X, “The memes will continue.” The image remains online.

“The present U.S. administration is the only western country that we know of (that) on a regular basis is publishing or sharing or promoting obvious fakes and deepfakes, at a level that has never been seen by a western government before,” McQuinn said.

He said the online strategy and behaviour match those of common state disinformation actors like Russia and China, as well as armed groups like the Taliban, which don’t have “any respect” for the truth.

“If you don’t (have that respect), then you will always have an asymmetrical advantage against any actor, whether it’s state or non-state, who wants to in some way adhere to the truth,” he said.

“(This) U.S. administration will always have an advantage over Canadian actors because they no longer have any controls on them or restraints, because truth is no longer a factor in their communication.”

McQuinn added his own research suggests 83 per cent of disinformation is passed along by average Canadians who don’t immediately realize the content they’re sharing is fake.

“It’s not that they necessarily believe in the disinformation,” he said. “Something looks kind of catchy or aligns with their ideas of the world, and they will pass it on without reading in the second or third paragraph that the idea that they agreed with now morphs into something else.

“The good news is that Canadians are learning very quickly” how to spot things like deepfakes, he added, which is creating “a certain amount of skepticism that is naturally cropping up in the population.”

Yet Trump’s repeated sharing of AI content that imagines U.S. control of Canada, echoing his “51st state” threats, as well as the tacit support some U.S. administration figures have shown for the Alberta independence movement, has researchers increasingly worried.

“My real concern is that when Donald Trump does order the U.S. government to start supporting some of those narratives and starts actually engaging in state disinformation, in terms of Canada’s unity — when that happens, then we’re in real trouble,” Kolga said.
