By Canadian Press on August 10, 2025.
At first it appears to be a quirky video clip generated by artificial intelligence to make people laugh.
In it, a hairy Bigfoot wearing a cowboy hat and a vest emblazoned with the American flag sits behind the wheel of a pickup truck.
“We are going today to the LGBT parade,” the apelike creature says with a laugh. “You are going to love it.”
Things then take a violent and disturbing turn as Bigfoot drives through a crowd of screaming people, some of them holding rainbow flags.
The clip, posted in June on the AmericanBigfoot TikTok page, has garnered more than 360,000 views and hundreds of comments, most of them applauding the video.
In recent months similar AI-generated content has flooded social media platforms, openly promoting violence and spreading hate against members of LGBTQ+, Jewish, Muslim and other minority groups.
While the origin of most of those videos is unclear, their spread on social media is sparking outrage and concern among experts and advocates who say Canadian regulations cannot keep up with the pace of hateful AI-generated content, nor adequately address the risks it poses to public safety.
Egale Canada, an LGBTQ+ advocacy organization, says the community is worried about the rise of transphobic and homophobic misinformation content on social media.
“These AI tools are being weaponized to dehumanize and discredit trans and gender diverse people and existing digital safety laws are failing to address the scale and speed of this new threat,” executive director Helen Kennedy said in a statement.
Rapidly evolving technology has given bad actors a powerful tool to spread misinformation and hate, with transgender individuals being targeted disproportionately, Kennedy said.
“From deepfake videos to algorithm-driven amplification of hate, the harms aren’t artificial, they’re real.”
The LGBTQ+ community isn’t the only target, said Evan Balgord, executive director of the Canadian Anti-Hate Network. Islamophobic, antisemitic and anti-South Asian content made with generative AI tools is also widely circulating on social media, he said.
“When they create the environment where there’s a lot of celebration of violence towards those groups, it does make violence towards those groups happening in person or on the streets more likely,” Balgord warned in a phone interview.
Canada’s digital safety laws were already lagging behind and advancements in AI have made things even more complicated, he said.
“We have no safety rules at all when it comes to social media companies, we have no way of holding them accountable whatsoever.”
Bills aimed at addressing harmful online content and establishing a regulatory AI framework died when Parliament was prorogued in January, said Andrea Slane, a legal studies professor at Ontario Tech University who has done extensive research on online safety.
Slane said the government needs to take another look at online harms legislation and reintroduce the bill “urgently.”
“I think Canada is in a situation where they really just need to move,” she said.
Justice Minister Sean Fraser told The Canadian Press in June that the federal government will take a “fresh” look at the Online Harms Act but it hasn’t decided whether to rewrite or simply reintroduce it. Among other things, the bill aimed to hold social media platforms accountable for reducing exposure to harmful content.
A spokesperson for the newly created Ministry of Artificial Intelligence and Digital Innovation said the government is taking the issue of AI-generated hateful content seriously, especially when it targets vulnerable minority groups.
Sofia Ouslis said existing laws do provide “important protections,” but admitted they didn’t aim to address the threat of generative AI when they were designed.
“There’s a real need to understand how AI tools are being used and misused — and how we can strengthen the guardrails,” she said in a statement. “That work is ongoing.”
The work involves reviewing existing frameworks, monitoring court decisions “and listening closely to both legal and technological experts,” Ouslis said. She added that Prime Minister Mark Carney’s government has also committed to making the distribution of non-consensual sexual deepfakes a criminal offence.
“In this fast-moving space, we believe it’s better to get regulation right than to move too quickly and get it wrong,” she said, noting that Ottawa is looking to learn from the European Union and the United Kingdom.
Slane said the European Union has been ahead of others in regulating AI and ensuring digital safety, but even at the “forefront” there is a sense that more needs to be done.
Experts say regulating content distributed by social media giants is particularly difficult because those companies aren’t Canadian. Another complicating factor is the current political climate south of the border, where U.S. tech companies are seeing reduced regulations and restrictions, making them “more powerful and feeling less responsible,” said Slane.
Although generative AI has been around for a few years, there’s been a “breakthrough” in recent months, making it easier to produce good-quality videos using tools that are mostly available for free or at a very low price, said Peter Lewis, Canada Research Chair in trustworthy artificial intelligence.
“I’ve got to say it’s really accessible to almost anybody with a little bit of technical knowledge and access to the right tools right now,” he said.
Lewis, who is also an assistant professor at Ontario Tech University, said that large language models such as ChatGPT have implemented safeguards in an effort to filter out harmful or illegal content.
But more needs to be done in the video space to create such guardrails, he said.
“You and I could watch the video and probably be horrified,” he said, adding “it’s not clear necessarily that the AI system has the ability to sort of reflect on what it has created.”
Lewis said that while he isn’t a legal expert, he believes existing laws can be used to combat the online glorification of hate and violence in the AmericanBigfoot videos. But he added the rapid development of generative AI and widespread availability of new tools “does call for new technological solutions” and collaboration between governments, consumers, advocates, social platforms and AI app developers to address the problem.
“If these things are being uploaded…we need really robust responsive flagging mechanisms to be able to get these things off the internet as quickly as possible,” he said.
Lewis said using AI tools to detect and flag such videos helps, but it won’t resolve the issue.
“Due to the nature of the way these AI systems work, they’re probabilistic, so they don’t catch everything.”
This report by The Canadian Press was first published Aug. 10, 2025.
Sharif Hassan, The Canadian Press