By Canadian Press on March 27, 2025.
VANCOUVER — Canadian police patrolling corners of the dark web are well aware of the nefarious ways criminals commonly exploit artificial intelligence.
There’s deepfake pornography. Voice impersonation. Romance scams that turn into financial fraud.
But recently there’s been a new twist — criminals offering to “jailbreak” the very algorithms that form the architecture of AI’s large language models, or LLMs, tearing down their safeguards so they can be retasked for criminal purposes.
Call it tech support for cybercriminals.
“There are also these LLMs that cyber criminals themselves build,” said Chris Lynam, the director general of the RCMP’s National Cyber Crime Coordination Centre.
“There is this … whole underground cybercriminal community that operates on forums, but they also operate on platforms like Telegram,” he said, where they “advertise” a channel or service and explain how to procure it.
“Cyber criminals wouldn’t invest money in trying to jailbreak (AI) out of just goodwill. They are trying to commercialize this.”
AI-related crimes involving fraud and impersonation are increasingly commonplace and have become the subject of public awareness campaigns. But experts and police warn that additional safeguards may be needed to protect the public from new criminal frontiers of the technology.
These include the jailbreaking of AI models past their guardrails, which could result in AI giving instructions on how to defraud charities, or how to build a bomb, said AI researcher Alex Robey.
Robey, a postdoctoral researcher at Carnegie Mellon University, has focused on the jailbreaking of large language models such as ChatGPT.
He warns that AI could commit crimes unprompted, “when AI itself has its own goals or intentions that align with an agenda that is not helpful to humans.”
For example, in Florida, a mother is suing a company that allows users to create AI companions, alleging that one of the company’s chatbots engaged in sexually charged conversations with her 14-year-old son before pushing him into killing himself last year.
Robey said there is “a lot of research” into how artificial intelligence may develop its own intentions and might mislead or harm people, particularly in robotics where the bot could physically interact with people in the real world.
“That’s where the enormous risk lies here, in the sort of weaponization of all of this stuff,” he said, pointing to the potential use of AI and robots in military action.
Robey says legislation is needed to ensure better safeguards for the use of artificial intelligence.
“It’s almost like it’s on the individual labs that are developing these models to self-regulate, which is not ideal,” he said in an interview.
“It’s the Wild West in all of this stuff right now. There’s no driving sort of principles really in place.”
NO QUICK FIXES
The launch of the chatbot ChatGPT in late 2022 was a dam-break moment for AI, thrusting it into the public consciousness with its uncanny ability to replicate humanlike conversation and writing.
Just over two years later, authorities in the United States said it was used to help orchestrate a high-profile car bombing. Matthew Livelsberger, 37, died in a Tesla Cybertruck that exploded outside a Trump hotel in January.
Police say an investigation of Livelsberger’s searches through ChatGPT indicates he was looking for information on explosive targets and ammunition.
“This is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device,” said Kevin McMahill, sheriff of the Las Vegas Metropolitan Police Department. He called the use of generative AI a “game-changer.”
Robey pointed to the Trump hotel explosion as a recent example of jailbreaking AI for illegal purposes.
Before ChatGPT, there were already image-based AI tools, such as DALL-E, a model released in 2021 capable of producing realistic or stylized images from text prompts, or instructions.
The criminal potential of that technology, too, swiftly materialized.
In 2023, a 61-year-old Quebec man was jailed for using artificial intelligence to produce deepfake videos of child pornography. No actual children were depicted, but Steven Larouche had broken the law banning any visual representation of someone depicted as being under the age of 18 engaged in explicit sexual activity.
Provincial court judge Benoit Gagnon wrote in his ruling that he believed it was the first case in the country involving deepfakes of child sexual exploitation.
Lynam said “cybercrime is arguably the most rapidly evolving type of crime.”
“It’s a constant evolution and we’ve got to understand the trends, or try to understand what technologies the criminals are going to adopt so that we can counter that,” he said in an interview.
Canadian authorities including Lynam are warning that it can be difficult to differentiate AI-generated content from legitimate sources, echoing international industry warnings.
Deloitte’s Centre for Financial Services forecast last year that AI could cause fraud losses in the United States alone to more than triple, reaching US$40 billion by 2027.
It cited the case of an employee at a Hong Kong firm who sent US$25 million to fraudsters in January last year after receiving instructions from her chief financial officer in a video meeting attended by other staff. But it wasn’t the executive at all; it was a deepfake video impersonation.
“(The) ready availability of new generative AI tools can make deepfake videos, fictitious voices, and fictitious documents easily and cheaply available to bad actors,” the research note said, warning of the “democratization” of fraud-enabling AI software.
“There is already an entire cottage industry on the dark web that sells scamming software from US$20 to thousands of dollars.”
Lynam said AI jailbreaking services being sold on the dark web “shows you how adaptive and innovative these folks are, and the reason why we’ve got to be just as adaptive, or try and keep pace with them.”
The RCMP launched the National Cyber Crime Coordination Centre in 2020 in an attempt to combat the explosion of internet crime. Lynam said it has since seen an uptick in AI-facilitated crime, from phishing emails to deepfake investment scams.
He said the RCMP has also launched a variety of campaigns to target at-risk individuals before they develop into cybercriminals.
“This is not going away. We’ve got to figure out the best ways to reduce the impact on Canadians.”
‘NOT AGILE ENOUGH’
An immediate legislative fix isn’t likely.
Because Canada’s Parliament was prorogued on Jan. 6, all bills in progress died, including the Artificial Intelligence and Data Act, which was to be Canada’s first attempt at broad-based regulation of artificial intelligence systems.
The government had said the act would “ensure that AI systems deployed in Canada are safe and non-discriminatory and would hold businesses accountable for how they develop and use these technologies.”
Robey said governments seem well intentioned, while being “fairly confused at this moment in time and not agile enough to keep up.”
“It’s a fairly frustrating set of conversations to be a part of, because I don’t think there’s a lot of understanding about what the technology is, what it can do and what the future sort of risks are,” he said.
In lieu of legislation or regulation, authorities have launched awareness campaigns to educate and encourage people to protect themselves.
In January, the BC Securities Commission launched an awareness campaign highlighting how artificial intelligence tools are being used to defraud Canadians.
The centrepiece advertisement for the $1.8-million campaign features deepfaked six-fingered accountants, as well as nods to AI voice cloning and romance scams.
The campaign tells would-be investors to reduce AI-related risks by looking for negative reviews of investment offerings, using the National Registration Search to ensure companies are legitimate, and considering meeting people in person to confirm their identities.
Pamela McDonald, a spokeswoman for the commission, said that because AI fraud is often borderless, it’s difficult to use the usual enforcement tools.
“The problem is that for us as a regulator, the people using this technology are largely offshore, organized crime.”
She said criminals overseas were often “out of reach” of Canadian regulators and law enforcers.
“So, the best thing we can do right now for people is to educate them,” she said. “The best thing we can do is use a very edgy, bold, out-there campaign to break through all the other noise … to help people understand what the problem is and what they can do to protect themselves.”
— With files from The Associated Press.
This report by The Canadian Press was first published March 27, 2025.
Brieanna Charlebois, The Canadian Press