AI and Canada's proposed Online Harms Act

Canada wants to hold AI companies accountable with proposed legislation | GZERO AI

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.

So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is it puts the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.

It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.

Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down by the platform within 24 hours. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of the problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.

So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada, which are starting with the harms we already know about.

Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like the social media platforms that facilitate many of the harms AI creates.

I'm Taylor Owen and thanks for watching.
