Author Archives: Emily Laidlaw

About Emily Laidlaw

Associate Professor. Member of the Alberta Bar.

The Online Harms Bill – Part 2 – Private Messaging

By: Sanjampreet Singh and Emily Laidlaw

Matter Commented On: Online Harms Bill C-63

PDF Version: The Online Harms Bill – Part 2 – Private Messaging

This is the second in a series of posts about the Online Harms Bill C-63, proposed federal legislation whose stated aims are to reduce harmful content online, hold social media platforms accountable, promote safety and child protection, empower users and victims, and increase transparency.

This post examines the social media services that would be regulated by the proposed Online Harms Act (Act) and potentially investigated by the Digital Safety Commission. More specifically, it focuses on what is excluded from the Bill – private messaging – a “wicked problem” in online harms where one is damned if one includes it and damned if one does not. We propose a middle path.


The Online Harms Bill – Part 1 – Why We Need Legislation

By: Emily Laidlaw

Matter Commented On: Bill C-63, An Act to enact the Online Harms Act, to amend the Criminal Code, the Canadian Human Rights Act and An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2024 (Online Harms Bill)

PDF Version: The Online Harms Bill – Part 1 – Why We Need Legislation

This is the first in a series of posts that will unpack the Online Harms Bill C-63. In this first post, I explain how and why we got here: there is significant misunderstanding about what this Bill is about and why we might need it, and that context should come before any examination of the Bill’s merits. It is also important to contextualize the Bill within the law of intermediary liability, the law that applies to technology companies that facilitate transactions between third parties. Unlike many other jurisdictions, Canada operates in a relative legal vacuum in this space.

Online Age Verification is Crucial and Bill S-210 Gets It Wrong

By: Emily B. Laidlaw

Matter Commented On: Bill S-210, An Act to restrict young persons’ online access to sexually explicit material, 1st Sess, 44th Parl, 2021.

PDF Version: Online Age Verification is Crucial and Bill S-210 Gets It Wrong

Age verification is a tool that verifies a user’s age before permitting them to access certain online content, websites, or apps. It is primarily advocated as a means of verifying the ages of users and creators on pornography sites. However, age verification has wider application and has been proposed as a solution to an array of child safety issues on social media, ranging from algorithms pushing content about eating disorders, self-harm, misinformation, and viral “challenges” to luring and cyber-bullying. Many platforms ban users under 13 years old and/or have child protection measures for 13-17-year-olds, such as blocking direct messaging, limiting screen time, or curating age-appropriate content. TikTok, for example, has such tools, but it relies entirely on users self-verifying their age and on encouraging parental oversight (through its Family Pairing service).

The Federal Government’s Proposal to Address Online Harms: Explanation and Critique

By: Darryl Carmichael and Emily Laidlaw

PDF Version: The Federal Government’s Proposal to Address Online Harms: Explanation and Critique

Commented On: The Federal Government’s proposed approach to address harmful content online

In late July, the Federal Government introduced its proposal for online harms legislation for public feedback. It comprises a discussion paper outlining the government’s approach to regulating social media platforms and a technical paper providing more detail on the substance of the proposed law. The proposal is part of a suite of law reform efforts by the Canadian government concerning what can broadly be categorized as platform regulation and content regulation issues. They include Bill C-10 to reform broadcasting laws, which stalled (for now at least) when it hit the Senate floor, and proposed legislation to combat hate speech and hate crimes. The timing of the online harms and hate speech proposals, so close to the election call, has been a point of contention. Regardless of the election result in September, this proposal is worth analyzing because the Canadian government will need to prioritize law reform in this area. Online harms legislation is sweeping the globe, and Canada is well overdue to address these issues. For better or worse (as remains to be seen), new laws have been proposed or passed in Europe, the United Kingdom (UK), Australia, India, and Turkey, to name a few.

Protection Against Online Hate Speech: Time for Federal Action

By: Emily Laidlaw & Jennifer Koshan, with Emma Arnold-Fyfe, Lubaina Baloch, Jack Hoskins, and Charlotte Woo

PDF Version: Protection Against Online Hate Speech: Time for Federal Action

Legislation Commented On: Canadian Human Rights Act, RSC 1985, c H-6

Editor’s Note

During Equity, Diversity and Inclusion (EDI) Week at the University of Calgary in February 2021, the Faculty of Law’s EDI Committee held a research-a-thon where students undertook research on the law’s treatment of equity, diversity and inclusion issues. Over the next few weeks, we will be publishing a series of ABlawg posts that are the product of this initiative. This post is the first in the series, which also closely coincides with the International Day for the Elimination of Racial Discrimination next week on March 21. The theme this year is “Youth Standing Up Against Racism”, which fits well with this initiative.

Introduction

On January 5, 2021, Erin O’Toole, leader of the Conservative Party of Canada, tweeted “Not one criminal should be vaccinated ahead of any vulnerable Canadian or front line health worker.” His tweet unsurprisingly went viral. To date, the tweet has received 6.1k likes, 3.6k retweets, and 4.8k comments. The tweet is representative of the kind of internet content we have grown increasingly and painfully accustomed to: content that is rhetorical, overblown, and often hateful, even if not explicitly directed at marginalized groups, and that occurs on a platform with global reach. When Erin O’Toole tweets, it is to an audience of 122.7k followers.

This post is not about Erin O’Toole’s tweet per se. Indeed, while his tweet dehumanizes prisoners and those with a criminal record, persons who are disproportionately Indigenous, it is not obvious, on its face, that it meets the legal standard of hate speech. Rather, this post is about what tweets like his represent in the struggle to regulate hate speech online: so much of what we intuitively know to be wrong falls into a legal grey area, and much of the harm lies in the mob pile-on that the original post inspires. In the case of the O’Toole tweet, Twitter removed many of the tweets posted in response, but it is noteworthy that thousands of others addressed the harmful nature of his statements with tweets such as “prison health is public health”, recognizing the risk of COVID-19 transmission in prisons.
