Protection Against Online Hate Speech: Time for Federal Action

By: Emily Laidlaw & Jennifer Koshan, with Emma Arnold-Fyfe, Lubaina Baloch, Jack Hoskins, and Charlotte Woo

PDF Version: Protection Against Online Hate Speech: Time for Federal Action

Legislation Commented On: Canadian Human Rights Act, RSC 1985, c H-6

Editor’s Note

During Equity, Diversity and Inclusion (EDI) Week at the University of Calgary in February 2021, the Faculty of Law’s EDI Committee held a research-a-thon where students undertook research on the law’s treatment of equity, diversity and inclusion issues. Over the next few weeks, we will be publishing a series of ABlawg posts that are the product of this initiative. This post is the first in the series, which also closely coincides with the International Day for the Elimination of Racial Discrimination next week on March 21. The theme this year is “Youth Standing Up Against Racism”, which fits well with this initiative.

Introduction

On January 5th, 2021, Erin O’Toole, leader of the Conservative Party of Canada, tweeted “Not one criminal should be vaccinated ahead of any vulnerable Canadian or front line health worker.” His tweet unsurprisingly went viral. To date, the tweet has received 6.1k likes, 3.6k retweets and 4.8k comments. The tweet is representative of the kind of internet content we have grown increasingly and painfully accustomed to: content that is rhetorical, overblown, and often hateful, even if not explicitly directed at marginalized groups, and that occurs on a platform with global reach. When Erin O’Toole tweets, it is to an audience of 122.7k followers.

This post is not about Erin O’Toole’s tweet per se. Indeed, while his tweet dehumanizes prisoners and those with a criminal record, persons who are disproportionately Indigenous, it is not obvious, on its face, that it meets the legal standard of hate speech. Rather, this post is about what tweets like his represent in the struggle to regulate hate speech online: that so much of what we intuitively know to be wrong falls into a legal grey area, and that much of the harm is the mob pile-on that the original post inspires. In the case of the O’Toole tweet, many tweets in response have been removed by Twitter, but it is noteworthy that thousands of others addressed the harmful nature of his statements with tweets such as “prison health is public health”, recognizing the risk of COVID-19 transmission in prisons.

In this post we ask: what is, and what should be, the scope of legal protection against online hate speech? Are there gaps in the law that should be filled? There are a few reasons why we examine this question now. First, the Minister of Canadian Heritage, Steven Guilbeault, has indicated that the much-anticipated online harms legislation is expected to be introduced shortly (see, for example, his comments here and here, and the Supplementary Mandate Letter), so now is a good time to analyze the landscape of hate speech laws and identify law reform we hope the bill addresses.

Second, there is evidence that hate speech is on the rise, including reports during the pandemic of increasing racist violence. And social media provides a unique platform for the spread of hate speech and radicalization. Notably, the recent anti-mask rally in Calgary using tiki torches was promoted online with a photo from the white nationalist march in Charlottesville, Virginia in 2017. Alek Minassian, recently convicted of 10 counts of first-degree murder for his 2018 van attack on Toronto’s Yonge Street, said he was radicalized online into the incel subculture. The Pittsburgh Synagogue shooter was a frequent contributor to Gab, a platform known for its tolerance of extremist content. Months before the attack on the Capitol Building in Washington DC in January 2021, the Anti-Defamation League warned that Parler, a popular platform for many who participated in the attack that day, was becoming a haven for radicalization. While not all radical speech is hate speech, the two are often linked, and we highlight these examples to situate the extent of the problem of hate speech online. We also seek to debunk the argument that freedom of expression is (or should be) absolute, which is sometimes advanced in defence of hate speech.

Hate Speech – Current Protections and Gaps 

Hate speech is currently an offence under the federal Criminal Code, RSC 1985, c C-46, and is prohibited under some provincial and territorial (but not federal) human rights legislation. We will address each in turn.

The Criminal Code

Under the Criminal Code, sections 319(1) and 319(2) prohibit (1) the public incitement of hatred against an identifiable group that is likely to lead to a breach of the peace, and (2) the communication of statements that wilfully promote hatred against an identifiable group. An identifiable group is defined in section 318(4) to mean “any section of the public distinguished by colour, race, religion, national or ethnic origin, age, sex, sexual orientation, gender identity or expression, or mental or physical disability.” One major limitation of these provisions is that no prosecutions can be undertaken under section 319(2) without the consent of the Attorney General (see section 319(6)). In addition, judicial interpretation of section 319 has limited the application of the criminal hate speech provisions. In R v Keegstra, 1990 CanLII 24 (SCC), [1990] 3 SCR 697, a majority of the Supreme Court upheld section 319(2) as a reasonable limit on freedom of expression under the Charter, but they defined hatred narrowly to include “emotion of an intense and extreme nature that is clearly associated with vilification and detestation” (at 777). Writing for the majority, Chief Justice Dickson went on to say that:

Hatred is predicated on destruction, and hatred against identifiable groups therefore thrives on insensitivity, bigotry and destruction of both the target group and of the values of our society.  Hatred in this sense is a most extreme emotion that belies reason; an emotion that, if exercised against members of an identifiable group, implies that those individuals are to be despised, scorned, denied respect and made subject to ill-treatment on the basis of group affiliation. (at 777)

The criminal provision therefore only applies to the most egregious forms of hate speech, recognizing that limits on freedom of expression that result in criminalization should be relatively narrowly construed.

Canadian Human Rights Act

The Canadian Human Rights Act, RSC 1985, c H-6 (CHRA) formerly included a protection against hate speech. Section 13 stated that it was discriminatory to “communicate telephonically or to cause to be so communicated … by means of the facilities of a telecommunication undertaking … any matter that is likely to expose a person or persons to hatred or contempt by reason of the fact that that person or those persons are identifiable on the basis of a prohibited ground of discrimination.” Under section 3 of the CHRA, prohibited grounds of discrimination are “race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, disability and conviction for an offence for which a pardon has been granted or in respect of which a record suspension has been ordered.” In 2001, the CHRA was amended by adding subsection 13(2), which provided that subsection 13(1) applied to online material communicated by way of the Internet. However, section 13 was repealed by the Harper government in June 2013 by An Act to amend the Canadian Human Rights Act (protecting freedom), SC 2013, c 37, section 2.

The repeal of section 13 left a gap at the federal level. Unlike the criminal law, where the consent of the Attorney General is required, under section 13 any individual or group of individuals with reasonable grounds to believe that a person had engaged in a discriminatory practice could file a complaint with the Canadian Human Rights Commission (CHRA, section 40(1)). In addition, the scope of the hate speech provisions in human rights legislation has been interpreted more broadly than under the Criminal Code. In Canada (Human Rights Commission) v Taylor, 1990 CanLII 26 (SCC), [1990] 3 SCR 892, a companion case to Keegstra, Chief Justice Dickson defined “hatred” and “contempt” as “unusually strong and deep-felt emotions of detestation, calumny and vilification” (at 928). He noted that section 13 “may impose a slightly broader limit upon freedom of expression” than the Criminal Code does, but he nonetheless found “that the conciliatory bent of a human rights statute renders such a limit more acceptable than would be the case with a criminal provision” (at 928–29). Section 13 was thus seen as a reasonable limit on freedom of expression by a majority of the Court in Taylor. This finding was extended to subsection 13(2) and Internet communications in Lemire v Canada (Human Rights Commission), 2014 FCA 18 (CanLII). Justice Evans of the Federal Court of Appeal declined to accept the applicant’s arguments that Taylor should be distinguished in the case of online communications, stating that: “in view of the power of the Internet as a medium of communication, … I do not regard the ability and potential willingness of ISPs [internet service providers] to block or remove communications as in themselves sufficient to render section 13 more than a minimal impairment of section 2(b) rights” (at para 68).

Interestingly, in the debates before the House of Commons Standing Committee on Justice and Human Rights, which considered the Bill that repealed section 13, witnesses justified the repeal based on the broad scope of the provision (see here). The argument was that, in spite of the decisions in Taylor and Lemire, section 13 was poorly drafted and thus operated as a censorship tool against legitimate, albeit controversial, speech. Richard Moon’s report was influential; it reasoned that legal regulation of hate speech should be limited to criminal conduct (albeit prosecuted more often), and that hate speech should otherwise be addressed through alternative mechanisms, such as Internet Service Providers in their capacity as hosts, and press councils. As we will discuss below, in the digital world of 2021, the de facto regulators are social media platforms like Facebook and Twitter, and search providers such as Google, which set and moderate their own speech rules.

Provincial Human Rights Legislation

While section 13 was repealed federally, some provinces and territories include hate speech within their lists of prohibited activities. In Alberta, for example, section 3 of the Alberta Human Rights Act, RSA 2000, c A-25.5, prohibits the publication, issuance or display before the public of any statement, publication, notice, sign, symbol, emblem or other representation that either indicates discrimination against a person or a class of persons, or is likely to expose a person or a class of persons to hatred or contempt, based on protected grounds (which are race, religious beliefs, colour, gender, gender identity, gender expression, physical disability, mental disability, age, ancestry, place of origin, marital status, source of income, family status and sexual orientation). Similar provisions are included in human rights statutes in British Columbia (Human Rights Code, RSBC 1996, c 210, section 7), Saskatchewan (The Saskatchewan Human Rights Code, 2018, SS 2018, c S-24.2, section 14), and the Northwest Territories (Human Rights Act, SNWT 2002, c 18, section 13). These provisions are stand-alone protections against hate speech and do not need to be tied to other protected areas in human rights legislation, such as employment, tenancies, or services customarily available to the public (see Lund v Boissoin, 2012 ABCA 300 (CanLII)).

There has also been case law interpreting these provisions. For example, in Saskatchewan (Human Rights Commission) v Whatcott, 2013 SCC 11 (CanLII), the Supreme Court narrowed the hate speech provision in Saskatchewan’s human rights legislation by finding that the words “ridicules, belittles or otherwise affronts the dignity of” were overbroad and should be struck from the section. Applying the definition for hatred and contempt that was found to strike a proper constitutional balance in Taylor, the Court found that the overbroad wording in Saskatchewan’s Code did not “rise to the level of ardent and extreme feelings” that was required to achieve an appropriate constitutional balance (at para 89).

The Whatcott decision, as well as Keegstra and Taylor, indicate that the courts have been careful to construe hate speech provisions so as to limit their scope and their corresponding infringement on freedom of expression, while at the same time recognizing the crucial role these provisions play in responding to and preventing hatred and discrimination against marginalized groups.

However, a remaining issue with provincial human rights protections is a jurisdictional debate in the case law: do these provisions apply to online hate speech, or does the federal government have exclusive jurisdiction over the regulation of online speech under the Constitution Act, 1867?

In Elmasry and Habib v Roger’s Publishing and MacQueen (No. 4), 2008 BCHRT 378 (CanLII), the BC Human Rights Tribunal held that it did not have jurisdiction over that portion of a human rights complaint alleging a violation of the hate speech provision of the BC Human Rights Code in relation to an online publication. This holding was based on the federal government’s exclusive jurisdiction over communication undertakings under the Constitution Act, 1867 (at paras 47-50). The Tribunal did note that, at the time, section 13 of the CHRA covered Internet-based communications that were alleged to constitute hate speech (at paras 48-49).

Since the repeal of section 13, some provincial human rights tribunals have shown more willingness to take jurisdiction to hear complaints that relate to Internet-based communications. In Chilliwack Teachers’ Association v Neufeld, 2021 BCHRT 6 (CanLII), an allegedly homophobic and transphobic Facebook post led to a complaint of a discriminatory publication under section 7 of the BC Human Rights Code. The Tribunal held that case law in this area did not suggest that all communications conducted over the Internet fell within federal jurisdiction exclusively. While the Tribunal took judicial notice that “Facebook is a web-based social networking site”, it noted that it had no evidence “to conclude that it is a federal undertaking or that regulation of an individual’s activity on Facebook is subject to exclusive federal jurisdiction” (at para 91). It therefore dismissed the argument that it lacked jurisdiction to consider the complaint and permitted it to proceed to a hearing.

A similar case arose earlier in Alberta. Descalchuk v Amber Carnegie, 2019 AHRC 47 (CanLII) related to a Facebook post that was alleged to contravene section 3 of the Alberta Human Rights Act for being racist and hateful. The Chief Commissioner ultimately found that there was not enough evidence to support advancement of the complaint to a hearing. The question of whether a social media post fell within the purview of section 3 was left unanswered as the complaint was found to be unmeritorious (at para 13).

The case law is thus unclear on whether provinces may have concurrent jurisdiction to deal with some hate speech that is found in online publications and postings. At best, provincial human rights legislation may sometimes fill the gap left by the repeal of section 13, but it must be emphasized that this is an uneven area and that not all provinces and territories protect against hate speech in any event.

Intermediary/Platform Regulation

Another option is to seek removal of the content from social media or to have it de-indexed from search results. These entities are known as intermediaries because of their role in facilitating content sharing between third parties. Readers might be more familiar with the term platforms, which has been used more recently to refer to these entities’ power and influence in the marketplace. Content takedown is useful to arrest further circulation of hateful content and thereby minimize harm to members of the target groups. It is not a perfect mechanism, however. Once content goes viral, it is almost impossible to put the toothpaste back in the tube, and in any event there is harm caused by the initial post. Further, sometimes the wrong kind of content is taken down, such as posts documenting human rights abuses in Syria or Black Lives Matter posts aimed at calling out racism. Nevertheless, content moderation is an important and practical mechanism to cope with social media harms as a complement to law, and it communicates the importance of civic discourse.

In Europe, once a platform has knowledge or awareness that it is hosting unlawful content, including hate speech, it must “act expeditiously to remove or to disable access to the information” or risk liability for the underlying wrong (E-Commerce Directive, article 14). Canadian law is silent as to whether an intermediary might be liable for hate speech, although a judge may order that hate speech be removed by an internet provider (Criminal Code, section 320.1). In practice, most hate speech is regulated through the content moderation practices of the various platforms, set down through their terms of service. Each platform creates its own free speech rules, and thus platforms like Parler and Gab tolerate extremist content while Facebook does not. Content moderation is an important process to address harmful content, but it operates as a form of shadow regulation. This lack of formal regulation calls for closer scrutiny of platform practices and highlights that content moderation is not a complete regulatory solution to the problems of online hate speech. As a system of governance it is complex, inconsistent, and controversial (see here).

Support for Reinstating Section 13

With new online harms legislation expected, we hope that section 13 of the CHRA will be re-introduced in some form. There are several reasons why we argue for its reinstatement.

First, freedom of expression asks a lot of us. Members of society are asked to stomach deeply offensive speech based on faith in an idea – that the circulation of ideas serves a grander plan, that it helps in the quest for truth, that it helps us develop our sense of identity and self-worth, and that our democracy is strengthened by it. However, we do not all bear the burden of this system equally. Mary Anne Franks dismantles this traditional framework of free speech, calling it “free speech elitism.” The burden of free speech is primarily borne by marginalized groups. The impact on individuals targeted online is profound. Empirical research by Jon Penney shows the chilling effects of online abuse on freedom of expression and the rights of victims. For hate speech, the effect is that marginalized groups are unable to engage in the same way, or at all, in online spaces. If we are committed to freedom of expression, then we should equally be committed to enabling the right for everyone.

The response might be that the law provides this balancing mechanism through the Criminal Code hate speech provisions, but the definition is so narrow that it only serves to condemn the worst forms of hate speech. This is arguably the appropriate balance for criminal law, as recognized in Keegstra and through the requirement of the Attorney General’s consent to prosecutions as well as the Criminal Code’s intent requirement (the wilful promotion of hatred). However, this narrowness makes the case for the Canadian Human Rights Commission and Tribunal stronger: not as censorship bodies, but as institutions that enable freedom of expression and participation in society by protecting against, and providing remedies for, speech that is harmful to marginalized groups. Further, the Canadian Human Rights Commission and Tribunal do more than act as a gatekeeper for appropriate complaints and as an adjudicator in specific cases that meet the threshold for a hearing. The Commission also plays an important role in education and advocacy in the public interest. If society has a commitment to combat hate speech, then the Canadian Human Rights Commission and Tribunal should be enlisted to help achieve that goal. To invoke Keegstra once again, combatting hate speech requires multiple tools, and a human rights response is one of them. We also note that to be a more useful tool, human rights commissions and tribunals require enhanced resources to avoid backlogs and delays.

Second, and relatedly, the Ontario Human Rights Commission has commented that “rights on paper alone are not enough.” They must be administrable. At the time section 13 was repealed, alternative avenues through non-state actors were identified as ways to combat hate speech. This is problematic for a variety of reasons. First, in 2021 this operates as an outsourcing of human rights regulation of speech to private platforms, which not only set the rules for speech in their terms and conditions, but also create the framework for their adjudication. Let’s not kid ourselves. We need these platforms to moderate content – not only potentially illegal content such as hate speech, but also the great swath of legal but harmful content that can circulate in these spaces. And these platforms have the capacity to take down content at a speed that no court or tribunal can replicate. The video stream of the attack on the Christchurch mosque was quickly detected and hashed (essentially given a digital fingerprint) by Facebook, enabling it to block the video from being uploaded or to take it down.

The problems in the context of section 13 are that every platform has a different tolerance for extremist content, that hate speech is notoriously difficult to pin down as such, and that few platforms apply human rights principles in their operations (see here). This is not to say that platforms do not have a role to play. Rather, we argue that the existence of private parties with the capacity to regulate speech cannot be the reason for removing a public legal avenue to adjudicate rights.

Further, it is the obligation of states to protect human rights, and Canada is arguably failing to fulfil its duties around hate speech. For example, the International Convention on the Elimination of All Forms of Racial Discrimination (CERD), which Canada has ratified, provides that state parties “condemn all propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form, and undertake to adopt immediate and positive measures designed to eradicate all incitement to, or acts of, such discrimination” (article 4). The Committee on the Elimination of Racial Discrimination, which monitors state compliance with CERD, noted in its 2017 report on Canada that it had concerns about the rise of racist hate speech in this country and about Canada’s implementation of appropriate anti-discrimination provisions (at para 13). Although the scope of the Committee’s recommendations focused on hate crimes, the spirit of its report is certainly in keeping with the reintroduction of human rights protections against racist (and other) hate speech in Canada. This is especially so when we consider the limits on the criminalization of hate speech from a constitutional perspective.

Conclusion

As we finalize this post, it has been reported that eight people were killed in Atlanta on March 16, most of them Asian American women who worked in massage parlours. This attack draws attention to the increase in hate crimes against members of Asian communities in 2020. It is a reminder that hate speech can lead to violence. Hate speech has an often brutal impact on members of marginalized groups – often those who experience intersecting inequalities – and we believe it should not be tolerated by a society that has commitments to equality and anti-discrimination norms.

We call for a new hate speech provision in the CHRA so that online hate speech is clearly covered by federal human rights legislation. Expression engages not only freedom, but equal participation in society. In our view, this balance can be meaningfully achieved by basing a provision on the holdings in Taylor and Whatcott. If those decisions are followed, the new provision should continue to protect the freedom to “shock, offend and disturb”, which is also consistent with the protection of individuals and groups who want to draw attention to the cruel harms of racism, colonialism and other forms of oppression. But it should be contrary to human rights legislation in every Canadian jurisdiction to engage in hate speech against members of protected groups – in other words, to expose them to the unusually strong and deep-felt emotions encompassed by the constitutionally protected definition of hatred and contempt. The regulation of hate speech requires multiple tools, and human rights law is a key tool in the toolbox given its remedial focus, potential systemic impacts, and the relative accessibility of its procedures – although as noted above, human rights systems also require better resourcing, and we urge the federal government to heed this advice as well.


This post may be cited as: Emily Laidlaw & Jennifer Koshan, with Emma Arnold-Fyfe, Lubaina Baloch, Jack Hoskins, and Charlotte Woo, “Protection Against Online Hate Speech: Time for Federal Action” (March 19, 2021), online: ABlawg, http://ablawg.ca/wp-content/uploads/2021/03/Blog_EL_JK_Online_Hate_Speech.pdf

