By: Emily Laidlaw
PDF Version: Pritchard v Van Nes: Imposing Liability on Perpetrator Zero of Defamatory Facebook Posts Gone Viral
Case Commented On: Pritchard v Van Nes, 2016 BCSC 686
Justice Saunders of the British Columbia Supreme Court recently decided Pritchard v Van Nes, 2016 BCSC 686 (Pritchard), concerning the liability of individuals not only for their Facebook posts, but also for how their “friends” react to those posts, whether through comments, sharing or otherwise distributing them. This case asks: if you start the fight, are you liable for the pile-on? The Court’s analysis could have significant repercussions for the uneasy balance between the right to reputation and freedom of expression, arguably tipping the balance in favour of reputation in stark departure from recent Supreme Court of Canada cases on defamation (see Crookes v Newton, 2011 SCC 47; Grant v Torstar Corp., 2009 SCC 61; WIC Radio Ltd. v Simpson, 2008 SCC 40).
Facts
Pritchard involves a claim in nuisance and defamation, although this post will focus only on the defamation aspect of the claim. The plaintiff obtained default judgment in June 2014, and the reasons for judgment here concern the plaintiff’s application for a permanent injunction, an assessment of damages and special costs.
Pritchard and Van Nes are neighbours in Abbotsford and have the kind of neighbourly relationship we all fear. Their dispute dates back to 2011, when the Van Nes family installed a fish pond with a waterfall in their backyard. The plaintiff testified that the noise from the fish pond was loud, making sleep difficult for him and his wife. The plaintiff complained to the defendant, and this sparked the deterioration in their relationship. Several other incidents were reported – the defendant’s dog defecating in the plaintiff’s yard, loud parties, blocked driveways – all of which are more relevant to the nuisance claim than to defamation, but they certainly contextualise the tension that led to the Facebook posts by the defendant.
On June 9, 2014, the defendant posted a picture on Facebook of the plaintiff’s backyard with the following message:
Some of you who know me well know I’ve had a neighbour videotaping me and my family in the backyard over the summers…. Under the guise of keeping record of our dog…
Now that we have friends living with us with their 4 kids including young daughters we think it’s borderline obsessive and not normal adult behavior…
Not to mention a red flag because Doug works for the Abbotsford school district on top of it all!!!!
The mirrors are a minor thing… It was the videotaping as well as his request to the city of Abbotsford to force us to move our play centre out of the covenanted forest area and closer to his property line that really, really made me feel as though this man may have a more serious problem. (para 22)
There was no video camera (para 23), and the mirror referred to wasn’t for spying on the defendant, but rather was a feng shui ornament (para 19). The Court accepted the plaintiff’s evidence that he never made a complaint to the municipality about the swing set (para 20). The defendant’s post prompted 48 comments by 36 friends and 9 further comments by the defendant (para 23). The comments, including those of the defendant, referred to Mr. Pritchard as “a ‘pedo’, ‘creeper’, ‘nutter’, ‘freak’, ‘scumbag’, ‘peeper’ and a ‘douchebag’” (para 24). Mr. Pritchard’s name, occupation and workplace were all identifiable in the conversation (para 73). This was, in effect, a conversation implying that the plaintiff was a paedophile and unfit to teach, which defamatory meaning Justice Saunders accepted (para 74). The defendant described her post and comments as “venting” (para 41), but the audience of her venting was sizeable. She actively conversed with 36 friends, which would have been visible to her 2,059 friends (para 75), and further visible to the countless other friends of friends who see the posts their friends react to (thanks to the Facebook upgrade from “liking” to “reacting” in 2016). The defendant had also set her privacy settings to public, so anyone could see the comments posted about the plaintiff. The Facebook post was up for 27.5 hours before the defendant deleted it (para 31), but it continued to appear on timelines through friends who had liked or commented on the post (para 32).
Mr. Pritchard is a middle school music teacher. One “friend” of the defendant, Rick Parks, posted a suggestion that the defendant send the picture to the plaintiff’s principal, and advised that he had shared the defendant’s post on his timeline (para 25). Mr. Parks later sent an email to the principal attaching the image and, amongst the allegations made against the plaintiff, commented: “I think you have a very small window of opportunity before someone begins to publicly declare that your school has a potential paedophile as a staff member. They are not going to care about his reasons – they care that kids may be in danger.” (para 26)
The injury to Mr. Pritchard’s reputation has been palpable (paras 33-38). Before these posts he was an active member of his workplace and community, working in an extra-curricular capacity with students in concert bands, and by all accounts growing the school music program significantly (para 10). After the Facebook post, Mr. Pritchard no longer enjoys teaching, withdrew from school programs, guards his interactions with students and dreads any public performances. His employment opportunities elsewhere are now limited. One student was removed from his music programs, while a few neighbours have made comments, such as “I thought I knew Doug, but I guess I didn’t know the other side of him” and “You know, your husband could get fired” (para 37).
That the original post by the defendant was defamatory is straightforward. This blog post will not focus on those aspects of the judgment. Rather, it is the Court’s analysis of the concept of publication, used to impose liability on the defendant for the posts and shares of her “friends”, that requires unpicking. This is particularly pressing because the sting of the libel in this case is something all too familiar online – the sting isn’t just what the defendant posted, but what all her friends posted, some of it merely cruel, some of it defamatory in its own right. It was a virtual lynching, and unfortunately the parameters of defamation law fit uneasily with such a scenario. This case highlights the need for more comprehensive defamation reform (stay tuned for the work of the Law Commission of Ontario).
Issues
The Court analysed two separate, but related issues concerning publication. First, whether the defendant is liable for re-publication of the defamatory post (and comments) through her “friends” sharing the post, her friends of friends seeing the post in their timeline, and for Mr. Parks’ letter to the principal. Second, whether the defendant is liable for any of the defamatory comments by her “friends” in response to her post.
The Meaning of Publication
Publication is central to defamation law. In order for a comment to be defamatory it must be published, meaning it must be communicated to at least one person other than the plaintiff. It is clear that the defendant published her own posts about the plaintiff. What is less clear is whether she published the defamatory comments of her “friends” commenting on her post, sharing it and otherwise disseminating it in the wide platform that is Facebook.
Publication must be deliberate, in the sense that a person must “knowingly be involved in the process of publishing the relevant words” (see Crookes at para 21, drawing on Bunt v Tilley, [2006] 3 All ER 336 (QB)). Thus, those who play a “passive instrumental role” (Bunt at para 23) are not liable. What is passive? Under the UK case law, to which most Canadian defamation cases refer, once you become aware of the defamatory content (actual or constructive knowledge), have the power to remove the content from circulation and fail to do so, you are treated as liable for the continued publication of the work. You weren’t originally a publisher, but you become a publisher of the continued circulation (see Carter v BC Federation of Foster Parents Association, 2005 BCCA 398). This is the concept of knowledge and control that underpins the common law governing publication. A seminal case is Byrne v Deane, [1937] 1 KB 818, where a defamatory notice was posted on a golf club notice board. The club directors were aware of the notice and failed to remove it, which amounted to publication of the notice.
It is this indirect involvement in the publication process that absolves innocent disseminators of liability for publication. This defence protects vendors, librarians, agents and the like from liability for publishing defamatory content as long as they did not know, and had no reason to suspect, that the publication was defamatory. In the UK, this defence has been codified in s. 1 of the Defamation Act 1996 (see also the new website operators defence in s. 5 of the Defamation Act 2013). In the internet context, this aspect of defamation law has increasingly been used (and strained) to address the liability of online intermediaries – online service providers such as Facebook, Twitter and Reddit, internet service providers (ISPs) and search engines (mainly Google) – for content that, while they make it available in some way, they did not create. A case like Pritchard raises a more complicated question about the liability of individuals who are both content creators, in posting defamatory content themselves, and hosts or facilitators of the defamatory posts of others. The case law that has developed to address online intermediaries is helpful, but does not seamlessly apply to this context.
In the background are the liability frameworks created in Europe and the USA to provide safe harbours from liability for online intermediaries. The concept of passivity underpins both frameworks. The EU’s E-Commerce Directive 2000/31/EC creates three categories of information service providers: mere conduits, caches and hosts. The risk of losing the safe harbour is greatest for hosts of content. Similar to the common law governing publication, if a host knows or is aware it is hosting unlawful content, it is obligated to remove or disable access within a reasonable period of time. The USA, in contrast, has codified broad immunity for intermediaries under s. 230 of the Communications Decency Act, 47 USC.
Republication is another issue. Generally, an individual is not liable for the republication of their defamatory words by a third party. There are exceptions. The focus is on whether the original publisher had responsibility for the republication, in the sense of whether they had control over it, authorised it or participated in it. Relevant here is that a defendant may be liable for republication where “the repetition was the natural and probable result of his or her publication” (Pritchard at para 73). This imports foreseeability into the determination of responsibility for republication: if a reasonable person would have expected that the defamatory comment would be republished, then the defendant is liable for the republication (see Brown on Defamation (looseleaf), 7.5(4)). What is a social networking site like Facebook to a concept such as republication? The space has the social informality of pub talk with the permanence of print. And things go viral, like the post at issue in Pritchard, although in the grand scheme of things this post did not go viral the way other cases of shame and abuse have (think of Zoe Quinn and Anita Sarkeesian). In those cases the attacks were worldwide rather than local, although some of the most damaging harm to reputation occurs at a local level, as experienced by Pritchard.
Republication
The Court in Pritchard held that the defendant implicitly authorised republication of the defamatory post (including comments) via the sharing of the post by others, the visibility of the post to non-friends, and the publication of the letter by Mr. Parks to the plaintiff’s principal. I suggest there are two analytical flaws in the Court’s reasoning. First, the Court found implicit authorisation in the sheer fact that the defendant used Facebook:
In my view the nature of Facebook as a social media platform and its structure mean that anyone posting remarks to a page must appreciate that some degree of dissemination at least, and possibly widespread dissemination, may follow. This is particularly true in the case of the defendant, who had no privacy settings in place and who had more than 2,000 “friends”. The defendant must be taken to have implicitly authorized the republication of her posts. (para 83)
By using Facebook, therefore, any post by an individual puts them at risk of liability for what their friends do with it. There is some logic to this argument, particularly to address the type of mob attacks we are seeing online; an individual takes the risk by using Facebook and posting the defamatory content, and should be held responsible for any republication that results. Support for this approach is the principle that an individual is liable for any republication that is a natural and probable result of the original publication, which is arguably the case for any post on social media. The concern is the unpredictability of this. It might be foreseeable that someone would share your post, as sharing is a staple of Facebook interactions, but it is arguably not foreseeable what else someone might do with it.
More troubling is the liability imposed on the defendant for the republications by Mr. Parks, because it imposes a duty to speak up. The Court concluded that Mr. Parks’ comment that he had shared the post on his page, and his suggestion “why don’t we let the world know” (para 88), fixed the defendant with knowledge that Mr. Parks intended to republish the post. The defendant’s failure to speak up – her silence – led the Court to conclude that she authorised any republication by Mr. Parks, including the email he sent to the principal. Liability for failure to take positive steps is a component of publication in defamation law, albeit a narrow one, and it is dependent on knowledge and control, but it is rarely if ever invoked to impose an obligation to speak. There are potentially limiting factors in the judgment – the fact that the defendant had no privacy settings in place, and had more than 2,000 “friends”.
More generally in tort law, liability for a failure to act is only imposed in exceptional circumstances, and for good reason. It is a difficult duty to fulfil. In this case we have the benefit of hindsight – Mr. Parks did act by sending the email to the principal – but for users of social media, with all the hyperbole and jest involved, it won’t always be clear. This is important, because at what point did the interaction crystallise into a duty to act? Or, to be specific, a duty here to “warn Mr. Park’s not to take measures on his own” (para 90)? What if he had said “let’s shame him”, or made a comment that was borderline joking, such as “we need to bring back the scarlet letter for this guy”? This also requires a peculiar form of action, namely imposed speech. Would the defendant replying “No, no, no, don’t do anything, I’m just venting” have been sufficient to remove liability? What if her comment was not convincing? What timeline is considered reasonable to expect a reply? It is one thing to impose liability for a failure to act, for example a failure to remove defamatory content (as seen with intermediary liability). It is another to impose liability for a failure to speak up – this imposes a speech requirement with specific content characteristics. The latter duty would be difficult to fulfil.
Liability for “Friends’” Comments
The Court then analysed whether the defendant should be liable for the comments posted by her “friends” on her Facebook post. As the Court noted, this is an “emerging legal issue in Canadian law.” (para 91) I suggest that the Court erred in its interpretation of the law and in its application. Of particular concern is the Court’s imposition of liability on the defendant on the basis that she ought to have known her friends would post defamatory comments.
The Law
The Court drew, in particular, on the reasons of Deschamps J., concurring in the result in Crookes, concerning liability for sharing hyperlinks that lead to defamatory content. The majority in Crookes created a bright-line rule: only where the act of sharing the hyperlink repeats the defamation is it publication. The simple act of sharing a hyperlink, without more, is not publication. This was rooted in an analysis of the importance of the internet to freedom of expression and the critical role of hyperlinking in its use. In their joint concurring judgment, McLachlin C.J. and Fish J. argued against the bright-line rule, suggesting rather that, in some instances, where there is endorsement or adoption of the defamation, there is publication. Deschamps J., for her part, argued that hyperlinking should not be excluded from the publication rule, and that there might be publication if an individual “makes defamatory information readily available to a third party in a comprehensible form” (Crookes at para 59, discussed in Pritchard at para 94). If a case is to depart from the majority in Crookes, I suggest the joint concurring judgment of McLachlin C.J. and Fish J. is preferable, as Deschamps J.’s concept of “readily available” is too woolly to be of practical guidance.
The internet law cases relied on by Deschamps J. concern intermediary liability – liability imposed on the host of a website or platform on which a third party made the defamatory comments. The cases, in assessing knowledge and control, ask questions such as the following. Was the defendant notified of the defamatory content? Did the defendant have control over the defamatory content such that they could have, and should have, removed it? Typically, this might involve the host of an online forum who is notified that a user posted defamatory comments and fails to remove them. This is a complicated area of the law, not least because it is not so simple to impose liability post-notification for failure to act. When is notice deemed sufficient? How detailed does the notice need to be? Is actual notice required, or does constructive notice count? When is the party deemed to know about the defamatory comment? Is it upon notice alone, or does the party need evidence of unlawfulness before action is required? What if there is conflicting evidence? There is some unease going down this road, because it forces the party into a quasi-judicial capacity, assessing the merits of a defamation claim and then making a decision with powerful results: the information remains accessible or is removed from circulation. This directly implicates the right to receive and impart information. Europe has wrestled with these issues for over fifteen years, while Canada is at a nascent stage of development, although the case law suggests a similar approach is emerging, albeit inconsistently applied (see Carter and Crookes; but see Baglow v Smith, 2015 ONSC 1175). The complications of these issues were smoothed over in the analysis in Pritchard, which failed to engage with their more difficult aspects.
The Court’s error is in what it frames as the ‘passive instrument test’ (para 107). Passivity is best understood as the lens through which deliberateness is assessed: one must show a deliberate act in making the content available, and the more passive the activity, the harder it is to find deliberateness. One way that deliberateness is found is through notice – once a party knows it is making available content that is defamatory, and it has the power to remove it, it can be found to be deliberately choosing to continue publishing it. On passivity, the Court cites Weaver v Corcoran, 2015 BCSC 165 (paras 102-104), a case that considered, in part, whether the National Post should be liable for posts by its readers in the comments section. The Court noted the passive role played by the newspaper concerning the comments section, and the unrealistic expectation that it would pre-approve every comment before posting. Since the newspaper removed the offending comments once they came to its attention, it was not a publisher. Had it failed to remove the comments, the Court would have concluded that the National Post was liable for continued publication of the defamation: “[o]nce the offensive comments were brought to the attention of the defendants, however, if immediate action is not taken to deal with these comments, the defendants would be considered publishers as at that date.” (para 104, citing Weaver)
Another way that passivity is assessed is through the nature of the activity. Some activities are too passive for there to be liability. Most of the UK case law considers, or is influenced by, the categories created in the E-Commerce Directive, discussed above, wherein a safe harbour is provided, at varying levels of protection, depending on whether the activity is that of a mere conduit, cache or host. For example, the Court considered the case of search engines, a complicated and developing area of the law. Not all of what Google does in relation to its search function is passive. It depends on the activity. Is the complaint about the auto-complete function? Search snippets? Links to articles in its search results? Most are automated, but what are Google’s obligations once it is notified of a problem? In Metropolitan International Schools v Google, [2009] EWHC 1765 (QB), discussed by the Court in Pritchard (para 106), Eady J. investigated the liability of Google for the snippets returned in search results. Eady J. concluded, amongst other things, that a search engine is not a publisher at common law, whether before or after notification of a defamation claim, and is not analogous to a website or an ISP, the search engine having no input over the search terms entered and the process of publication being automated. The peculiarity of search engines bears repeating. It invites an aspect of automation that is different from the situation here and, in fact, different from ISPs and hosts. Automation goes to passivity, but is less helpful for a case such as Pritchard, and these differences, I suggest, should have been more carefully teased out in the analysis.
The Court summarized the law as follows:
In summary then, from the forgoing law it is apparent that Carter, Weaver, and Niemela, consistent with Deschamps J.’s reasons in Crookes, provide support for there being a test for establishing liability for third party defamatory material with three elements: 1) actual knowledge of the defamatory material posted by the third party, 2) a deliberate act that can include inaction in the face of actual knowledge, and 3) power and control over the defamatory content. After meeting these elements, it may be said that a defendant has adopted the third party defamatory material as their own. (para 108)
I suggest this confuses the test. Many of the cases considering publication in the digital age have involved intermediaries, rather than, as here, a defendant who kick-started a defamatory conversation. Such intermediaries are analogous to editors and publishers of newspapers, for example, who would be liable for a publication but would not necessarily know of the defamatory content. This is different from individual interactions. In fact, this difference was noted in the oft-cited Bunt, but has been insufficiently teased out in Canadian jurisprudence:
Of course, to be liable for a defamatory publication it is not always necessary to be aware of the defamatory content, still less of its legal significance. Editors and publishers are often fixed with responsibility notwithstanding such lack of knowledge. On the other hand, for a person to be held responsible there must be knowing involvement in the process of publication of the relevant words. It is not enough that a person merely plays a passive instrumental role in the process. (para 23)
Three things should be clarified: passivity, knowledge and control. All of these go to the question of whether the defendant deliberately published the defamation. I suggest the test is:
- Did the defendant know, or should the defendant have known, of the existence of the defamatory content?
- Was there a request to remove the material, or did the defendant’s behaviour otherwise show consent or approval of the continued publication?
- Did the defendant have control over the content? If so, did the defendant fail to remove the content within a reasonable period of time?
This test applies uneasily to the circumstances of this case, as seen in the reasoning of the Court, because there was no request to remove the content, as there is in most cases involving online hosts. Rather, the question is whether the defendant, upon gaining knowledge of the defamatory posts by her friends, approved their continued publication. The Court concluded:
I find as a matter of fact that Ms. Van Nes acquired knowledge of the defamatory comments of her “friends”, if not as they were being made, then at least very shortly thereafter. She had control of her Facebook page. She failed to act by way of deleting those comments, or deleting the posts as a whole, within a reasonable time – a “reasonable time”, given the gravity of the defamatory remarks and the ease with which deletion could be accomplished, being immediately. She is liable to the plaintiff on that basis. (para 109)
What is knowledge in such a situation? When did the defendant know that her friends were posting defamatory comments? And what is “immediately”, especially in circumstances where knowledge isn’t tied to notice?
Ought to Know Test
The most troubling aspect of the reasoning in Pritchard is the liability imposed on the defendant because she should have anticipated what was going to happen. This comes close to making the defendant responsible simply for starting the pile-on. The Court stated:
Furthermore, I would find that in the circumstances of this case there ought not to be a legal requirement for a defendant in the position of Ms. Van Nes having actual knowledge of the existence of defamatory comments by her “friends” as a precondition to liability. The circumstances were such that she ought to have anticipated such posts would be made. I come to this conclusion for two reasons: the nature or structure of a social medium platform, and the content of Ms. Van Nes’ contribution to the posts. (para 110)
It is concerning, from the perspective of the constitutional value of freedom of expression, to suggest that liability should be imposed because you failed to anticipate that a conversation would go south – even more so in defamation law, where most of the burden is on the defendant to refute the defamatory claim. Liability for publication attaches for merely starting the conversation. While this case is simpler in the sense that the defendant posted clearly defamatory remarks, the implications are not so simple for the muddy waters in which humans communicate. Let me elaborate.
The Court based its conclusion on two factors: the nature/structure of the social media site, and the defendant’s own posts. Part of the difficulty is the analogies drawn. The Court rightly differentiated the defendant’s behaviour from that of a search engine, host of an online forum, or in the case of Crookes, both the host and speaker on online forums, and accurately framed the defendant as having set in motion events with her post. What is insufficiently articulated in the case is what this means for the applicable law, as the defendant’s activities blur the line between intermediaries and content providers, a division that is critical in the analysis of responsibility for publication. The Court resolved it using the ought to know test, but as will be discussed, this is insufficient to the task of balancing the values of free speech and reputation, and does not resolve the intermediary/content provider distinction. The Court’s analysis on this point, although lengthy, is worth repeating.
[111] A user of a Facebook page is not in the same position as the defendant Newton in Crookes, the defendant Federation in Carter, or the respondent Google Inc. in Niemala. Those parties were only passively providing a platform or links to defamatory material. In the present case the entity in the analogous position would be Facebook, Inc., the owner of the software that creates the pages and the servers on which the content is stored. The user hosting a page of a social medium such as Facebook, on the other hand, is providing a forum for engagement with a circle of individuals who may share some degree of mutual familiarity. As noted above, the social nature of the medium is such that posts about concerns personal to the user may reasonably be expected to be discussed by “friends”.
[112] What these factors entail is that once she initiated events through having made an inflammatory post concerning a matter of personal concern, Ms. Van Nes ought reasonably to have expected her “friends” to make sympathetic replies. The “friends”’ comments were not unprovoked reactions; they were part of a conversation. And then, when they did comment, Ms. Van Nes – far from being the passive provider of an instrument for comment – continued as an active participant through making replies, prompting further comment. Those replies added fuel to the fire, compounding the chances of yet more defamatory comments being made.
[113] In other words, I would find that the nature of the medium, and the content of Ms. Van Nes’ initial posts, created a reasonable expectation of further defamatory statements being made. Even if it were the case that all she had meant to do was “vent”, I would find that she had a positive obligation to actively monitor and control posted comments. Her failure to do so allowed what may have only started off as thoughtless “venting” to snowball, and to become perceived as a call to action – offers of participation in confrontations and interventions, and recommendations of active steps being taken to shame the plaintiff publically – with devastating consequences. This fact pattern, in my view, is distinguishable from situations involving purely passive providers. The defendant ought to share in responsibility for the defamatory comments posted by third parties, from the time those comments were made, regardless of whether or when she actually became aware of them.
I suggest that online conversations do not work the way the Court in Pritchard assumes, at least not without significant risk of liability for participants. Let me give an example. Say I am a journalist and I post a story that is provocative, designed to stir conversation in the comments section. There is an important social role here, but under the analysis in Pritchard there is a risk that, by starting the conversation, you should have known that defamatory posts would be made. You are liable on this ought to know basis. As another example, what if I post a vent on Facebook about, say, the birth of my child and my unhappiness with the care I received at the hospital? I might post it in a tempered tone, but children and birth being hot-button issues, friends post vicious, defamatory comments about the hospital alleging all sorts of behaviour, made-up statistics on injuries to children, and so on. What if the topic is religion? Schooling? Politicians? So much would be out of bounds to even begin to discuss, out of fear of liability, because a court deems you liable not for a failure to remove the content upon notice (the traditional test), but because you should have known.
The Court notes a similar New Zealand case that rejected the type of liability imposed here (Wishart v Murray, 2013 NZHC 540, discussed at para 114 of Pritchard). The Court of Appeal in Wishart was rightly concerned, among other things, that the ought to know test was too uncertain in its application (see Pritchard at para 116), and that imposing such liability was inconsistent with the intentional nature of the tort. The Court in Pritchard, in rejecting Wishart, stated that foreseeability already exists in tort law concerning republication, and that therefore “the integrity of defamation as a separate tort” (para 117) is not harmed by extending it to the issue of third party comments. This does not logically follow. Defamation is an intentional tort, and the suggested extension of the law to include foreseeability regarding third party comments blows open the liability framework rather than developing it incrementally, as the Court suggests it has done. Foreseeability as to republication is limited to a repeat of the specific defamation in question, and in effect holds the defendant liable for the spread of the information in predictable ways. What third parties might themselves say is wholly unpredictable, and the chilling effects are more evident.
Conclusion
The assault on Pritchard’s reputation was brutal, and the case against Van Nes for her own comments was relatively clear. However, the Court’s effort to compensate the plaintiff for the whole of the attack on his reputation, when only Van Nes was sued, has concerning implications for the balance between free speech and reputation in cases going forward. Rather, the case indicates we have a lot of work to do to reform defamation law, and it highlights three areas in particular. First, the law governing third party liability needs to be developed and clarified (including the law governing publication, and the differences between intermediaries and content providers). Second, one of the limitations of this case is that only Van Nes was sued, and the claims against most of the commentators would have been unlikely to succeed (save against Mr. Parks). This raises one of the more fundamental problems of defamation in the digital age: lawsuits are complicated and cumbersome. Third, what is needed is more small-scale private dispute resolution, which would have addressed the issues more cheaply and easily without bending the law to achieve a just result.
One final aspect that should be examined more broadly is this kind of mob attack. I suggest the Court was correct to look at the social media space as a whole (structure, distribution, and publication) and at the nature of the defamatory conversation (because it was, in effect, a conversation made up of posts, comments, and shares), but the analytical framework is under-developed. We have a real problem with mobs online, and it stretches beyond defamation law to privacy, harassment, revenge pornography, and other forms of abuse and bullying. In my work I have sought to interrogate the nature of this mob in order to identify the kind of law reform needed to tackle the serious harm suffered by the victims. However, this requires a larger wholesale reform of the law.
In an ideal world the plaintiff would have been in a position to successfully sue the defendant, Mr. Parks, and any of the other individuals who posted defamatory content of their own in the defendant’s comments section. However, this is not realistic. Litigation would be costly, defendants are often difficult to trace (although not in this case), and damages would be minimal. In 2012, Lord McAlpine, a former politician in the Thatcher government, was falsely linked with a Newsnight story alleging sexual abuse by a Conservative politician. The rumour that he was a paedophile spread on Twitter via thousands of posts. McAlpine successfully sued some of the more prominent Twitter users with large numbers of followers (see here and here), and offered to settle with any individual who had fewer than 500 followers if they made a donation to a charity (see here). However, most people aren’t McAlpine. And for someone in the teaching profession, an accusation like paedophilia “sticks to you like tar”, to use Monica Lewinsky’s recent expression for the shaming she endured all those years ago. Further, this still misses the viral nature of online republication, and the ease and passivity with which individuals re-share or see the comments of friends of friends on their timelines. The Court elected to impose liability on the one who started it. The problem is that this stretches defamation law beyond its logical structure.
Running parallel to this, the issues of repetition and foreseeability with regard to third party comments are addressed in the latest iteration of Wishart v Murray:
http://www.nzlii.org/cgi-bin/sinodisp/nz/cases/NZHC/2015/3363.html
In the absence of wholesale reform of the law, why should the court not impose liability on the one who started it? Absent that spark, the firestorm of hatred propagated through social media would not have occurred.