Authors
Lara Guest
With remote business gaining significant traction in the last few years, many companies have looked to transform and advance their digital strategies, including through online commerce enhancements and digital offerings in the media industry. While the expanding digital reach has created new opportunities for industry participants, it has also opened new areas of risk.
As a constantly evolving centre for communication and expression, the internet continues to pose challenges for the Canadian justice system.
The combination of online platforms’ wide reach to readers across the world and users’ ability to publish content anonymously allows the exchange of ideas to flourish worldwide. At the same time, the risk of potential harm is virtually limitless.
In an attempt to strike the right balance, Canadian courts have grappled with the liability of internet intermediaries for content distributed by third parties on their platforms. Over the past year, several novel claims have raised important questions, and the resulting decisions have begun to shape the law. The trends that have emerged suggest that there may be a greater willingness by Canadian courts and Parliament to regulate online platforms.
This article summarizes the recent evolution of online platform liability in Canadian law. We begin by reviewing the existing case law which, in certain circumstances, has recognized an online platform’s liability for a third party’s defamatory content. While the case law continues to evolve, the courts have regularly acknowledged that some degree of liability for online platforms is warranted.
We then turn to recent developments in both case law and proposed legislation that increase the risks for online platforms, not only in defamation but also in breach of privacy, hate speech, breach of human rights legislation and breach of contract. Parliament appears to be taking a very different direction from the United States, which provides online platforms with broad-ranging immunity from liability. Section 230 of the U.S. Communications Decency Act (CDA 230) bars online platforms from being considered a “publisher or speaker” of third-party content and shields a platform from liability for moderating its content in good faith1. Canada has no such legislation.
Finally, to address these emerging potential risks, we provide practical strategies to mitigate liability.
Starting in 2005, a series of cases recognized that in certain circumstances online platforms could be liable for defamatory expression posted by third parties. A significant motivator for imposing liability was the threat to reputational interests through anonymous postings left indefinitely in cyberspace. In Carter v. B.C. Federation of Foster Parents Assn., the British Columbia Court of Appeal described the policy rationale for recognizing liability where a defendant fails to remove offensive material in a timely manner as follows:
[I]f defamatory comments are available in cyberspace to harm the reputation of an individual, it seems appropriate that the individual ought to have a remedy. In the instant case, the offending comment remained available on the internet because the defendant respondent did not take effective steps to have the offensive material removed in a timely way2.
An Ontario court agreed. In Baglow v. Smith, the Ontario Superior Court found that the operators of an online message board could be liable for defamatory comments posted on it by a third party. The operators conceded that they were publishers by disseminating third-party content, but argued they were a mere “passive instrument” in making the comments available. The Court rejected that argument: the operators had notice of the impugned posts but refused to delete them, and the posters on the message board were generally anonymous, leaving a plaintiff with no other recourse3.
More recently, in Pritchard v. Van Nes, the British Columbia Supreme Court considered whether someone who posts defamatory content on Facebook is liable for third-party defamatory comments on that post. The Court recognized that liability for third-party defamatory content remains an emerging legal issue and proposed a three-part test4. A plaintiff must demonstrate that the defendant: (1) had actual knowledge of the defamatory material posted by the third party; (2) committed a deliberate act, which can include inaction in the face of that knowledge; and (3) had the power and control to remove the defamatory material.
None of Carter, Baglow or Pritchard definitively holds an online platform liable for the defamatory posts of a third party. However, the law is likely to evolve along the lines expressed in Pritchard: a party that has both knowledge and control risks not being considered merely a passive instrument.
While the courts have not had a recent opportunity to weigh in on the merits, recent procedural decisions (outlined below) suggest a somewhat inconsistent approach. Meanwhile, Parliament has begun to take steps toward regulating online platforms.
Four procedural decisions from the past year demonstrate the courts’ inconsistent approach to online platforms’ obligations regarding third-party content. Despite the inconsistencies, however, no court has held that online platforms bear no responsibility for third-party content posted on their websites.
In early 2021, the British Columbia Supreme Court released its decision in Giustra v. Twitter Inc6. In short, Twitter challenged the Court’s jurisdiction to hear a defamation action against it, asserting that the claim should proceed in California, where Twitter has its headquarters. It argued that it should not be expected to defend defamation actions in every jurisdiction in which a tweet can be accessed and where the plaintiff has a reputation to protect. The Court disagreed. While it expressly declined to comment on the substantive merits of the claim, it held that the law in Canada with respect to online platforms is unsettled. Moreover, the Court relied on the fact that the allegedly defamatory tweets had been brought to Twitter Canada’s attention and that the plaintiff had a significant reputation in British Columbia, which was sufficient to allow the claim to proceed on the merits. The Court also rejected Twitter’s argument that California was the more convenient forum because all parties agreed the claim would be dismissed there pursuant to CDA 230.
Shortly after Giustra v. Twitter Inc., the Québec Superior Court released its decision in Lehouillier-Dumas c. Facebook inc.7, declining to authorize a class action in defamation against Facebook on behalf of individuals who had been named in a Facebook group as alleged sexual abusers. The plaintiff alleged that Facebook had an obligation to remove content that was “potentially” defamatory. The Court rejected that argument, holding that Facebook’s policies required it to remove only content that is illegal or that a court has deemed defamatory, not content that is merely offensive or unpleasant. Because the plaintiff had provided Facebook with insufficient information to determine whether the content defamed him, the Court concluded that Facebook’s obligations were not triggered.
While Lehouillier-Dumas may provide a route to limit liability for online platforms, the principle on which it was decided may prove to be narrow: the plaintiff had failed to demonstrate the defamatory nature of the content to Facebook, meaning that Facebook had no obligation to act. Lehouillier-Dumas also demonstrates the utility of clear and consistent terms of use in expressly limiting a platform’s liability for the acts of third parties.
Outside the defamation context, we have also seen cases this year against online platforms for both their content promotion and removal policies:
Prior to the 2021 federal election, Parliament’s efforts to regulate internet platforms focused on eliminating five types of illegal content: child pornography, terrorist content, incitements to violence, hate speech and the non-consensual sharing of intimate images. In July of this year, the Ministry of Heritage published a technical paper presenting a proposed framework to regulate online platforms with respect to those harms10. The framework includes:
Parliament also demonstrated a desire to regulate the communications that occur on online platforms. Bill C-10 would have amended the Broadcasting Act to apply to online media, allowing the CRTC to regulate broadcasts on online platforms, including by requiring a minimum level of Canadian-produced content. The bill passed the House of Commons but had not passed the Senate when Parliament was dissolved for the 2021 election.
Canada has not enacted legislation like CDA 230. However, on July 1, 2020, the United States-Mexico-Canada Agreement (USMCA) came into force. The USMCA requires that Canada, Mexico and the United States provide online platforms with broad protection against liability for hosting third-party content. It does not go as far as CDA 230, as it does not prevent platforms from being considered a “publisher or speaker”, but it does bar an online platform from being treated as the content provider “in determining liability for harms”. Commentators have suggested that this distinction leaves open the ability to enforce equitable remedies against online platforms11.
It is not clear what impact the USMCA provisions will have on online platforms in Canada. International treaties are enforceable only when incorporated into domestic law, and the USMCA online platform provisions have not been incorporated into Canada’s implementing legislation12. In its statement on implementation13, the Canadian government advised that the provisions do not affect its ability to impose measures addressing harmful online content or to enforce the criminal law. It also advised that the issue will be addressed primarily through judicial interpretation of legal doctrines, including defamation. While it is unclear exactly how courts will treat the provisions in the absence of an actual change to domestic law, it is at least possible that they will affect cases against online platforms like those in Baglow and Pritchard.
In the absence of legislation defining or limiting the scope of an online platform’s liability for the content on its website, online platforms should continue to be cautious. The courts have not clearly explained when liability may arise, but they have recognized that platforms may ultimately bear some responsibility for the content posted on their websites. Two general themes have emerged from the case law: liability turns on a platform’s knowledge of the offending content, and on its power and control over that content, including the ability to remove it.
With these two themes in mind, we suggest the following principles to mitigate the risk of liability:
Active monitoring of the platform. Monitoring is particularly important where a platform takes an active role in determining which content is promoted or “pushed” to certain users; in such cases, the platform likely has an obligation to monitor or regulate what is posted. Monitoring may range from a mechanism for “flagging” suspicious content to the administrator at the low end, to software or human review of material before it is published at the high end. Where a platform falls on that spectrum will be determined in part by both the sensitivity and the expressive value of the content featured on the platform. Moderation is a difficult balancing act: while some degree of moderation may reduce exposure to liability, platforms must also be careful not to over-interfere and risk litigation similar to the recent claim against Twitter. That latter risk can be mitigated by clear and well-defined policies on active monitoring.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2024 by Torys LLP.
All rights reserved.