By: Emily Laidlaw
The current controversy over the new Calgary-based app Peeple – which will allow users to rate anybody they know, from their colleagues, to their friends, to their exes and neighbours – raises many questions familiar to internet lawyers. What are the rights of the subjects of these ratings? To privacy? To dignity? What rights of free speech exist for anyone using these apps? And what are the responsibilities of the app developer, legally or ethically? For more on this controversy, see here, here, and here. Some question whether the app is a hoax, and I question it myself. Regardless, the Peeple controversy serves as a useful platform for discussing wider issues in Internet governance. While there is much to be analysed concerning the privacy and harassment implications of this app, in this post I am going to focus on a different aspect of the controversy: the social responsibility of technology companies for human rights. By shedding light on the discussions happening in the international community, I hope to contextualize why things like Peeple are so controversial; they strike at the core of larger problems concerning the roles and responsibilities of businesses for human rights, and the line between law and voluntary commitments. My recent research on this topic has focused on free speech, so I will discuss the issue here in that context.
New technologies have changed the way we communicate, challenging traditional structures of speech regulation. In the Internet context, the transnational, instantaneous nature of communications makes it difficult for governments to directly control the information that enters and leaves a country. At the same time, the power of the companies that control this information flow increases, because the communication technologies that enable or disable participation in discourse online are often privately owned. To find information, we use search engines. To share information, we use communication platforms such as Twitter. To access the Internet, we need Internet Service Providers. We thus inevitably rely on these companies to exercise the right to freedom of expression online, and they thereby become gatekeepers to our online experience. This is problematic for a human rights system that has traditionally treated human rights as a government responsibility; in effect, human rights have been privatised in the digital environment.
Our reliance on these gatekeepers to exercise the right to free speech has had two effects. First, such gatekeepers have increasingly been the targets of legal measures designed to capitalise on their capacity to regulate third-party conduct. These measures range from orders requiring ISPs to block access to copyright-infringing websites and other unlawful content, as seen in the United Kingdom cases involving The Pirate Bay and Newzbin2, to the Egyptian government's orders during the Arab Spring in 2011 requiring Vodafone to switch off mobile networks. Such orders put pressure on companies, both domestically and internationally, to be advocates for users' free speech rights and to have in place governance codes that guide their conduct in this respect.
Second, in the Western world, speech regulation in cyberspace has largely been left to self-regulation, in much the same way that regulation of the Internet in general has been light-touch. When Facebook decides to delete a group it deems offensive, Twitter suspends a user's account for the content of his or her tweets, or Amazon decides to no longer host a site such as WikiLeaks, the determination tends to be made outside the legal system of human rights. The result is a system of private governance running alongside the law without any of the human rights safeguards one normally expects of state-run systems, such as principles of accountability, predictability, accessibility, transparency and proportionality.
The business, human rights and technology conundrum has received increasing public attention in recent years. Since 2010 we have seen a paradigm shift at the international level in the recognition of human rights in cyberspace. Access to the Internet as a fundamental right received the United Nations stamp of approval in a report by Frank La Rue, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. In 2012 the UN Human Rights Council passed a resolution affirming Internet freedom as a basic human right, in particular the right to freedom of expression. At the European level, we have seen the Court of Justice of the European Union and the European Court of Human Rights issue judgments with strong rights-based arguments directed at the activities of technology companies. This can be seen in cases such as Scarlet v SABAM, followed by SABAM v Netlog, regarding ISP filtering; Ahmet Yildirim v Turkey regarding hosts; and Google Spain SL, Google Inc v Agencia Española de Protección de Datos, Mario Costeja González regarding a right to be forgotten on search engines.
At the same time, the business and human rights agenda has been a focal point of international governance discussions, most importantly with the work of John Ruggie in drafting the UN Guiding Principles. They were endorsed by the UN in 2011 and have been widely praised by governments, businesses and NGOs. They have been incorporated into many agendas on CSR, as seen in Europe and the UK, and have formed the basis of industry CSR codes and guides, such as the European Commission Guidance for ICTs and the Global Network Initiative. Despite their apparent popularity, the Guiding Principles remain controversial. There continue to be calls for a treaty-based governance regime for the human rights obligations of businesses.
What we need to do now is move the conversation forward by extending the Internet regulatory debate to take account of CSR. In my research, most recently in my book Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge University Press, 2015), I seek to challenge the traditional conception of human rights as a relationship between citizens and state, arguing that in the digital age the experience of human rights in general, and free speech in particular, often occurs with and through private parties. To date, companies have largely been left alone to address issues of free speech through CSR frameworks: in-house codes of conduct found in Terms of Service and other company policies; the work of regulatory bodies such as the Internet Watch Foundation, which addresses child sexual abuse images; and industry initiatives such as the Global Network Initiative, which addresses privacy and free speech.
Apps such as Peeple are ripe platforms for abuse, and while Peeple risks liability under defamation and data protection laws for hosting unlawful content, it also carries a wider social responsibility to take care in how it runs its platform. Peeple's commitment to privacy and free speech, or lack thereof (and its regulatory savvy), can determine aspects of your rights online. These types of informal corporate social responsibility codes and self-regulatory frameworks therefore emerge as powerful forces in shaping the right to freedom of expression online. In my book I propose an alternative governance model, the details of which are beyond the scope of this post. The take-away here, however, is that in assessing the bigger picture of how to regulate the Internet, how to facilitate and protect rights online, and how to judge the behaviour of the creators of such things as Peeple, we must do a better job of understanding the promise and limits of CSR.
To subscribe to ABlawg by email or RSS feed, please go to https://ablawg.ca
Follow us on Twitter @ABlawg