There is little doubt remaining among experts, policy makers and the public that the ways in which our digital infrastructure is designed and incentivized have had widespread social, economic, and political costs. Despite years of assurances from big tech and wishful thinking from governments, the market has proven unwilling or unable to self-regulate. And so democratic governments around the world are finally stepping up.
The largest and most vexing piece of this policy agenda is what to do about harmful content online. This is in part because it sits squarely at the intersection of the two largest challenges of our platform-based internet: the truly massive scale of the content posted online (hundreds of billions of pieces of content a day); and the financial incentives for the distribution of this content being calibrated for virality, engagement and speed rather than the public good.
But it is also because ensuring online safety touches on the core democratic right of free expression. Democratic governments around the world define and regulate speech differently, and so there is no global set of rules for platforms to follow – this will be determined country by country. In the Canadian context, the Charter provides robust protection for free speech, while recognizing that governments can limit speech to prevent harms, provided the limits are reasonable and justified in a free and democratic society.
There are two emerging methods of regulating online safety. The first approach, most notably implemented by Germany, addresses harmful content once it has already been posted. Platform companies are given 24 hours to remove content that is flagged as being illegal or be subject to fines of up to 50 million euros. The problem is that the fines are so large and the task so opaque that this approach can risk over-censorship. This was the model initially proposed by the Canadian government last summer, and which was widely critiqued, including by us.
The EU and the UK have developed an alternate approach that focuses not on the individual pieces of bad content, but on the system and incentives that lead to it being created and spread. Instead of tackling the bad things downstream, they look upstream to the design of the system as a whole.
This approach focuses on the level of risk that is currently allowed to exist within the digital ecosystem. When a platform develops a new product, say a change to its feed algorithm, or a video-sharing tool targeted at teens, there is no requirement for it to take potential harm into account. Platforms can optimize their products for profit, with no obligation to factor in the risk posed to the safety of their users.
What could such an upstream, risk-based approach toward online safety look like in Canada?
First, the system itself must be made radically transparent. One of the core problems with digital platforms is their opacity. Platform companies such as Facebook and Google could be compelled to share privacy-protected data with the public, and with researchers granted varying levels of access, to help us understand the system. Additionally, since targeted advertising is both the lifeblood of the platform ecosystem and the cause of many of its harms, platforms should be made to regularly disclose and archive, in a standardized format, specific information about every digital advertisement and piece of paid content posted on their platforms.
Second, platforms must also be held accountable for how they build their products. To do so, we should draw on a concept already in use in Canadian law – that of a statutory duty to act responsibly. This would place the onus on platforms themselves to demonstrate that they have acted in a manner that would minimize the harm of the products they build and offer to Canadians. Liability protections could be made contingent on risk assessments and human rights audits being conducted, and a well-resourced regulator could have the power to audit these algorithmic systems.
Third, it is critical to shift the balance between platforms and their users. We can do so through mandated interoperability and data portability, a serious national civic education and digital literacy initiative, and critically, significantly strengthened and long overdue data privacy protection.
This approach is not about responding or reacting to content, or speech, but about assessing the level of risk and implementing product safety standards so that platforms are subject to the same statutory duty to act responsibly as other consumer-facing products.
For too long the issue of online harms has been erroneously framed as one of individual bad actors and the regulation of speech, but the problem is one of systemic risk and it must be addressed as such. Canada now has the chance to learn from and build on the policies attempted in other countries and get it right.
Former Supreme Court chief justice Beverley McLachlin and Taylor Owen, director of the Centre for Media, Technology & Democracy at McGill, are co-chairs of the Canadian Commission on Democratic Expression, the final report of which was released last week.