In September 2020 the Government announced a new National Data Strategy, which aspired to “make the UK the safest place in the world to go online.” Safety was at the heart of this strategy for tech innovation and growth, and its legislative manifestation is the draft Online Safety Bill, which sets out a new regulatory regime to tackle harmful content online by placing a duty of care on certain internet service providers that allow users to upload content and search the internet. Online safety, security, and accessibility were the focus of the Digital Poverty and Inequalities Summit on Wednesday, and the bill was centre stage.
Roundtable speakers and contributors included members of the Commons and Lords involved in drafting or evaluating the bill, as well as representatives of Barnardo’s children’s charity, the Children’s Media Centre, TikTok, the Centre for Countering Digital Hate, and the NSPCC, to name a few. Unlike the other summit roundtables, this one was distinctly more focused — with a piece of draft legislation in the pipeline, there is a clear goal with potential for impact on how people experience the internet. I was struck by how this fact rendered the discussion more consequential but perhaps less capacious. With the country on the cusp of legislation that would protect people from a panoply of online harms, harmful but elusive issues like inequality, bias, and discrimination received hardly a mention.
That said, the Online Safety Bill has been heralded as groundbreaking, even revolutionary, with a great deal of potential to set a benchmark that more of the world will follow. Undoubtedly the anticipation around this bill is in part because it is arriving “late” in the evolution of the internet and online platforms. One speaker called it “a good late step.” It is also in part because its present arrival opens up the potential for it to be a repository of our regulatory hopes and dreams about how to make the internet better — to fix what has seemingly gone wrong. But if it is to be effective, the bill must rise above the specific grievances that make it urgent and necessary — to tackle the systemic and system-level issues that underpin the worst abuses online. “If too much is loaded onto this legislation,” one speaker warned, “it will fall under its own weight.”
Although perhaps contributing to that burden, the discussion centred on several issues that speakers hoped the bill would ultimately address:
- The Online Safety Bill must do more to address the most egregious harms to children, especially exposure to pornography and grooming.
“Childhood lasts a lifetime,” one roundtable speaker remarked. And it was clear that most of the contributors to the discussion viewed the protection of children as a primary concern for the bill. Speakers see the legislation as a chance to achieve what the 2017 Digital Economy Act has failed to do: implement robust age verification for pornographic content and reduce child exposure to sexual content and sexual exploitation, such as grooming. Behind these concerns is a broader anxiety about the long-term social impact that these experiences can have on behaviour and wellbeing. And negative online experiences are arguably a bigger issue still, encompassing a whole range of social and socialising experiences. According to The Wireless Report, four out of every ten young people have been subject to online abuse, and 25 percent of young people have received an unwanted sexual message online. Ofcom reports that more than half of 12- to 15-year-olds have had a “negative” experience online, such as bullying, and 95 percent of 12- to 15-year-olds who use social media and messaging apps said they felt people were mean or unkind to one another online.
Roundtable contributors also raised the issue of encryption, and in particular the potential of end-to-end encryption on social media platforms to hide the activities of child abusers. There are no simple answers to these thorny issues. Encryption can hide illegal or harmful activities, but it can also protect privacy, activism, and free speech. So-called “back doors” that would allow law enforcement to access certain encrypted content also open up the potential for others to exploit those security weaknesses. Although some speakers returned to the “duty of care” outlined in the draft bill to argue that platforms will have to prove that encryption, in combination with other design choices on platforms, is consistent with a duty of care to users, few of the issues that sit at the uncomfortable nexus between safety (or its foil, harm) and security are black-and-white. Flexibility in approach will likely be the bill’s ultimate strength, but it inherently leaves open many questions that people want answers to. Really, what people want is for tech companies to have to answer to them.
- Ofcom must be adequately supported to take on its new power and responsibility under the bill.
Another theme from the discussion was the need for Ofcom to be resourced effectively to exercise its new powers under the draft legislation and to shoulder its new regulatory responsibility. Indeed, this is a whole new frontier for the regulator presently tasked with overseeing the telecoms market. The Ofcom chief executive has expressed some trepidation about the sheer volume of user complaints the regulator may face and the legal battles likely to be fought with tech companies that fail to comply with the new regulations. Secretary of State for Digital, Culture, Media and Sport Nadine Dorries wants criminal liability for tech company directors, setting Ofcom up for a confrontation with the likes of Mark Zuckerberg.
Bill supporters at the roundtable were quick to offer reassurance that Ofcom would be equipped to handle its new duties, but it is understandable that questions remain. The multi-billion dollar platforms in the eye of the storm have struggled (and often failed) to handle reported abuses on their own sites, which host billions of users speaking different languages and with different cultural reference points. Critics of big tech will argue (probably rightly) that those failures are largely down to lack of will; harmful content still makes money. But there are other factors, too. The failures also reflect an egregious lack of local, contextual knowledge, which is essential for tackling harms that are socially constructed and embedded. And they reflect sheer scale: companies have employed both human moderators and algorithms in an effort to manage the volume of content and complaints, and it is still not enough. Ofcom has reason to be concerned. And therefore, the bill’s drafters do, too.
I was left reflecting on the important questions we still need to ask about the aspirational outcomes the bill is meant to achieve. Goals like transparency and accountability will be most impactful at the system level in taking companies to task, but what about user empowerment and agency? Big tech might think about users as a stream of data points, but this bill has the potential to treat them like individuals — human beings with a context as well as a complaint — and that would be truly revolutionary. So, to return to this theme from the roundtable, is Ofcom prepared to perform that role?
- A legislated approach to online harms must be adaptive and focused on the systems level in order to be future-facing.
The last theme worth drawing out from the roundtable discussion was the issue of future-proofing the bill. “Future-proof” is a common expression in technology development and deployment, but I think it is not quite the right way to frame the concept. It would be better (albeit less catchy) to conceptualise it as “uncertainty-aware.” Alongside the almost universally shared feeling that this bill might be too little, too late in a digital ecosystem that has developed largely without the kind of toothy government regulation that can bite, there was also a palpable sense in this Zoom call of wanting to get it right this time: getting ahead of the game, rather than playing catch-up later on.
One roundtable contributor said, “When rules are too prescriptive, they’re easy to get around.” The solution, according to multiple contributors at the roundtable, will be to ensure the bill can be adapted to yet-unanticipated future scenarios. It must comprehensively address and define (to some extent) the dangers of the internet as we know it today, but it must also leave open the possibility that new powers and responsibilities may need to be bestowed on the regulatory process. It is important to recognise that this uncertainty-aware approach is not the child of necessity, born of the digital age. It is how laws are often made (and changed). In fact, one speaker explained that the idea behind the bill is not to do something radically new but to “level the field between online and other environments.” As media scholars have long argued, while the digital age has ushered in unprecedented technological and societal changes, it is overly sensational to treat it as entirely new and unfamiliar.
What is difficult, I would argue, in the drafting of this bill is that there are such clear “perpetrators” of harm exacerbation and perpetuation: digital platform companies (Facebook and Google, for instance). This is what happens when we outsource our democracy to undemocratic companies in Silicon Valley, one speaker said. They are in our mind’s eye when we think about how to make this law work. And that is helpful on the one hand because it can concretise certain concepts and terminology in an effort to close loopholes for the companies that we know need to get their houses in order. But on the other hand, we also somehow need to keep a focus on the bigger picture: tackling online harms requires challenging the underlying logic of the digital economy, which trades on people’s personal data and analyses it without adequate consent in order to manipulate behaviour and generate more profit. At least one speaker made this point: it is not so much about the harmful material online as it is about how that material is surfaced and promoted by algorithmic processes. And this is an important point. As an investigation by The Markup recently found, algorithms on Facebook show some users extreme content not just once but hundreds of times. It is about the content, and it is about what makes the content valuable — user attention.
A joint committee held hearings on the Online Safety Bill that ended earlier in November; the committee is set to conclude its report by 10 December and publish shortly after that. It will be interesting to see which aspects of this conversation — and contributions to the hearings — make it into the revised document.
One theme that had consistently emerged in all of the previous roundtables during the Summit was absent in this one: the social and societal dimensions of online safety. One speaker did mention that there is a continuum between the online and the offline when it comes to harms. But there is a risk that in focusing on defining what constitutes a harm worthy of regulation, we never get to the crucial conversation about the uneven distribution of harms in society — how and why certain harms disproportionately accumulate for certain people. We know, for instance, that there is a gendered dimension to pornographic content and exposure; that women, girls, and LGBTQIA+ individuals have faced increased online harassment during the pandemic; and that children with an impacting or limiting condition are more likely to experience bullying and other negative interactions online. But issues like accessibility did not feature in the discussion. Many of the harms exacerbated by digital content are socially embedded and conditioned. Therefore, platform regulation must be accompanied by comprehensive sex and relationship education that addresses not only interpersonal communication and interactions online but also media literacy. Our digitally mediated lives are a mirror to norms, behaviours, and inequalities in society more broadly; the capitalisation of data and the algorithmic manipulation of data for commercial ends can turn the mirror into an anamorphic funhouse. A truly systems-level approach to online safety needs to take on systems of oppression and marginalisation both in cyberspace and in society as a whole.
This can only be done with the participation of people in the processes of accountability outlined in the bill. People need to be empowered not only to report harms but to define what harms are (right now, the draft bill leaves the category open to interpretation by the Culture Secretary, Ofcom, and Parliament in consultation with one another). And in addition to algorithmic transparency and accountability to a regulator, there must be transparency to the citizen-user in the form of meaningful consent regimes that give people more actual control over their data and reporting regimes that make people feel like the harms they have experienced are real, legitimate, and actionable. Legislation wields the semantic power to define certain terms and relationships, like user and harm. Tech companies have built digital spaces that define us (users) as consumers first and foremost. The law has an obligation to reassert our citizenship, instead.
This roundtable was hosted by the APPG Digital Skills, in collaboration with the APPG Data Poverty and APPG PICTFOR and supported by the Digital Poverty Alliance.