UK Internet Governance Forum 2018

Solutions for The Digital Age


~2,300 words


Last modified: December 2nd, 12,018 HE

It’s not illegal to be mean.

Barbora Bukovská

The 11th UK Internet Governance Forum took place the other week, focused on the theme of Solutions for The Digital Age.

Russell Haworth, Opening Remarks

Russell Haworth, CEO of Nominet, delivered the opening remarks, describing the DNS as part of the country’s critical national infrastructure and highlighting the current debate over the introduction of DNS-over-HTTPS. He also cited an NCSC report on the success of their Active Cyber Defence scheme. Nominet were blocking some 11,000 malicious domains from reaching public sector networks each month, and Wenban-Smith cited a Nominet report released the day before that detailed the suspension of over 32,000 .uk domains over the last 12 months.
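For readers unfamiliar with the DNS-over-HTTPS debate: RFC 8484 defines DoH as ordinary wire-format DNS messages carried over HTTPS, with GET requests encoding the query in base64url. A minimal sketch of building such a query follows; the endpoint in the comment is just one public resolver, used purely for illustration.

```python
import base64
import struct

def build_doh_query(hostname: str) -> str:
    """Build the RFC 8484 GET parameter: a wire-format DNS query
    for an A record, base64url-encoded without padding."""
    # DNS header: ID 0, RD flag set, one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return base64.urlsafe_b64encode(header + question).rstrip(b"=").decode("ascii")

# Sent as, e.g.: GET https://cloudflare-dns.com/dns-query?dns=<encoded>
print(build_doh_query("example.com"))
```

Because the query travels inside ordinary HTTPS traffic, network operators cannot see or filter it the way they can conventional port-53 DNS, which is precisely what makes DoH contentious for blocking schemes like the one Nominet describes.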

Margot James MP, Keynote Speech

Next up was Margot James MP, Minister of State for the Department for Digital, Culture, Media and Sport (DCMS). She began by claiming that her department were working to create an Internet that works for everyone by agreeing rules and norms. Apparently, the UK tech. sector is now worth more than those of Germany, France, Italy and Spain combined. The DCMS’ October 2017 Internet Safety Strategy green paper reported that 6 in 10 Internet users reported seeing inappropriate or harmful content online with 4 in 10 experiencing online abuse, whilst James suggested a whopping 8 in 10 are purportedly afraid to go online. Thus far, the DCMS’ Social Media Code of Conduct appears to have gone nowhere, although they did issue a response to the green paper consultation in May 2018 and are reportedly readying a white paper with the Home Office for release this winter, which James said would target those behaviours that are harmful but not illegal or, as they are otherwise known, not the government’s business. Echoing some of the more insidious views of our former Prime Minister is not a good look for the DCMS, and this white paper looks set to be a pretty terrible Christmas present.

James claimed that an agile approach to regulation can build public trust and ensure a free and open Internet, and pointed to the expansion of the UK Council for Child Internet Safety, along with their rebrand to the UK Council for Internet Safety—online mothering is evidently not just for the children any more. Rounding off the tech. buzzword bingo board, James closed by claiming that AI could add as much as £232 billion to the UK economy by 2030, that we should establish an AI Council and an Office for AI and that we should introduce an IoT Code of Conduct, presumably faster than her department has managed the social media one.

Kari Lawler, AI Explainer: The potential impact of AI on my generation, and how we can plan for the future

An understandably nervous Kari Lawler, 15-year-old founder of Youth4AI, presented a brief overview of how she sees AI impacting the lives of young people. She delineated the difference between three levels of AI: narrow AI, which is better than humans at very specific tasks, but overly dependent on its training data; general AI, which matches human cognition across the board, but unfortunately doesn’t exist; and super AI, which possesses intellect far greater than a human’s, but unfortunately super-doesn’t exist. She bemoaned media misrepresentation, and pointed out the very real issues of living in a world where it is becoming impossible to opt out of data collection, which creates a need to ask ethical questions sooner rather than later.

Online Safety – Making the UK the safest place to be online

Lawler was followed by a goldfish bowl panel. The first question posed to the panel was who decides what is a hate crime. The first respondent, Chief Executive of Index on Censorship Jodie Ginsberg, said it was good that they had 20 minutes to completely solve this issue(!). She pointed out that the boundaries between offensive and criminal are fluid and ever-changing, and suggested that the UK had gone too far one way, with nine people per day arrested for offensive messaging online, half of which cases are dropped before prosecution. Even the police don’t know what is and isn’t prosecutable, suggested Ginsberg, pointing to the example of the drubbing South Yorkshire Police received after soliciting reports of non-crime hate incidents. Despite the CPS’s Social Media — Guidelines on prosecuting cases involving communications sent via social media, even our supposed experts are apparently uncertain as to what the law demands, and allows.

Ginsberg was followed by Barbora Bukovská, Senior Director of Law and Policy at ARTICLE 19. She took issue with the question, saying that it was less important to discuss who decides what hate speech is than it is to be asking what is the severity. She distinguished between different types of hate speech: incitement to genocide, which is always restricted; incitement to violence or discrimination, which is always restricted; other types of problematic speech, restriction of which varies; and, importantly, legal but offensive speech, which should not be restricted but should be addressed through other means. She also took issue with HM Government’s stated goal of making the UK the safest place to live and work online, arguing that the Internet would always be profoundly dangerous and that one cannot make it safe without becoming North Korea.

David Wright, Director of the UK Safer Internet Centre, responded next. He began talking about revenge porn, including recent-ish laws introduced in the UK to fight it, and stated that since their introduction over 21,000 such images—which are legal to view and possess, but illegal to initially post without consent—had been removed. He appeared to be arguing in favour of additional laws when Ginsberg pointed out that we already have a lot of laws that cover these things and that we must think carefully about whether new legislation is needed, and whether regulation [is] the best approach, an argument that reminded me of the furore over Christopher Chope MP’s objection to a recently-proposed upskirting law. She also added that we must be careful about conflating terms like abusive and harmful, illegal and legal with offensive.

Vicki Shotbolt, Founder and CEO of online safety service ParentZone, weighed in next. She referred to her work with parents and children who were worried about Googling about radicalisation in case they ended up on some sort of list. Shotbolt believes that there is a red line, and wondered: if you can’t define what is hateful in legal terms, who will define it for families? It was then back to Ginsberg, who declared that there is a point at which speech acts become other acts, such as when phoning someone up repeatedly becomes harassment. She argued that the smear of hate speech was shutting down legitimate discourse and, whilst she was not saying no speech should have criminal consequences, [we] should be careful expanding our definitions.

An audience question asked whether kids need to be exposed to risk in order to learn how to protect themselves—the human antifragility argument. Prof. Sonia Livingstone of the London School of Economics admitted that research suggests that, yes, children do increase in resilience as a result of managed exposure to risk that is within their capacity to manage, but that the Internet poses a level of risk far beyond that. She chided the DCMS for being brave [enough] to write a white paper on behalf of the entire population. Whilst there is a gap in our awareness of how much bullying adults deal with on a day-to-day basis, figures suggest the levels are generally pretty low—only 5–15% of children report having been cyberbullied. Where Livingstone’s concern lies is in the fact that 90% of children and adults are living in a culture where they have seen [cyberbullying], where it is commonplace and largely unpunished, and that we don’t yet know what sort of a chilling effect that may have on their exercise of free speech—in a flash of cynicism I was reminded of this essay:

Cyberbullying is a huge problem! Yes, but not because it is hurtful, HA! no one cares about your feelings—but because criticism makes women want to be more private—and the privacy of the women is bad. The women have to be online, they do most of the clicking and receive most of the clicks. Anonymous cyberbullying is a barrier to increasing consumption, it’s gotta go.

It was time for Douglas White, Head of Advocacy at the Carnegie UK Trust, to chime in. He pointed out that only 15% of kids feel at home online and that the rate of mental health issues is rising (particularly amongst women—my mind flashed back to the above essay). This might, he suggested, be related to the rise of the digital world, the sharp increase in interaction speed and scale and the pervasive nature of online communications, which all serve to deepen things that already exist, such as marginalisation. Bukovská retorted that the phenomena are there and can cause harm, but that being risk-averse in inhibiting free speech does not mean disregarding these issues. She cited an example in which restricting speech could serve to further marginalise groups, such as if Twitter banned the word queer, which serves double duty as an insult and a term of identity. Bukovská argued that parents should be the ones to instill values in children, not the law, and that it is not illegal to be mean. Countering suggestions that the state should define the threshold, Bukovská claimed that the threshold was defined, and that the problem is in enforcement and transparency.

Ginsberg proposed that she and her fellow panellists were in agreement that an unfriendly online discourse could have a chilling effect on speech, but said that they differed in their proposed solutions—bans, as in the case of the DCMS and others, or other methods. Livingstone said that the enforcement of codes of conduct could not be left to the social media platforms, who come up with daft policies, and Ginsberg suggested that the #metoo movement shows people have collective power to do good, and that there is room for an independent oversight body, but that we must not underestimate the scale of the problem, citing Facebook’s 1 million complaints received per day. Bukovská suggested the formation of a Social Media Council, but cautioned that standards should be the same, but applied differently (e.g., dominant platforms such as Twitter and Facebook must be treated differently to smaller platforms such as Mumsnet). Wright pointed out that 46% of schools have no professional development programs, and that this shows the capability of those working with children is low, whilst White said we should focus on the systems level.

A DCMS representative again reiterated his department’s plans, stating that the upcoming DCMS–Home Office white paper would signal a shift in approach to proposing legislation whilst claiming that the UK is a world leader in innovation-friendly regulation. The DCMS’ goals, allegedly, are to prioritise the protection of freedom of expression, promote business and to be adaptable. He mentioned one of Livingstone’s recent papers and pointed out that the Department for Education are working to include online safety in the national curriculum, as well as working with young people on mental health issues. He also mentioned the ICO’s work on an Age Appropriate Design Code for processing the personal data of young people and the work of the UK Council for Internet Safety.

Cybersecurity and the Internet of Things — Security by Design

I missed the next few talks due to another engagement, but returned in time for this discussion of IoT security. The DCMS’ Edward Venmore-Rowland opened with reference to his department’s recent Code of Practice for consumer IoT security, which should ensure products come to market with security built in. He was followed by Talal Rajab, Head of Programme — Cyber and National Security at techUK, who said that consumers should not be expected to understand the ins and outs of IoT security, and that the onus of secure by design should fall on the manufacturer. Stephen Pattison, ARM’s drawling Vice President of Public Affairs, wondered if there is a risk here that we’re fighting the last war and that the measure of our success will be how we respond to the next crisis. He wondered aloud: why do so many people who are good at coding go over to the dark side? We don’t have surgeons out there giving criminals facelifts. He also claimed that the average rate of bugs to lines of code was 1:250,000, and that something like a smart car could contain over 500 million lines of code.
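Taking Pattison’s two figures at face value—one bug per 250,000 lines and over 500 million lines in a smart car—the implied defect count is easy to work out. A back-of-the-envelope sketch, not a claim about any real vehicle:

```python
# Pattison's reported figures, taken at face value.
lines_per_bug = 250_000          # "average rate of bugs to lines of code was 1:250,000"
lines_in_smart_car = 500_000_000  # "over 500 million lines of code"

implied_bugs = lines_in_smart_car // lines_per_bug
print(implied_bugs)  # 2000
```

So even at that (remarkably optimistic) defect rate, the claim implies a couple of thousand latent bugs per car—which is presumably Pattison’s point about the scale of the problem.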

ARM, he said, propose that we have a need for a digital social contract between industry and consumers, and the industry’s rapid, joint response to the recent Spectre and Meltdown vulnerability disclosures suggested a model for future responses. The chair of the debate, Internet Society Chief Internet Technology Officer Olaf Kolkman, asked the panel what expectations should be placed on consumers. Pattison replied that we must shift away from thinking consumers can take responsibility for their security. Kolkman suggested that minimum standards get you to compliance thinking, not security thinking and Eva Blum-Dumontet, Privacy Research Officer at Privacy International, criticised the DCMS guidelines for containing no mention of the word liability. Matthew Shears, Director of Cyber at Global Partners Digital and ICANN Board Member, echoed Mozilla in claiming that every device with no password makes the Internet more fragile and that we need to start seeing ourselves as part of an ecosystem. Kolkman drew the debate to a close by calling on nonstate actors to get their shit together.