Dr Philippa Smith of the Auckland University of Technology’s Institute of Culture, Discourse and Communication recently presented a talk on hate speech and free speech online to the University of Lancaster’s Language, Ideology and Power Research Group.
She began by highlighting that social media companies are currently being asked to take more responsibility for hateful speech on their platforms. Extremists, said Smith, use free speech to argue that they have a right to express their views, whilst free speech activists claim that they don't support the views but do support the extremists' right to express them: Voltaire's classic line. The issue, said Smith, was that
the metaphoric line between hate speech and non-hate speech that is often bandied around in such debates is located differently for different people. She cited the ICCA's 2013 report on online hate speech, which claimed that 'the danger [of hate speech] is real because Internet hate easily translates into real-world violence', basing its argument on the idea that 'there is a fine line between the exchange of values and ideas and the perpetuation of hate in the form of degradation and violence'.
Smith's presentation would focus on the hate speech/free speech debate within the online context and the benefits of counter-speech over censorship. First, she demonstrated why even defining 'free speech' and 'hate speech' is difficult, presenting a slew of subtly conflicting definitions sourced from art. 19 of the Universal Declaration of Human Rights, the Irish Prohibition of Incitement to Hatred Act 1989 and §61 of the New Zealand Human Rights Act 1993, to name a few. Some explicitly include insulting language under the banner of hate speech, as did England & Wales until recently, whilst the UN's 1969 International Convention on the Elimination of All Forms of Racial Discrimination defines racial discrimination as 'any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life'.
Smith also cited Erjavec & Kovačič, who state that '[a]lthough there is no universally preferred definition of hate speech, some common elements seem to emerge', and Brown, who writes that 'hate speech [is] a heterogeneous collection of expressive phenomena held together by family resemblances'. Similarly, there are various terms for what Smith calls negative online behaviour, from 'online shitstorm' to 'fear speech' and 'linguistic aggression'. The issue when such behaviour is online is that
the infrastructure has an amplifying potential for speech to cause violence, even to the level of genocide or atrocity, as we see now in Myanmar. Constant exposure to negative content also desensitises, and governments are eager to use 'dangerous speech' as a justification for repression. Interfering with the expression of unpalatable views has further negative effects: it serves primarily to whitewash reality, and can lead those who hold such views simply to become more subtle or to go underground.
However, Brown suggests that 'it is not just the law that we can use to respond', and Strossen proposes counter-speech as a term that 'encompasses any speech that counters a message with which one disagrees'. Smith considers the field of counter-speech interesting, though under-researched and 'theoretically anaemic'. As such, her research questions became, first, 'in what ways are digital technologies enabling negative online behaviour?' and, secondly, 'to what extent can counter-speech initiatives be used?' The example discussed here was the New Zealand Human Rights Commission's Give Nothing to Racism campaign: specifically, the YouTube comments beneath it.
New Zealand has a bicultural relationship between the indigenous Māori and British settler cultures. The campaign focused on supposed 'everyday racism', with the goal of 'stop[ping] casual racism from developing into something more extreme', though Smith was unsure what this fear was based on, as nothing extreme has happened in New Zealand so far. The video stars director Taika Waititi of Black Panther fame in a parodic charity ad., urging people to 'give nothing to racism' as though it were a charitable cause that could only survive with their support. Smith analysed the comments under the video over a six-week period, during which 62 people made 112 comments, a third of which contained overt racism. Good job, NZ Human Rights Commission.
Smith was primarily interested in how people responded to these hateful comments, i.e. how they deployed counter-speech. First, she identified a number of alt-right intertextual features, such as the use of terms like 'SJW', triple parentheses and Mein Kampf quotes. She then detailed their argumentation strategies: denial of racism; accusations that the video was political propaganda against white people; and 'immigrants don't belong in New Zealand' rhetoric. Smith homed in on one particular interaction that she dubbed 'a flame war polylogue'. The polylogue consisted of: A, a troll who criticised Black Panther's supposed 'blackwashing' of white characters; B, a Muslim who enjoyed the increased representation that the film brought with it; and the audience, who were largely silent observers.
Analysing the discourse, Smith pointed out that A refused to use B's name, instead using terms such as
'Mahommadean'. They quoted B verbatim and then took apart B's claims one by one, which is apparently an alt-right argumentation tactic now. B initially responded politely, using A's name, before shifting to trying to expose A as a troll, trying to exit the conversation, re-entering sincerely and then echoing A in their use of sarcasm and point-by-point rebuttals. B then shifted to making their own inflammatory remarks about Islam, taking on the very persona that A had originally accused them of being. In short, B followed Hardaker's 2015 taxonomy of troll response types to a T. A went quiet for a time and was replaced by trolls C & D before returning and calling B
an alt-right troll. B returned to their original politeness, wished A good luck and exited the conversation. A, however, had the final say, quoting B’s previous inflammatory comments. Others from the audience chimed in, including E who wrote
I felt myself losing IQ reading this. B’s counter-speech, Smith argued, was not directed against A, but against the Es of the world.
In her epilogue, Smith posed a number of questions. Did A’s comments class as
hate speech? Did being able to observe the flame war make us more aware of abusive speech, or should it have been taken down? Finally, do we risk
sanitising the Internet and forcing such views underground to
fester and grow? As an addendum, Smith added that she returned to the video in the new year and found that the first three months’ worth of comments were nowhere to be seen, so clearly the NZ Human Rights Commission had made their choice. This, said Smith, hinted at the central tension of freedom of speech—
we want to know what people are thinking, but what should we do when seeing this can have a psychological effect on people? In my own view, we should embrace exposure on the grounds that it builds resilience in those affected, but I can empathise to a degree with those who choose the seemingly more compassionate view.