Online hate speech
Online hate speech is hate speech that takes place online, typically on social media or other internet platforms, with the purpose of attacking a person or a group on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender.
Hate speech online is situated at the intersection of multiple tensions: it is the expression of conflicts between different groups within and across societies; it is a vivid example of how technologies with a transformative potential such as the Internet bring with them both opportunities and challenges; and it implies complex balancing between fundamental rights and principles, including freedom of expression and the defense of human dignity.
Hate speech is a broad and contested term. Multilateral treaties such as the International Covenant on Civil and Political Rights have sought to define its contours. Multi-stakeholder processes have been initiated to bring greater clarity and to suggest mechanisms for identifying hateful messages. And yet, hate speech continues to be used in everyday discourse largely as a generic term, mixing concrete threats to the security of individuals and groups with cases in which people may simply be venting their anger against authority. Internet intermediaries—organizations that mediate online communication, such as Facebook, Twitter, and Google—have advanced their own definitions of hate speech that bind users to a set of rules and allow companies to limit certain forms of expression. National and regional bodies have sought to promote understandings of the term that are more rooted in local traditions.
The Internet's speed and reach make it difficult for governments to enforce national legislation in the virtual world. Issues around hate speech online bring into clear relief the emergence of private spaces for expression that serve a public function, and the challenges that these spaces pose for regulators. Some of the companies owning these spaces have become more responsive towards tackling the problem of hate speech online.
The character of hate speech online and its relation to offline speech and action are widely discussed—by politicians, activists and academics—but the debates tend to be removed from systematic empirical evidence. The character of perceived hate speech and its possible consequences have led to much emphasis being placed on solutions to the problem and on how they should be grounded in international human rights norms. Yet this very focus has also limited deeper attempts to understand the causes underlying the phenomenon and the dynamics through which certain types of content emerge, diffuse, and lead—or not—to actual discrimination, hostility or violence.
Definitions
Hate speech
Hate speech lies in a complex nexus with freedom of expression; individual, group and minority rights; and concepts of dignity, liberty and equality. Its definition is often contested. In national and international legislation, hate speech refers to expressions that advocate incitement to harm based upon the target's being identified with a certain social or demographic group. It may include, but is not limited to, speech that advocates, threatens, or encourages violent acts. The concept may extend also to expressions that foster a climate of prejudice and intolerance, on the assumption that this may fuel targeted discrimination, hostility and violent attacks. At critical times, such as during elections, the concept of hate speech may be prone to manipulation: accusations of fomenting hate speech may be traded among political opponents or used by those in power to curb dissent and criticism. Hate speech can be identified by approximation through the degrading or dehumanizing functions that it serves. Its messages tend to be of two types. The first is directed at the targeted group and functions to dehumanize and diminish members assigned to that group. It often sounds more or less like:
"Don't be fooled into thinking you are welcome here. You are not wanted, and you and your families will be shunned, excluded, beaten, and driven out, whenever we can get away with it. We may have to keep a low profile right now. But don't get too comfortable. Be afraid."
Another function of hate speech is to let others with similar views know they are not alone, to reinforce a sense of an in-group that is under threat. A typical message sent this time to like-minded individuals can read like:
"We know some of you agree that these people are not wanted here. We know that some of you feel that they are dirty. Know now that you are not alone. There are enough of us around to make sure these people are not welcome. There are enough of us around to draw attention to what these people are really like".
Hate speech relies on tensions, which it seeks to reproduce and amplify. Such speech unites and divides at the same time. It creates "us" and "them".
Characteristics
The proliferation of hate speech online, observed by the UN Human Rights Council Special Rapporteur on Minority Issues, poses a new set of challenges. Both social networking platforms and organizations created to combat hate speech have recognized that hateful messages disseminated online are increasingly common and have elicited unprecedented attention to develop adequate responses. According to HateBase, a web-based application that collects instances of hate speech online worldwide, the majority of cases of hate speech target individuals based on ethnicity and nationality, but incitements to hatred focusing on religion and class have also been on the rise.
While hate speech online is not intrinsically different from similar expressions found offline, there are peculiar challenges unique to online content and its regulation. Those challenges, related to its permanence, itinerancy, anonymity and cross-jurisdictional character, are among the most complex to address.
Hate speech can stay online for a long time in different formats across multiple platforms, which can be linked to repeatedly. As Andre Oboler, the CEO of the Online Hate Prevention Institute, has noted, "The longer the content stays available, the more damage it can inflict on the victims and empower the perpetrators. If you remove the content at an early stage you can limit the exposure. This is just like cleaning litter, it doesn't stop people from littering but if you do not take care of the problem it just piles up and further exacerbates." Twitter's conversations organized around trending topics may facilitate the quick and wide spread of hateful messages, but they also offer the opportunity for influential speakers to shun such messages and possibly end popular threads inciting violence. Facebook, on the contrary, may allow multiple threads to continue in parallel and go unnoticed, creating longer-lasting spaces where certain individuals and groups are offended, ridiculed and discriminated against.
Hate speech online can be itinerant. Even when content is removed, it may find expression elsewhere, possibly on the same platform under a different name or in different online spaces. If a website is shut down, it can quickly reopen using a web-hosting service with less stringent regulations or by relocating to a country with laws that impose a higher threshold for hate speech. The itinerant nature of hate speech also means that poorly formulated thoughts that would not have found public expression and support in the past may now land in spaces where they are visible to large audiences.
Anonymity can also present a challenge to dealing with hate speech online. "[T]he internet facilitates anonymous and pseudonymous discourse, which can just as easily accelerate destructive behavior as it can fuel public discourse". As Drew Boyd, Director of Operations at The Sentinel Project, has stated, "the Internet grants individuals the ability to say horrific things because they think they will not be discovered. This is what makes online hate speech so unique, because people feel much more comfortable speaking hate as opposed to real life when they have to deal with the consequences of what they say". Some governments and social media platforms have sought to enforce real-name policies. Such measures have been deeply contested because they impinge on the right to privacy and its intersection with free expression. Moreover, the majority of online trolling and hate speech attacks come from pseudonymous accounts, which are not necessarily anonymous to everyone. Genuinely anonymous online communications are rare, as they require the user to employ highly technical measures to ensure that he or she cannot be easily identified.
A further complication is the transnational reach of the Internet, which raises issues of cross-jurisdictional co-operation in regard to legal mechanisms for combating hate speech. While there are Mutual Legal Assistance treaties in place among many countries, these are characteristically slow to work. The transnational reach of many private-sector Internet intermediaries may provide a more effective channel for resolving issues in some cases, although these bodies are also often affected by cross-jurisdictional appeals for data. State prosecution of online hate speech can be difficult when the countries involved have different commitments to, and understandings of, what hate speech is. This is particularly apparent in the context of the United States, which both hosts a large portion of internet servers and has a deep-rooted constitutional commitment to freedom of speech.
Unlike the dissemination of hate speech through conventional channels, the dissemination of hate speech online often involves multiple actors, whether knowingly or not. When perpetrators make use of an online social platform to disseminate their hateful messages, they not only hurt their victims but may also violate the platform's terms of service and, at times, even state law, depending on their location. The victims, for their part, may feel helpless in the face of online harassment, not knowing to whom they should turn for help. In the types of responses mapped throughout the study, it appears that collective action, usually undertaken by nongovernmental organizations and lobby groups, has been an effective modus operandi to raise awareness and encourage different stakeholders to take action.
''Stormfront'' precedent
In the aftermath of 2014's dramatic incidents, calls for more restrictive or intrusive measures to contain the Internet's potential to spread hate and violence have been common, as if the links between online and offline violence were well understood. On the contrary, as the following example indicates, appearances may often be deceiving. Stormfront is considered the first "hate website". Launched in March 1995 by a former Ku Klux Klan leader, it quickly became a popular space for discussing ideas related to Neo-Nazism, White nationalism and White separatism, first in the United States of America and then globally. The forum hosts calls for a racial holy war and incitement to use violence to resist immigration, and it is considered a space for recruiting activists and possibly coordinating violent acts. The few studies that have explored who the users of Stormfront actually are depict a more complex picture. Rather than seeing it as a space for coordinating actions, well-known extreme-right activists have accused the forum of being just a gathering place for "keyboard warriors". One of them, for example, as reported by De Koster and Houtman, stated, "I have read quite a few pieces around the forum, and it strikes me that a great fuss is made, whereas little happens. The section activism/politics itself is plainly ridiculous. Not to mention the assemblies where just four people turn up". Even more revealing are some of the responses to these accusations provided by regular members of the website. As one of them argued, "Surely, I am entitled to have an opinion without actively carrying it out. I do not attend demonstrations and I neither join a political party. If this makes me a keyboard warrior, that is all right. I feel good this way. I am not ashamed of it".
De Koster and Houtman surveyed only one national chapter of Stormfront and a non-representative sample of users, but answers like those above should at least invite caution about hypotheses connecting online expression and offline action, even in spaces whose main function is to host extremist views. At the same time, the Southern Poverty Law Center published a study in 2014 that found users of the site "were allegedly responsible for the murders of nearly 100 people in the preceding five years."
Frameworks
International principles
Hate speech is not explicitly mentioned in many international human rights documents and treaties, but it is indirectly invoked by some of the principles related to human dignity and freedom of expression. For example, the 1948 Universal Declaration of Human Rights, which was drafted as a response to the atrocities of World War II, contains the right to equal protection under the law in Article 7, which proclaims that "[a]ll are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination". The UDHR also states that everyone has the right to freedom of expression, which includes "freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers".
The UDHR was decisive in setting a framework and agenda for human rights protection, but the Declaration is non-binding. A series of binding documents have subsequently been created to offer more robust protection for freedom of expression and protection against discrimination. Among those documents, the International Covenant on Civil and Political Rights is the most important and comprehensive in addressing hate speech: it contains the right to freedom of expression in Article 19 and the prohibition of advocacy of hatred that constitutes incitement to discrimination, hostility or violence in Article 20. Other, more tailored international legal instruments contain provisions that have repercussions for the definition of hate speech and the identification of responses to it, such as the Convention on the Prevention and Punishment of the Crime of Genocide, the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), and, to a lesser extent, the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW).
Hate speech and the ICCPR
The ICCPR is the legal instrument most commonly referred to in debates on hate speech and its regulation, although it does not explicitly use the term "hate speech". Article 19, often referred to as part of the "core of the Covenant", provides for the right to freedom of expression. It sets out the right, and it also includes general strictures to which any limitation of the right must conform in order to be legitimate. Article 19 is followed by Article 20, which expressly limits freedom of expression in cases of "advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence". The decision to include this provision, which can be characterised as embodying a particular conceptualisation of hate speech, has been deeply contested. The Human Rights Committee, the United Nations body created by the ICCPR to oversee its implementation, cognizant of the tension, has sought to stress that Article 20 is fully compatible with the right to freedom of expression. In the ICCPR, the right to freedom of expression is not an absolute right. It can legitimately be limited by states under restricted circumstances:
"3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary: (a) For respect of the rights or reputations of others; (b) For the protection of national security or of public order, or of public health or morals."
Between Article 19 and Article 20 there is a distinction between optional and obligatory limitations on the right to freedom of expression. Article 19 states that the exercise of freedom of expression "may therefore be subject to certain restrictions", as long as they are provided by law and necessary for certain legitimate purposes. Article 20 states that any advocacy of hatred that constitutes incitement to discrimination, hostility or violence "shall be prohibited by law". Despite indications on the gravity of speech offenses that should be prohibited by law under Article 20, complexity remains. In particular, there is a grey area in drawing clear distinctions between expressions of hatred, expressions that advocate hatred, and hateful speech that specifically constitutes incitement to the practical harms of discrimination, hostility or violence. While states have an obligation under Article 20 to prohibit speech conceived as "advocacy of hatred that constitutes incitement to discrimination, hostility or violence", how to interpret this standard is not clearly defined.
Other international legal instruments
ICERD
The International Convention on the Elimination of All Forms of Racial Discrimination, which came into force in 1969, also has implications for conceptualising forms of hate speech. The ICERD differs from the ICCPR in three respects. First, its conceptualisation of hate speech is specifically limited to speech that refers to race and ethnicity. It asserts in Article 4, paragraph (a), that state parties "[s]hall declare as an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another color or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof". Second, the obligation imposed by the ICERD on state parties is stricter than that of Article 20 of the ICCPR, covering the criminalisation of racist ideas that do not necessarily incite discrimination, hostility or violence.
The third difference is the issue of intent. The concept of "advocacy of hatred" introduced in the ICCPR is more specific than the discriminatory speech described in the ICERD, since it is taken to require consideration of the intent of the author and not the expression in isolation—this is because "advocacy" is interpreted in the ICCPR as requiring the intent to sow hatred. The Committee on the Elimination of Racial Discrimination has actively addressed hate speech in its General Recommendation 29, in which the Committee recommends that state parties:
"Take measures against any dissemination of ideas of caste superiority and inferiority or which attempt to justify violence, hatred or discrimination against descent-based communities; Take strict measures against any incitement to discrimination or violence against the communities, including through the Internet; Take measures to raise awareness among media professionals of the nature and incidence of descent-based discrimination."
These points, which reflect the ICERD's reference to the dissemination of expression, have significance for the Internet. The expression of ideas in some online contexts may immediately amount to spreading them. This is especially relevant for private spaces that have begun to play a public role, as in the case of many social networking platforms.
Genocide Convention
Similarly to the ICERD, the Genocide Convention aims to protect groups defined by race, nationality or ethnicity, although it also extends its provisions to religious groups. When it comes to hate speech, the Genocide Convention is limited only to acts that publicly incite genocide, recognized as "acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group", regardless of whether such acts are undertaken in peacetime or in wartime. Specifically gender-based hate speech is not covered in depth in international law.
CEDAW
The Convention on the Elimination of All Forms of Discrimination against Women, which entered into force in 1981, imposes obligations on states to condemn discrimination against women and to "prevent, investigate, prosecute and punish" acts of gender-based violence.
Regional responses
Most regional instruments do not have specific articles prescribing the prohibition of hate speech, but they more generally allow states to limit freedom of expression, and these provisions can be applied to specific cases.
American Convention on Human Rights
The American Convention on Human Rights describes limitations on freedom of expression in a manner similar to Article 19 of the ICCPR. The Organization of American States has also adopted a declaration on the principles of freedom of expression, which includes a specific clause stating that "prior conditioning of expressions, such as truthfulness, timeliness or impartiality is incompatible with the right to freedom of expression recognized in international instruments". The Inter-American Court has advised that the imposition of liability for expression requires: "a) the existence of previously established grounds for liability; b) the express and precise definition of these grounds by law; c) the legitimacy of the ends sought to be achieved; d) a showing that these grounds of liability are 'necessary to ensure' the aforementioned ends." The Inter-American System has a Special Rapporteur on Freedom of Expression who conducted a comprehensive study on hate speech. His conclusion was that the Inter-American human rights system differs from the United Nations and European approaches on a key point: the Inter-American system covers only hate speech that actually leads to violence, and solely such speech can be restricted.
African Charter on Human and Peoples' Rights
The African Charter on Human and Peoples' Rights takes a different approach in Article 9, allowing for restrictions on rights as long as they are "within the law". This concept has been criticized, and there is a vast amount of legal scholarship on these so-called "claw-back" clauses and their interpretation. The criticism is mainly aimed at the fact that countries can manipulate their own legislation and weaken the essence of the right to freedom of expression. The Declaration of Principles on Freedom of Expression in Africa elaborates a higher standard for limitations on freedom of expression. It declares that the right "should not be restricted on public order or national security grounds unless there is a real risk of harm to a legitimate interest and there is a close causal link between the risk of harm and the expression".
Cairo Declaration on Human Rights in Islam
In 1990, the Organization of the Islamic Conference adopted the Cairo Declaration on Human Rights in Islam, which calls for the criminalisation of speech that extends beyond cases of imminent violence to encompass "acts or speech that denote manifest intolerance and hate".
Arab Charter on Human Rights
The Arab Charter on Human Rights, adopted by the Council of the League of Arab States in 2004, includes in Article 32 provisions that are relevant also for online communication, as it guarantees the right to "freedom of opinion and expression, and the right to seek, receive and impart information and ideas through any medium, regardless of geographical boundaries". It allows a limitation on a broad basis in paragraph 2: "Such rights and freedoms shall be exercised in conformity with the fundamental values of society".
ASEAN Human Rights Declaration
The ASEAN Human Rights Declaration includes the right to freedom of expression in Article 23. Article 7 of the Declaration provides for general limitations, affirming that "the realisation of human rights must be considered in the regional and national context bearing in mind different political, economic, legal, social, cultural, historical and religious backgrounds".
Charter of Fundamental Rights of the European Union
The Charter of Fundamental Rights of the European Union, which declares the right to freedom of expression in Article 11, has a clause which prohibits abuse of rights. It asserts that the Charter must not be interpreted as implying any "limitation to a greater extent than is provided for therein". An example of a limitation which implies a strict test of necessity and proportionality is the provision on freedom of expression in the European Convention on Human Rights, which underlines that the exercise of freedom of expression carries duties and responsibilities. It "may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary".
The European Court of Human Rights is careful to distinguish between hate speech and the right of individuals to express their views freely, even if others take offence. There are regional instruments relevant specifically to online hate speech. The Council of Europe in 2000 issued a General Policy Recommendation on Combating the Dissemination of Racist, Xenophobic and Anti-Semitic Material via the Internet. The creation of the CoE Convention on Cybercrime in 2001, which regulates mutual assistance regarding investigative powers, provides signatory countries with a mechanism to deal with computer data, which would include transnational hate speech online. In 2003 the CoE launched an additional protocol to the Convention on Cybercrime which addresses online expression of racism and xenophobia.
The convention and its protocol were opened for signature and ratification by countries outside Europe, and other countries, such as Canada and South Africa, are already party to the convention. The Protocol imposes an obligation on Member States to criminalise racist and xenophobic insults online of "persons for the reason that they belong to a group distinguished by race, color, descent or national or ethnic origin, as well as religion, if used as a pretext for any of these factors; or a group of persons which is distinguished by any of these characteristics".
Private spaces
Internet intermediaries such as social networking platforms, Internet Service Providers and Search Engines stipulate in their terms of service how they may intervene in allowing, restricting, or channelling the creation of and access to specific content. A vast amount of online interaction occurs on social networking platforms that transcend national jurisdictions, and these platforms have also developed their own definitions of hate speech and measures to respond to it. For a user who violates the terms of service, the content he or she has posted may be removed from the platform, or its access may be restricted so that it can be viewed only by a certain category of users.
The principles that inspire terms of service agreements and the mechanisms that each company develops to ensure their implementation have significant repercussions on people's ability to express themselves online as well as to be protected from hate speech. Most intermediaries have to enter into negotiations with national governments to an extent that varies according to the type of intermediary, the areas where the company is registered, and the legal regime that applies. As Tsesis explains, "[i]f transmissions on the Internet are sent and received in particular locations, then specific fora retain jurisdiction to prosecute illegal activities transacted on the Internet". Internet Service Providers are the most directly affected by national legislation because they have to be located in a specific country to operate. Search Engines, while they can modify search results for self-regulatory or commercial reasons, have increasingly tended to adapt to the intermediary liability regimes of both their registered home jurisdictions and the other jurisdictions in which they provide their services, either removing links to content proactively or upon request by authorities.
All Internet intermediaries operated by private companies are also expected to respect human rights. This is set out in the Guiding Principles on Business and Human Rights elaborated by the United Nations Office of the High Commissioner for Human Rights. The document emphasizes corporate responsibility in upholding human rights. In principle 11, it declares that: "Business enterprises should respect human rights. This means that they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved". The United Nations Guiding Principles also indicate that in cases in which human rights are violated, companies should "provide for or cooperate in their remediation through legitimate processes". In the case of Internet intermediaries and conceptions of hate speech, this means that they should ensure that measures are in place to provide a commensurate response.
Social responses
Civil society
As Myanmar has transitioned towards greater openness, access to the Internet there has grown at unprecedented rates. In this context, however, social media have often been used by some to spread calls to violence. In 2014, the UN Human Rights Council Special Rapporteur on Minority Issues expressed her concern over the spread of misinformation, hate speech and incitement to violence, discrimination and hostility in the media and on the Internet, particularly targeted against a minority community. The growing tension online has gone in parallel with cases of actual violence leaving hundreds dead and thousands displaced. One challenge in this process has concerned ethnic and religious minorities. In 2013, 43 people were killed in clashes that erupted after a dispute in Rakhine State in the western part of the country. A year earlier, more than 200 people had been killed and thousands displaced because of ethnic violence, which erupted after an alleged rape case. Against this backdrop, the rapid emergence of new online spaces, albeit for a fraction of the population, has reflected some of these deeply rooted tensions in a new form.
Dealing with intolerance and hate speech online is an emerging issue. Facebook has rapidly become the platform of choice for those citizens making their first steps online. In this environment there have been individuals and groups who have championed a more aggressive use of the medium, especially when feeling protected by a sense of righteousness and by claims to be acting in defense of the national interest. Political figures have also used online media for particular causes, and derogatory terms have been used on social media in reference to minorities. In this complex situation, a variety of actors has begun to mobilize, seeking to offer responses that can avoid further violence.
Facebook has sought to take a more active role in monitoring the uses of the social network platform in Myanmar, developing partnerships with local organizations and making guidelines on reporting problems accessible in Burmese.
Local civil society has constituted a strong voice in openly condemning the spread of online hate speech while at the same time calling for alternatives to censorship. Among the most innovative responses has been Panzagar, which in Burmese means "flower speech", a campaign launched by blogger and activist Nay Phone Latt to openly oppose hate speech. The goal of the initiative was to offer a joyful example of how people can interact, both online and offline. Local activists have focused on local solutions, rather than trying to mobilize global civil society on these issues. This is in contrast to some other online campaigns that have been able to attract the world's attention to relatively neglected problems. Initiatives such as those promoted by the Save Darfur Coalition for the civil war in Sudan, or the organization Invisible Children with the Kony2012 campaign that denounced the atrocities committed by the Lord's Resistance Army, are popular examples. As commentaries on these campaigns have pointed out, such global responses may have negative repercussions on the ability to find local solutions.
Private companies
Internet intermediaries have developed disparate definitions of hate speech and guidelines to regulate it. Some companies do not use the term hate speech but have a descriptive list of terms related to it. Yahoo!'s terms of service prohibit the posting of "content that is unlawful, harmful, threatening, abusive, harassing, tortious, defamatory, vulgar, obscene, libellous, invasive of another's privacy, hateful, or racially, ethnically or otherwise objectionable".

Twitter's punishments for violations range from suspending a user's ability to tweet until they take down the offensive or hateful post to removing an account entirely. In a statement following the implementation of its new policies, Twitter said, "In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process. ... We'll evaluate and iterate on these changes in the coming days and weeks, and will keep you posted on progress along the way." These changes come at a time when action is being taken to prevent hate speech around the globe, including new laws in Europe that impose fines on sites that fail to address hate speech reports within 24 hours.
YouTube, a subsidiary of Google, has outlined a clear "Hate Speech Policy" among several other user policies on its website. The policy is worded as follows: "We encourage free speech and try to defend your right to express unpopular points of view, but we don't permit hate speech. Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: race or ethnic origin, religion, disability, gender, age, veteran status, sexual orientation/gender identity". YouTube has built in a user reporting system to counteract the growing trend of hate speech: users can anonymously report another user for content they deem inappropriate. The content is then reviewed against YouTube policy and age restrictions, and either taken down or left alone.

Facebook's hate speech policies are enforced by 7,500 content reviewers. Because this requires difficult decision-making, controversy arises among content reviewers over the enforcement of policies, and some users feel the enforcement is inconsistent. One past example involved two separate but similarly graphic posts that wished death on members of a specific religion. Both posts were flagged by users and reviewed by Facebook staff, yet only one was removed, even though they carried almost identical sentiments. Regarding hate speech on the platform, Facebook's Vice President of Global Operations, Justin Osofsky, stated, "We're sorry for the mistakes we have made — they do not reflect the community we want to help build…We must do better."
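The anonymous report-and-review loop that YouTube describes above can be sketched as a small decision function. This is an illustrative model only, not any platform's actual implementation; the `Report` type, the policy sets, and the decision labels are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    REMOVE = "remove"            # taken down for a policy violation
    AGE_RESTRICT = "age_restrict"  # left up, but behind an age gate
    LEAVE = "leave"              # no action

@dataclass
class Report:
    content_id: str
    reason: str
    # Note: no reporter field — reports are anonymous, as described above.

def review(report: Report, policy_violations: set, age_restricted: set) -> Decision:
    """Hypothetical review step: check flagged content against policy lists."""
    if report.content_id in policy_violations:
        return Decision.REMOVE
    if report.content_id in age_restricted:
        return Decision.AGE_RESTRICT
    return Decision.LEAVE
```

The key property the sketch captures is that a report only queues content for review; the outcome depends on the policy check, not on who reported it.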
There has been additional controversy due to the specificity of Facebook's hate speech policies. On many occasions, users have reported status updates and comments that they feel are insensitive and convey hatred. However, these posts do not technically breach any Facebook policy, because they do not attack others based on the company's list of protected classes. For example, the statement "Female sports reporters need to be hit in the head with hockey pucks" would not be considered hate speech on Facebook's platform and therefore would not be removed: while the company protects against gender-based hatred, it does not protect against hatred based on occupation.
Facebook also tries to accommodate users who share hate speech content with the intent of criticizing it. In these cases, users are required to make it clear that their intention is to educate others; if the intention is unclear, Facebook reserves the right to censor the content. When Facebook initially flags content that may contain hate speech, it assigns the content to a three-tier severity scale, where Tier 1 is the most severe and Tier 3 the least. Tier 1 includes anything that conveys "violent speech or support for death/disease/harm." Tier 2 covers content that slanders another user's image mentally, physically, or morally. Tier 3 includes anything that can potentially exclude or discriminate against others, or that uses slurs about protected groups, but does not necessarily apply to arguments to restrict immigration or criticism of existing immigration policies.
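The three-tier scale above can be modeled as a simple severity lookup. This is a minimal sketch under stated assumptions: the label strings are paraphrases of the tier descriptions, not Facebook's internal taxonomy, and real moderation relies on trained classifiers and human reviewers rather than label matching.

```python
# Hypothetical labels paraphrasing the tier descriptions in the text.
TIER_LABELS = {
    1: {"violent speech", "support for death", "support for disease", "support for harm"},
    2: {"mental slander", "physical slander", "moral slander"},
    3: {"exclusion", "discrimination", "slur against protected group"},
}

def assign_tier(labels):
    """Return the most severe matching tier (1 = most severe), or None."""
    for tier in (1, 2, 3):
        if TIER_LABELS[tier] & set(labels):
            return tier
    return None
```

Checking tiers in ascending order means content carrying both a Tier 1 and a Tier 3 label is classified at the more severe Tier 1.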
In March 2019, Facebook banned content supporting white nationalism and white separatism, extending a previous ban of white supremacy content. In May 2019, it announced bans on several prominent people for violations of its prohibition on hate speech, including Alex Jones, Louis Farrakhan, Milo Yiannopoulos, Laura Loomer, and Paul Nehlen.
Microsoft has specific rules concerning hate speech for a variety of its applications. Its policy for mobile phones prohibits applications that "contain any content that advocates discrimination, hatred, or violence based on considerations of race, ethnicity, national origin, language, gender, age, disability, religion, sexual orientation, status as a veteran, or membership in any other social group." The company also has rules regarding online gaming, which prohibit any communication that is indicative of "hate speech, controversial religious topics and sensitive current or historical events".

Media and information literacy
aims to help people engage in a digital society by being able to use, understand, inquire, create, communicate and think critically, while being able to effectively access, organize, analyze, evaluate, and create messages in a variety of forms.

Citizenship education focuses on preparing individuals to be informed and responsible citizens through the study of rights, freedoms, and responsibilities, and has been variously employed in societies emerging from violent conflict. One of its main objectives is raising awareness of the political, social and cultural rights of individuals and groups, including freedom of speech and the responsibilities and social implications that flow from it. The concern of citizenship education with hate speech is twofold: it encompasses the knowledge and skills to identify hate speech, and it should enable individuals to counteract messages of hatred. One of its current challenges is adapting its goals and strategies to the digital world, providing not only argumentative but also technological knowledge and skills that a citizen may need to counteract online hate speech.
Information literacy cannot avoid issues such as the rights to free expression and privacy, critical citizenship and the fostering of empowerment for political participation. Multiple and complementary literacies become critical. The emergence of new technologies and social media has played an important role in this shift: individuals have evolved from being merely consumers of media messages into producers, creators and curators of information, resulting in new models of participation that interact with traditional ones, such as voting or joining a political party. Teaching strategies are changing accordingly, from fostering critical reception of media messages to also empowering the creation of media content.
The concept of media and information literacy itself continues to evolve, being augmented by the dynamics of the Internet. It is beginning to embrace issues of identity, ethics and rights in cyberspace. Some of these skills can be particularly important when identifying and responding to hate speech online.
A series of initiatives has aimed both at providing information and at offering practical tools for Internet users to be active digital citizens:
- ‘No place for hate' by Anti-Defamation League, United States;
- ‘In other words' project by Provincia di Mantova and the European Commission;
- ‘Facing online hate' by MediaSmarts, Canada;
- ‘No hate speech movement' by Youth Department of the Council of Europe, Europe;
- ‘Online hate' by the Online Hate Prevention Institute, Australia.