[3] In national and international legislation, hate speech refers to expressions that advocate incitement to harm, including acts of discrimination, hostility, radicalization, verbal and/or physical violence, based upon the targets' social and/or demographic identity.
As Andre Oboler, the CEO of the Online Hate Prevention Institute, has noted, "The longer the content stays available, the more damage it can inflict on the victims and empower the perpetrators."
Facebook, by contrast, may allow multiple threads to continue in parallel and go unnoticed, creating longer-lasting spaces that offend, discriminate against, and ridicule certain individuals and groups.
If a website is shut down, it can quickly reopen using a web-hosting service with less stringent regulations or by relocating to a country whose laws impose a higher threshold for hate speech.
The itinerant nature of hate speech also means that poorly formulated thoughts, or under-the-influence behavior, that would not have found public expression and support in the past may now land in spaces where they are visible to large audiences.
[12] Social media has also provided a platform for radical and extremist political or religious groups to form, network, and collaborate to spread anti-establishment and anti-political-correctness messages and to promote beliefs and ideologies that are racist, anti-feminist, homophobic, transphobic, etc.
[3] A further complication is the transnational reach of the Internet, which raises issues of cross-jurisdictional cooperation in regard to legal mechanisms for combating hate speech.
The transnational reach of many private-sector Internet intermediaries may provide a more effective channel for resolving issues in some cases, although these bodies are also often affected by cross-jurisdictional appeals for data (such as revealing the identity of the author(s) of particular content).
[16] For example, a user might post or comment something that qualifies as hate speech or violates community guidelines, but if the target word is misspelled or some letters are replaced with symbols, AI systems will not recognize it.
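As a minimal sketch of why this happens, the following Python snippet (hypothetical; the placeholder blocklist term, the naive_filter and normalized_filter functions, and the substitution map are illustrative assumptions, not any platform's actual moderation code) shows a keyword filter catching a term only when it appears verbatim, partially recovering symbol substitutions through character normalization, and still missing a simple misspelling:

```python
# Illustrative only: a naive keyword filter misses obfuscated variants of a
# blocked term, while simple character normalization catches some of them.
import re

BLOCKLIST = {"slurword"}  # placeholder term standing in for a real slur

# Map common symbol/digit substitutions back to letters (illustrative only).
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "$": "s", "@": "a", "!": "i"})

def naive_filter(text: str) -> bool:
    """Flag text only if a blocklisted term appears verbatim."""
    words = re.findall(r"\w+", text.lower())
    return any(word in BLOCKLIST for word in words)

def normalized_filter(text: str) -> bool:
    """Undo simple symbol substitutions before matching; still misses misspellings."""
    cleaned = text.lower().translate(LEET_MAP)
    words = re.findall(r"\w+", cleaned)
    return any(word in BLOCKLIST for word in words)

if __name__ == "__main__":
    samples = [
        "that slurword should leave",   # verbatim: caught by both filters
        "that $lurw0rd should leave",   # symbol substitution: missed by naive_filter
        "that slurwrod should leave",   # misspelling: missed by both filters
    ]
    for s in samples:
        print(f"{s!r}: naive={naive_filter(s)}, normalized={normalized_filter(s)}")
```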
"[21] De Koster and Houtman surveyed only one national chapter of Stormfront and a non-representative sample of users, but answers like those above should at least invite to caution towards hypotheses connecting expressions and actions, even in spaces whose main function is to host extremist views.
While states have an obligation to prohibit speech conceived as "advocacy of hatred that constitutes incitement to discrimination, hostility or violence", consistent with Article 20 (2),[29] how such speech is to be interpreted is not clearly defined.
Article 7 of the Declaration provides for general limitations, affirming, "the realization of human rights must be considered in the regional and national context bearing in mind different political, economic, legal, social, cultural, historical and religious backgrounds."
[47] The CoE Convention on Cybercrime, adopted in 2001, regulates mutual assistance regarding investigative powers and provides signatory countries with a mechanism to deal with computer data, which would include transnational hate speech online.
[3] The principles that inspire terms of service agreements, and the mechanisms each company develops to ensure their implementation, have significant repercussions on people's ability to express themselves online as well as to be protected from hate speech.
[55] One of the most notable "clans," Puerto Reekan Killaz, has created an online gaming space where Black and Latina women of the LGBTQIA+ community can play without risk of racism, nativism, homophobia, sexism, and sexual harassment.
[58] Online hate speech and cyberbullying against religious and ethnic minorities, women, and other socially marginalized groups have long been issues that are downplayed or ignored in the Islamic Republic of Pakistan.
Facebook has sought to take a more active role in monitoring how its platform is used in Myanmar, developing partnerships with local organizations and making guidelines on reporting problems accessible in Burmese.
[72][3] Local civil society has been a strong voice in openly condemning the spread of online hate speech while at the same time calling for alternatives to censorship.
Initiatives such as those promoted by the Save Darfur Coalition for the civil war in Sudan, or by the organization Invisible Children with the Kony 2012 campaign, which denounced the atrocities committed by the Lord's Resistance Army, are popular examples.
[75] According to The Network Against Hate Speech, many Facebook posts called for "genocidal attacks against an ethnic group or a religion — or both at the same time; and ordering people to burn civilians' properties, kill them brutally, and displace them."
Yahoo!'s terms of service prohibit the posting of "content that is unlawful, harmful, threatening, abusive, harassing, tortious, defamatory, vulgar, obscene, libellous, invasive of another's privacy, hateful, or racially, ethnically or otherwise objectionable."
Twitter's definition of hate speech ranges from "violent threats" and "wishes for the physical harm, death, or disease of individuals or groups" to "repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone."
YouTube's community guidelines state that "Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: race or ethnic origin, religion, disability, gender, age, veteran status, sexual orientation/gender identity".
[91] Commenting on hate speech on the platform, Facebook Vice President of Global Operations Justin Osofsky stated, "We’re sorry for the mistakes we have made — they do not reflect the community we want to help build…We must do better."
[96] In May 2019, it announced bans on several prominent people for violations of its prohibition on hate speech, including Alex Jones, Louis Farrakhan, Milo Yiannopoulos, Laura Loomer, and Paul Nehlen.
Microsoft's policy for mobile phones prohibits applications that "contain any content that advocates discrimination, hatred, or violence based on considerations of race, ethnicity, national origin, language, gender, age, disability, religion, sexual orientation, status as a veteran, or membership in any other social group."
[103] The company also has rules regarding online gaming, which prohibit any communication that is indicative of "hate speech, controversial religious topics and sensitive current or historical events."
Researcher Robert Mark Simpson concluded that combating hate speech on youth-targeted media "might bear more of a resemblance to regulations governing adult entertainment than to prohibitions on Holocaust denial."
[109] One of its current challenges is adapting its goals and strategies to the digital world, providing citizens not only with the argumentative but also the technological knowledge and skills they may need to counteract online hate speech.
Individuals have evolved from being only consumers of media messages to producers, creators, and curators of information, resulting in new models of participation that interact with traditional ones, such as voting or joining a political party.