Rage-baiting

Rage-farming, a term that has been cited since at least January 2022, is an offshoot of rage-baiting in which the outrage of the person being provoked is farmed, or manipulated into online engagement, through rage-seeding that helps amplify the message of the original content creator.

Political scientist Jared Wesley of the University of Alberta stated in 2022 that the use of rage farming was on the rise, with right-wing politicians employing the technique by "promoting conspiracy theories and misinformation."[13][14]

The term rage bait, which has been cited since at least 2009, is a negative form of clickbait, as it relies on manipulating users to respond in kind to offensive, inflammatory "headlines", memes, tropes, or comments.

Algorithms on social media platforms such as Facebook, Twitter, TikTok, Instagram, and YouTube were found to reward increased engagement, both positive and negative, by directing traffic to posts and amplifying them.[1]

In an Atlantic article on Republican strategy, American writer Molly Jong-Fast described rage farming as "the product of a perfect storm of fuckery, an unholy mélange of algorithms and anxiety".

While the goal of some clickbait is to generate revenue, it can also be used as an effective tactic to influence people on social media platforms such as Facebook, Twitter, Instagram, and YouTube.[15]

A Westside Seattle Herald article published in May 2016 cited the online Urban Dictionary definition of rage bait: "a post on social media by a news organisation designed expressly to outrage as many people as possible in order to generate interaction."[4]

A 2006 article in Time magazine described how Internet trolls post incendiary comments online with the sole purpose of provoking an argument, even on the most banal topics.[17]

The example cited was an advertisement published on 15 December 2018 by an Irish digital media company that falsely claimed two thirds of people wanted Santa to be either female or gender neutral.

Facebook has been "blamed for fanning sectarian hatred, steering users toward extremism and conspiracy theories, and incentivizing politicians to take more divisive stands," according to a 2021 Washington Post report.[30]

One of Facebook's researchers raised concerns that the algorithms rewarding "controversial" posts, including those that incited outrage, could inadvertently result in more spam, abuse, and clickbait.

Algorithms also allow politicians to bypass legacy media outlets that fact-check, giving them direct access to a targeted, uncritical audience that is highly receptive to their messaging, even when it is misinformation.[19]

By 2019, Facebook's data scientists had confirmed that posts that elicited the angry emoji were "disproportionately likely to include misinformation, toxicity and low-quality news."[33]

A 2024 Rolling Stone article discusses the rise of "rage-bait" influencers on TikTok who create content designed to provoke anger and generate engagement.

Influencers such as Winta Zesu and Louise Melcher produce staged, controversial videos that often go viral across multiple platforms, drawing in viewers who may not realize the content is fabricated.

The company invested only 16% of its budget in fighting misinformation and hate speech in countries outside the United States, such as France, Italy, and India, where English is not the native language.[9]

Since at least 2019, Facebook employees were aware of how vulnerable these countries, like India, were to "abuse by bad actors and authoritarian regimes", but did nothing to block accounts that published hate speech and incited violence.[9]

In its 434-page 2019 report submitted to the Office of the United Nations High Commissioner for Human Rights, the Independent International Fact-Finding Mission on Myanmar investigated the role of social media in disseminating hate speech and inciting violence in the anti-Muslim riots and the Rohingya genocide.