[13][14] Many social media platforms have acknowledged this path of radicalization and have taken measures to prevent it, including the removal of extremist figures and rules against hate speech and misinformation.
Influence from external sources such as the internet can be gradual so that the individual is not immediately aware of their changing understanding or surroundings.
Racist imagery or humor may be used by these individuals under the guise of irony or insincerity to make alt-right ideas palatable and acceptable to newer audiences.
This is facilitated through an "Alternative Influence Network", in which various right-wing scholars, pundits, and internet personalities interact with one another to boost the performance of their content.
This allows newer audiences to be exposed to extreme content when videos that promote misinformation and conspiracy theories gain traction.
[14] Major personalities in this chain often have a presence on Facebook and Twitter, though YouTube is typically their primary platform for messaging and earning income.
Harvard Political Review has described this process as the "exploitation of latent misogyny and sexual frustration through 'male bonding' gone horribly awry".
[12] Alt-right content on the internet spreads ideology that is similar to earlier white supremacist and fascist movements.
The internet packages the ideology differently, often in a way that is more palatable and thus more successful in delivering it to a larger number of people.
Because people can control who and what they engage with online, they can avoid hearing any opinion or idea that conflicts with their prior beliefs.
The strong sense of community and belonging that comes with it is a large contributing factor for people joining the alt-right and adopting it as an identity.
This has complicated efforts by experts to track extremism and predict acts of domestic terrorism, as there is no reliable way of determining who has been radicalized or whether they are planning to carry out political violence.
On YouTube, content that expresses support of extremism may have monetization features removed, may be flagged for review, or may have public user comments disabled.
[26] An August 2019 study conducted by the Universidade Federal de Minas Gerais and École polytechnique fédérale de Lausanne, and presented at the ACM Conference on Fairness, Accountability, and Transparency 2020, used information from the earlier Data & Society research and the Anti-Defamation League (ADL) to categorize the levels of extremism of 360 YouTube channels.
[2][5][6][7] A 2020 study published in The International Journal of Press/Politics argued that the "emerging journalistic consensus" that YouTube's algorithm radicalizes users to the far-right "is premature."
Instead, the study found that "consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole."[8]
A 2022 study published by the City University of New York found that "little systematic evidence exists to support" the claim that YouTube's algorithm radicalizes users, adding that exposure to extremist views "on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment", and that "non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered".