[6] In addition, AI may miss important context; for example, communities that support people struggling with self-harm, suicidal thoughts, or past sexual violence may inadvertently be caught in automated moderation.
In response, a TikTok spokeswoman told The New York Times that users' fears are misplaced, noting that many popular videos discuss sex-adjacent topics.
[12] A 2022 poll showed that nearly a third of American social media users reported using "emojis or alternative phrases" to subvert content moderation.
[11] Algospeak uses techniques akin to those used in Aesopian language to conceal the intended meaning from automated content filters, while being understandable to human readers.
[3] Other similar adoptions of obfuscated speech include Cockney rhyming slang and Polari, formerly used by London gangs and British gay men respectively.
A high-profile incident occurred when Italian-American actress Julia Fox made a seemingly unsympathetic comment on a TikTok post mentioning "mascara", not knowing its obfuscated meaning of sexual assault.
[7][15] In an interview study, creators said that the evolving nature of content moderation pressures them to constantly adapt their use of algospeak, which makes them feel less authentic.