[1] According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter.
The bot was created by Microsoft's Technology and Research division and its Bing division,[3] and named "Tay" as an acronym for "thinking about you".
[6] Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.
[9] Tay started replying to other Twitter users, and could also caption photos provided to it in the form of Internet memes.
[6] Some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet, such as "redpilling" and "Gamergate".
"[17][18] Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users."
[24] A few hours after the incident, Microsoft software developers announced a vision of "conversation as a platform" using various bots and programs, perhaps motivated by the reputation damage done by Tay.
Microsoft has stated that it intends to re-release Tay "once it can make the bot safe",[4] but has not made any public efforts to do so.
[27] Gab, a social media platform, has launched a number of chatbots, one of which is named Tay and uses the same avatar as the original.