Primarily, the reports revealed that, based on internally commissioned studies, the company was fully aware of Instagram's negative impact on teenage users and of Facebook activity's contribution to violence in developing countries.[1][2][3] After publicly revealing her identity on 60 Minutes,[4][5] Haugen testified before the U.S. Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security about the content of the leaked documents and the complaints.
Revelations included reporting of special allowances on posts from high-profile users ("XCheck"), subdued responses to flagged information on human traffickers and drug cartels, a shareholder lawsuit concerning the cost of Facebook (now Meta) CEO Mark Zuckerberg's personal liability protection in resolving the Cambridge Analytica data scandal, an initiative to increase pro-Facebook news within user news feeds, and internal knowledge of how Instagram exacerbated negative self-image in surveyed teenage girls.[9]
Siva Vaidhyanathan wrote for The Guardian that the documents were from a team at Facebook "devoted to social science and data analytics that is supposed to help the company's leaders understand the consequences of their policies and technological designs."[12]
Although Facebook had earlier claimed that its rules apply equally to everyone on the platform, internal documents shared with The Wall Street Journal point to special policy exceptions reserved for VIP users, including celebrities and politicians.[16]
Starting on October 22, a group of news outlets began publishing articles based on documents provided by Haugen's lawyers, collectively referred to as The Facebook Papers.[17][18]
The New York Times pointed to internal discussions in which employees raised concerns that Facebook was spreading content about the QAnon conspiracy theory more than a year before the 2020 United States elections.[21][22]
Another of the whistleblower complaints Haugen filed with the SEC alleged that the company misled investors and the general public about enforcement of its terms of service due to such whitelisting under the XCheck program.[29]
Documents reveal that Facebook has responded to these incidents by removing posts that violate its policies, but has not made any substantial effort to prevent repeat offenses.
In 2015, in addition to the Like button on posts, Facebook introduced a set of emotional reaction options: love, haha, yay, wow, sad, and angry.
Results of the study showed that within three weeks, the fake account's news feed was being presented with pornography and was "filled with polarizing and graphic content, hate speech and misinformation", according to an internal company report.[38][39]
In 2021, Facebook developed a new strategy for addressing harmful content on its site, implementing measures designed to reduce and suppress the spread of movements it deemed hateful.[40]
According to The Wall Street Journal, documents show that in 2019, Facebook reduced the time human reviewers spent on hate-speech complaints, shifting toward a stronger dependence on its artificial intelligence systems to police such content.
However, internal documents from employees claim that the AI has been largely unsuccessful, struggling to detect videos of car crashes and cockfighting and to understand hate speech in foreign languages.[54]
"[54] In December 2021, news broke on The Wall Street Journal pointing to Meta's lobbying efforts to divide US lawmakers and "muddy the waters" in Congress, to hinder regulation following the 2021 whistleblower leaks.
According to the article, the company's goal was to "muddy the waters, divide lawmakers along partisan lines and forestall a cross-party alliance" against Facebook (now Meta) in Congress.