Timnit Gebru

Timnit Gebru (Amharic and Tigrinya: ትምኒት ገብሩ; born 1982/1983) is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias, and data mining.

In December 2020, public controversy erupted over the circumstances surrounding Gebru's departure from Google, where she was technical co-lead of the Ethical Artificial Intelligence Team.

Gebru had coauthored a paper on the risks of large language models (LLMs) acting as stochastic parrots, and submitted it for publication.

Gebru settled in Somerville, Massachusetts, to attend high school, where she says she immediately started to experience racial discrimination: despite her record as a high achiever, some teachers refused to allow her to take certain Advanced Placement courses.

In the paper, she condemned the field's "boys' club culture", recounting her experiences at conference gatherings where drunken male attendees sexually harassed her, and criticized the hero worship of the field's celebrities.

Gebru joined Apple as an intern while at Stanford, working in its hardware division making circuitry for audio components, and was offered a full-time position the following year.

Gebru also criticized the way the media covers Apple and other tech giants, saying that the press helps shield such companies from public scrutiny.

To investigate alternatives to costly demographic surveys, Gebru combined deep learning with Google Street View imagery to estimate the demographics of United States neighbourhoods, showing that socioeconomic attributes such as voting patterns, income, race, and education can be inferred from the cars visible in street scenes.
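The sketch below is a minimal, hypothetical illustration of the inference step in that kind of study, not Gebru's actual pipeline: it assumes per-neighbourhood car-attribute fractions have already been extracted from imagery, generates synthetic data for them, and fits a regression predicting a demographic outcome. All feature names and numbers are invented.

```python
# Illustrative sketch (synthetic data, invented features): predict a
# demographic outcome for a region from the mix of vehicles observed there.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: per-neighbourhood fractions of detected vehicle
# types, e.g. [sedans, pickup_trucks, minivans, luxury_cars].
n_regions = 500
X = rng.dirichlet(alpha=[2.0, 2.0, 1.0, 1.0], size=n_regions)

# Synthetic target: a vote share loosely tied to the pickup-truck fraction
# (echoing the sedan-versus-pickup observation), plus noise.
y = 0.65 - 0.5 * X[:, 1] + rng.normal(scale=0.05, size=n_regions)

# Fit a regularized linear model and report held-out predictive fit.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```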

In the summer of 2017, Gebru joined Microsoft as a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) lab.

In 2019, Gebru and other artificial intelligence researchers "signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color", citing a study by MIT researchers which showed that Amazon's facial recognition system had more trouble identifying darker-skinned women than any other technology company's facial recognition software.
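An audit of that kind reports error rates per demographic subgroup rather than a single aggregate accuracy. Below is a minimal sketch of such a disaggregated evaluation; the predictions and labels are entirely invented and do not come from the MIT study.

```python
# Illustrative sketch of a disaggregated audit (data invented): compare a
# classifier's error rate across subgroups instead of one overall accuracy.
from collections import defaultdict

# (subgroup, predicted_gender, true_gender) triples; all values invented.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # a misclassification
    ("darker-skinned female", "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    errors[group] += int(predicted != actual)

# Report an error rate per subgroup, which is what exposes disparities.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {errors[group]}/{totals[group]} errors ({rate:.0%})")
```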

In a New York Times interview, Gebru further said that she believes facial recognition is currently too dangerous to be used for law enforcement and security purposes.

In a six-page email sent to an internal collaboration list, Gebru described how she was summoned to a meeting at short notice and asked to withdraw the paper; she requested the names of everyone who made that decision, their reasons, and advice on how to revise the paper to Google's liking.

Dean went on to publish his internal email regarding Gebru's departure and his thoughts on the matter, defending Google's research paper review process as one designed to "tackle ambitious problems, but to do so responsibly."

Gebru was repeatedly harassed on Twitter by sock puppet accounts and internet trolls who made racist and obscene comments.

Gebru and her supporters alleged that some of the harassment came from machine learning researcher Pedro Domingos and businessman Michael Lissack, who had said that her work was "advocacy disguised as science".

Margaret Mitchell, Gebru's co-lead on the Ethical AI team, took to Twitter to criticize Google's treatment of employees working to eliminate bias and toxicity in AI, including its alleged dismissal of Gebru.

Additionally, Dean said that there would be changes to how research papers on "sensitive" topics were reviewed, and that diversity, equity, and inclusion goals would be reported quarterly to Alphabet's board of directors.

The proposal followed a less formal request earlier that year from a group of Senate Democratic Caucus members led by Cory Booker, which also cited Gebru's separation from the company and her work.

In December 2021, Reuters reported that Google was under investigation by the California Department of Fair Employment and Housing (DFEH) for its treatment of Black women,[62] following numerous formal complaints of discrimination and harassment by current and former workers.

One of the organization's initial projects aims to use AI to analyze satellite imagery of townships in South Africa in order to better understand the legacies of apartheid.

Gebru and Émile P. Torres coined the acronym TESCREAL to criticize what they see as a group of overlapping futurist philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

Gebru considers these philosophies to be a right-leaning influence in Big Tech and compares their proponents to "the eugenicists of the 20th century" for producing harmful projects that they portray as "benefiting humanity".

Gebru discussing her findings that one can predict, with some reliability, the way an American will vote from the type of vehicle they drive