On Wednesday 2 December 2020, it was announced that Timnit Gebru, a leading AI ethics researcher at Google, had been fired from her post. The news shocked the AI research community: Gebru is known for her groundbreaking work revealing the discriminatory nature of facial recognition, for co-founding the Black in AI affinity group, and for relentlessly advocating for diversity in the tech industry. What exactly was Gebru fired for?
Gebru announced on Twitter that she had been terminated from her position as Google’s ethical AI co-lead. “Apparently my manager’s manager sent an email to my direct reports saying she accepted my resignation. I hadn’t resigned,” she explained.
Following the news, in an interview with Bloomberg on Thursday 3 December, Gebru said that the firing happened after a protracted fight with her superiors over the publication of a specific AI ethics research paper. One of Gebru’s tweets along with an internal email from Jeff Dean, head of Google AI, both suggest that the paper was critical of the environmental costs and embedded biases of training an AI model.
The paper in question, which Gebru had written with four Google colleagues and two external collaborators, had been submitted to a research conference scheduled for next year. After an internal review, she was asked to retract the paper or remove the names of the Google employees. She responded that she would do so only if her superiors met a series of conditions; if they could not, she would “work on a last date,” she said.
Gebru also sent a frustrated email (which Platformer managed to obtain) to the Google Brain Women and Allies internal mailing list, detailing the constant hardships she had experienced as a Black female researcher. “We just had a Black research all hands with such an emotional show of exasperation,” she wrote. “Do you know what happened since? Silencing in the most fundamental way possible.”
Shortly afterwards, while Gebru was on vacation, she received a termination email from Megan Kacholia, the VP of engineering at Google Research. “Thanks for making your conditions clear,” the email stated, as tweeted by Gebru. “We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.”
Following Gebru’s tweets and the support she received online, Dean sent an internal email to Google’s AI group with his account of the situation. He said that Gebru’s paper “didn’t meet our bar for publication” because “it ignored too much relevant research.” He also added that Gebru’s conditions included “revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback.”
While many details surrounding the exact progression of events and the cause of Gebru’s termination remain unclear, the sudden spotlight on her Twitter account has drawn renewed attention to a previous tweet that she has now pinned to the top of her profile. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection?” it reads. “Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?”
As unclear as some aspects of this series of events remain, the struggles Gebru experienced as a Black leader working on ethics research within Google are plain to see, and they paint a bleak picture of the path forward for underrepresented minorities at the company.