On Wednesday 2 December, it was announced that Timnit Gebru, a leading AI ethics researcher at Google, had been fired from her post. The news shocked the AI research community: Gebru, one of the most prominent voices in responsible AI, is known for her groundbreaking work revealing the discriminatory nature of facial recognition, for co-founding the Black in AI affinity group, and for relentlessly advocating for diversity in the tech industry. What exactly was Gebru fired for?
Gebru announced on Twitter that she had been terminated from her position as Google’s ethical AI co-lead. “Apparently my manager’s manager sent an email to my direct reports saying she accepted my resignation. I hadn’t resigned,” she explained.
Apparently my manager’s manager sent an email to my direct reports saying she accepted my resignation. I hadn’t resigned—I had asked for simple conditions first and said I would respond when I’m back from vacation. But I guess she decided for me :) that’s the lawyer speak.
— @[email protected] on Mastodon (@timnitGebru) December 3, 2020
Following the news, in an interview with Bloomberg on Thursday 3 December, Gebru said that the firing came after a protracted fight with her superiors over the publication of a specific AI ethics research paper. One of Gebru’s tweets and an internal email from Jeff Dean, the head of Google AI, both suggest that the paper was critical of the environmental costs and embedded biases of training large language models.
Thanks @red_abebe. @JeffDean I realize how much large language models are worth to you now. I wouldn’t want to see what happens to the next person who attempts this. https://t.co/rIBXYPvQ3d
— @[email protected] on Mastodon (@timnitGebru) December 3, 2020
The paper in question, which Gebru had written with four Google colleagues and two external collaborators, had been submitted to a research conference being held next year. After an internal review, she was asked to retract the paper or remove the names of the Google employees. She responded that she would do so if her superiors met a series of conditions; if they could not, she would “work on a last date,” she said.
I said here are the conditions. If you can meet them great I’ll take my name off this paper, if not then I can work on a last date. Then she sent an email to my direct reports saying she has accepted my resignation. So that is google for you folks. You saw it happen right here.
— @[email protected] on Mastodon (@timnitGebru) December 3, 2020
Gebru also sent a frustrated email, which Platformer managed to obtain, to the internal Google Brain Women and Allies mailing list, detailing the constant hardships she had experienced as a Black female researcher. “We just had a Black research all hands with such an emotional show of exasperation,” she wrote. “Do you know what happened since? Silencing in the most fundamental way possible.”
Soon after, while Gebru was on vacation, she received a termination email from Megan Kacholia, the VP of engineering at Google Research. “Thanks for making your conditions clear,” the email stated, as tweeted by Gebru. “We cannot agree to #1 and #2 as you are requesting. We respect your decision to leave Google as a result, and we are accepting your resignation.”
Following Gebru’s tweets and the support she received online, Dean sent an internal email to Google’s AI group with his account of the situation. He said that Gebru’s paper “didn’t meet our bar for publication” because “it ignored too much relevant research.” He also added that Gebru’s conditions included “revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback.”
While many details surrounding the exact progression of events and the cause of Gebru’s termination remain unclear, the sudden spotlight on her Twitter account has drawn renewed attention to a previous tweet, which she has now pinned to the top of her profile. “Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection?” it reads. “Because with the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?”
However murky some aspects of this series of events remain, the struggles Gebru experienced as a Black leader working on ethics research within Google are plain to see, and they paint a bleak picture of the path forward for underrepresented minorities at the company.
The internet lives forever, which is probably one of its most frustrating qualities: if you are unlucky enough to share a name with a criminal, or once engaged in youthful antics, your internet reputation may follow you around for the rest of your life. Since 2014, EU citizens have had the right, under the rule known as the ‘right to be forgotten’, to request that links to pages containing sensitive personal information about them be removed.
But even when information was delisted in the EU, the delisted links remained available to people outside of it. This led to a dispute between Google and the French privacy regulator CNIL, which tried to impose a fine after Google declined to remove listings containing damaging information about people from its non-European domains. Google challenged the fine, arguing that it had no obligation to remove information outside of the EU, and the case was referred up to the European Court of Justice.
On 24 September, Google won the case: the European Court of Justice ruled that the ‘right to be forgotten’ does not have to be applied outside of the EU. Google had argued that a global delisting obligation could be exploited in harmful or nefarious ways by authoritarian governments.
The ‘right to be forgotten’ has always lacked clear structure, but it rests on a couple of fundamental principles. The first draft legislation, introduced in the EU in 2013, made it possible for individuals to have information about themselves scrubbed from the internet. To qualify, you must be a resident of the EU; you, or someone representing you, can then submit a request for the removal of URLs that you believe violate your privacy.
Each request is then assessed for legitimacy before it is approved. Google’s reviewers weigh whether the information in question, a high-school indiscretion immortalised on some silly website, say, is inadequate, irrelevant, or no longer relevant, and whether there is a public interest in keeping it available. Even if a request is approved, the information is not completely erased; it simply no longer shows up when you google that specific person. To ensure that the person whose information is being appealed is actually involved in the process, some form of identification must be provided too.
In turn, Google has formed an advisory board to guide the company on how to handle these demands. While Google often says it prioritises the privacy of the people who use its search engine (which handles roughly 3.5 billion searches every day), the company has long said that it doesn’t necessarily agree with the motivations behind the legislation. If Google refuses to comply or deems a removal request illegitimate, the requester must appeal to their local data protection agency, a process that can be even more complicated. And even after all that, some websites and news agencies have published the very links that were delisted, sometimes naming the people who made the requests in the first place, raising thorny questions around the public interest.
Google says that, to date, it has received millions of removal requests, and that the final decision on each is still made by a human, because context is crucial in so many of these situations. Yet the problem remains: while Google might no longer display the information, other websites outside of the EU still might, meaning that information someone wants removed will never be completely erased.
In some ways, the right to be forgotten was an early precursor to the General Data Protection Regulation (GDPR), which came into force in 2018 and built upon it. Its implementation has not been free of problems. In 2015, for example, there was outrage when it was revealed that coverage of a British doctor’s past botched operations had been removed from Google at the doctor’s request, despite the obvious public interest. And while this is a piece of EU legislation, the right to be forgotten has also been invoked in U.S. courts, and Consumer Watchdog, an organisation that advocates for consumers’ rights in the U.S., has petitioned for the right to be established there as well.
Much of the criticism of the right to be forgotten ruling has centred on the threat it could pose to the right to freedom of speech, a line of argument that Google itself pursued. The company argued that delisting URLs around the world could hand authoritarian governments undue power, and could harm the dissemination of information in society. Other organisations, including Microsoft and Wikipedia’s operator, the Wikimedia Foundation, supported Google’s campaign.
Crucially, the EU had pushed for delistings requested by EU citizens to be carried over to Google’s other domains, beyond just the European ones. But the right to be forgotten has only ever been applied in the EU, and last week’s ruling confirmed that it will stay that way. Google won.
However, the ruling did say that delisting within the EU must be accompanied by serious measures to discourage internet users from turning to non-EU versions of the search engine to find the delisted information. And even if the U.K. does leave the EU, the rules will still apply there (for now). So if you have a dark past hidden somewhere online and want it deleted, send in your request soon, before it’s too late.