What Really Happened When Google Ousted Timnit Gebru

WIRED
“What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it.’” (From 2021)

But Google seemed to double down. Margaret Mitchell, the other coleader of the Ethical AI team and a prominent researcher in her own right, was among the hardest hit by Gebru’s ouster. The two had been a professional and emotional tag team, building up their group—which was one of several that worked on what Google called “responsible AI”—while parrying the sexist and racist tendencies they saw at large in the company’s culture.

Gebru’s career mirrored the rapid rise of AI fairness research, and also some of its paradoxes. Almost as soon as the field sprang up, it attracted eager support from giants like Google, which sponsored conferences, handed out grants, and hired the domain’s most prominent experts. Now Gebru’s sudden ejection made her and others wonder if this research, in its domesticated form, had always been doomed to a short leash.

Reaching Ireland may have saved Gebru’s life, but it also shattered it. She called her mother and begged to be sent back to Ethiopia. “I don’t care if it’s safe or not. I can’t live here,” she said. Her new school, the culture, even the weather were alienating. Addis Ababa’s rainy season is staccato, with heavy downpours interspersed with sunshine. In Ireland, rain fell steadily for a week. As she took on the teenage challenges of new classes and bullying, larger concerns pressed down.

Gebru’s focus paid off. In September 2001 she enrolled at Stanford. Naturally, she chose the family major, electrical engineering, and before long her trajectory began to embody the Silicon Valley archetype of the immigrant trailblazer. For a course during her junior year, Gebru built an experimental electronic piano key, helping her win an internship at Apple making audio circuitry for Mac computers and other products.

In 2013 she joined the lab of Fei-Fei Li, a computer vision specialist who had helped spur the tech industry’s obsession with AI, and who would later work for a time at Google. Li had created a project called ImageNet that paid contractors small sums to tag a billion images scraped from the web with descriptions of their contents—cat, coffee cup, cello.

Gebru’s project fit in with what was becoming the industry’s new philosophy: Algorithms would soon automate away any problem, no matter how messy. But as Gebru got closer to graduation, the boundary she had established between her technical work and her personal values started to crumble in ways that complicated her feelings about the algorithmic future.

Li, Gebru’s adviser at Stanford, encouraged her to find a way to connect social justice and tech, the two pillars of her worldview. “It was obvious to an outsider, but I don’t think it was obvious to her, that actually there was a link between her true passion and her technical background,” Li says. Gebru was reluctant to forge that link, fearing in part that it would typecast her as a Black woman first and a technologist second.

The event’s creators initially found it difficult to convince peers that there was much to talk about. “The more predominant idea was that humans were biased and algorithms weren’t,” says Moritz Hardt, now a UC Berkeley computer science professor who cofounded the workshop with a researcher from Princeton. “People thought it was silly to work on this.”

For Gebru, the event could have been a waypoint between her grad school AI work and a job building moneymaking algorithms for tech giants. But she decided that she wanted to help contain the technology’s power rather than expand it. In the summer of 2017, she took a job with a Microsoft research group that had been involved in the FATML movement from early on.

When Mitchell got to Google, she discovered a messier reality behind the company’s entrée into fairness research. That first paper had been held up for months by internal deliberations over whether Google should publicly venture into a discourse on the discriminatory potential of computer code, which to managers seemed more complex and sensitive than its labs’ usual output.

The NIPS conference provided a look at the world of AI beyond her startup, but Raji didn’t see people like herself onstage or in the crowded lobby. Then an Afroed figure waved from across the room. It was Gebru. She invited Raji to the inaugural Black in AI workshop, an event born out of Gebru’s email list for Black researchers. Raji changed her plane ticket to stay an extra day in Long Beach and attend.

Gebru’s research was also helping to make work on AI fairness less academic and more urgent. In February 2018, as part of a project called Gender Shades, she and Buolamwini published evidence that services offered by companies including IBM and Microsoft that attempted to detect the gender of faces in photos were nearly perfect at recognizing white men, but highly inaccurate for Black women.
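The core method behind Gender Shades was disaggregated evaluation: measuring a classifier's accuracy separately for each demographic subgroup instead of reporting one aggregate number. A minimal sketch of that idea (illustrative only, with invented toy data, not the study's actual code or results):

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute classification accuracy per subgroup.

    records: iterable of (subgroup, predicted_label, true_label) tuples.
    Returns a dict mapping each subgroup to its accuracy in [0, 1].
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented toy data: (subgroup, predicted gender, actual gender).
results = [
    ("lighter-skinned men", "M", "M"),
    ("lighter-skinned men", "M", "M"),
    ("darker-skinned women", "M", "F"),
    ("darker-skinned women", "F", "F"),
]
print(disaggregated_accuracy(results))
```

An aggregate accuracy over all four examples would be 75 percent, which hides that the classifier is right only half the time on one subgroup; reporting the per-group breakdown is what made the disparity visible.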

The Datasheets project bolstered Gebru’s prominence in the movement to scrutinize the ethics and fairness of AI. Mitchell asked her to think about joining her Ethical AI team at Google. Soon after, Gebru met with Dean again, this time with Mitchell at her side, for another discussion about the situation of women at Google. They planned a lunch meeting, but by the time the appointment rolled around, the two women were too anxious to eat. Mitchell alleged that she had been held back from promotions and raises by performance reviews that unfairly branded her as uncollaborative.

Mitchell also developed a playbook for turning ethical AI itself into a kind of product, making it more palatable to Google’s engineering culture, which prized launches of new tools and features. In January 2019, Mitchell, Gebru, and seven collaborators introduced a system for cataloging the performance limits of different algorithms.
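That cataloging system was the "model card": a piece of structured documentation shipped alongside a trained model, recording what it is for and where its performance breaks down. A minimal sketch of the concept (field names here are my own illustrative assumptions, not the paper's exact schema):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model-card record; fields are assumptions loosely
    modeled on the published model-card proposal, not its schema."""
    name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    # Disaggregated metrics (subgroup -> accuracy), so performance
    # limits are visible per population rather than hidden in an average.
    subgroup_accuracy: dict = field(default_factory=dict)

    def worst_subgroup(self):
        """Return the subgroup with the lowest reported accuracy."""
        return min(self.subgroup_accuracy, key=self.subgroup_accuracy.get)

card = ModelCard(
    name="toy-face-classifier",
    intended_use="Research benchmarking only",
    out_of_scope_uses=["surveillance", "hiring decisions"],
    subgroup_accuracy={"group A": 0.99, "group B": 0.65},
)
print(card.worst_subgroup())
```

Framing the documentation as an artifact attached to each launched model is what made the idea legible to Google's product-oriented engineering culture.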

At the same time, however, Mitchell and Gebru’s frustrations with Google’s broader culture mounted. The two women say they were worn down by the occasional flagrantly sexist or racist incident, but more so by a pervasive sense that they were being isolated. They noticed that they were left out of meetings and off email threads, or denied credit when their work made an impact. Mitchell developed an appropriately statistical way of understanding the phenomenon.

According to the Google employee, the incident—which is also described in anonymous posts on Reddit—showed how Gebru’s demeanor could make some people shy away from her or avoid certain technical topics for fear of being pulled into arguments about race and gender politics. Gebru doesn’t deny that the dispute became heated but says it ultimately proved productive, forcing attention to her negative experiences and those of other women at Google.

These new systems could also become fluent in unsavory language patterns, coursing with sexism, racism, or the tropes of ISIS propaganda. Training them required huge collections of text—BERT used 3.3 billion words and GPT-3 almost half a trillion—which engineers slurped from the web, the most readily available source with the necessary scale. But the data sets were so large that sanitizing them, or even knowing what they contained, was too daunting a task.

The paper was not intended to be a bombshell. The authors did not present new experimental results. Instead, they cited previous studies about ethical questions raised by large language models, including about the energy consumed by the tens or even thousands of powerful processors required when training such software, and the challenges of documenting potential biases in the vast data sets they were made with.

Dean became the face of Google’s displeasure with the “Stochastic Parrots” paper. He sent an email to the members of Google Research, also released publicly, saying the work “didn’t meet our bar for publication,” in part because one of its eight sections didn’t cite newer work showing that large language models could be made less energy-hungry.

Dean also announced that progress on improving workforce diversity would now be considered in top executives’ performance reviews—perhaps quietly conceding Gebru’s assertion that leaders were not held accountable for their poor showing on this count. And he informed researchers that they would be given firmer guidance on “Google’s research goals and priorities.”

To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs.
