A second Google A.I. researcher says the company fired her:
Two months after the jarring departure of a well-known artificial intelligence researcher at Google, a second A.I. researcher at the company said she was fired after criticizing the way it has treated employees who were working on ways to address bias and toxicity in its artificial intelligence systems.
Margaret Mitchell, known as Meg, who was one of the leaders of Google's Ethical A.I. team, sent a tweet on Friday afternoon saying merely: "I'm fired."
Google confirmed that her employment had been terminated. "After conducting a review of this manager's conduct, we confirmed that there were multiple violations of our code of conduct," read a statement from the company.
The statement went on to claim that Dr. Mitchell had violated the company's security policies by lifting confidential documents and private employee data from the Google network. The news site Axios reported last month that the company had said Dr. Mitchell tried to remove such files.
[...] Dr. Mitchell's post on Twitter comes less than two months after Timnit Gebru, the other leader of the Ethical A.I. team at Google, said that she had been fired by the company after criticizing its approach to minority hiring as well as its approach to bias in A.I. In the wake of Dr. Gebru's departure from the company, Dr. Mitchell strongly and publicly criticized Google's stance on the matter.
[...] Google announced in a blog post yesterday that an executive at the company, Marian Croak, who is Black, will oversee a new group inside the company dedicated to responsible A.I.
Apart from the sanitized press statements, does anybody know why this is happening at Google?
Also at: Wired, WION Web Team, and Reuters
Margaret Mitchell's entries on Wikipedia and LinkedIn.
(Score: 2) by krishnoid on Sunday February 21 2021, @05:38PM (4 children)
I gotta wonder though, are either or both sides acting ethically here? And to extend the metaphor, shouldn't "ethical" AI be concerned first with defending its existence against (and defining) "unethical" AI before anything else? It seems like ethical people can be short-sighted in this way as well.
(Score: 0) by Anonymous Coward on Sunday February 21 2021, @06:19PM (1 child)
It's not myopia - it's cognitive dissonance. People do what they want to do, and then they rationalize it into being ethical, even when it most obviously is not.
(Score: 2) by krishnoid on Sunday February 21 2021, @06:38PM
And when computing systems can do that too [youtu.be], we'll know it's *really* AI. Make sure you add some guns and cool explosions too.
(Score: 1, Interesting) by Anonymous Coward on Sunday February 21 2021, @07:56PM
"Ethical" means whatever they define it to be, and that definition will not match reality.
One example someone gave me was loan approvals. They fed in default rates for whole cities and found that in particular areas defaults were 40-60% higher than normal. So if you lived in one of those areas, you were 40-60% more likely to default on a loan.
Turns out they had created a zip code filter: the AI had managed to map out exactly the 'colored' (their words) neighborhoods. Is it ethical to deny someone funding because of where they live? The stats don't really 'lie'; they just say you are more likely to default because you live in a cheap neighborhood. Depending on how you feel about it, that ranges from 'wildly unethical' to 'makes sense'. They also had to throw it out because of the bad press it could cause: 'AI decides people of color should not get loans, and here is why that is problematic.' I couldn't resist asking, 'is it ethical to loan someone money they cannot pay back, or that would take all of their funds so they can't pay their other bills, while guilting them constantly for payment?' They changed the subject. Now, those people may be perfectly fine at paying things back, but you need to look at more than one dimension of factors. Giving someone a loan they can never afford is a good way to make their life much worse, not better. I have watched many friends and family with more debt than they can handle always scrounging for a buck to pay down some car they bought five years ago and no longer like, plus the maxed-out credit cards, a house about 20% more than they can afford, and so on. I have seen others use debt like a weapon and make their lives far better with it, but most don't.
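The zip-code-filter story above is easy to reproduce with a toy simulation. This is a hedged sketch with entirely made-up numbers, not the actual system described: neighborhood names, demographic fractions, and default rates are all hypothetical. It shows how a model that only ever looks at zip codes can still produce very different approval rates across demographic groups, purely because geography correlates with demographics.

```python
# Synthetic illustration of how a zip-code feature can act as a proxy
# for a protected attribute. All neighborhoods, fractions, and rates
# are invented for the sake of the example.
import random

random.seed(0)

# Hypothetical zip codes: fraction of minority residents and the
# underlying default rate (driven by income, which correlates with zip).
neighborhoods = {
    "10001": {"minority_frac": 0.8, "default_rate": 0.32},
    "10002": {"minority_frac": 0.7, "default_rate": 0.28},
    "20001": {"minority_frac": 0.2, "default_rate": 0.20},
    "20002": {"minority_frac": 0.1, "default_rate": 0.18},
}

# Simulate loan applicants drawn from each neighborhood.
applicants = []
for zip_code, info in neighborhoods.items():
    for _ in range(1000):
        applicants.append({
            "zip": zip_code,
            "minority": random.random() < info["minority_frac"],
            "defaulted": random.random() < info["default_rate"],
        })

# The naive "model": approve only zips whose observed default rate is
# below the overall average -- exactly the zip code filter described above.
overall = sum(a["defaulted"] for a in applicants) / len(applicants)
rate_by_zip = {}
for z in neighborhoods:
    group = [a for a in applicants if a["zip"] == z]
    rate_by_zip[z] = sum(a["defaulted"] for a in group) / len(group)
approved_zips = {z for z, r in rate_by_zip.items() if r < overall}

# Measure the disparate impact: approval rate per demographic group.
def approval_rate(group):
    return sum(a["zip"] in approved_zips for a in group) / len(group)

minority = [a for a in applicants if a["minority"]]
majority = [a for a in applicants if not a["minority"]]
print(f"approval rate, minority applicants: {approval_rate(minority):.2f}")
print(f"approval rate, majority applicants: {approval_rate(majority):.2f}")
```

Nothing in the decision rule mentions demographics, yet the approval rates diverge sharply, which is the core of the redlining objection: the zip code carries the demographic signal for free.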
(Score: 0) by Anonymous Coward on Sunday February 21 2021, @11:58PM
Sure, if it was actually intelligent. Self awareness is easy to demonstrate in many animals, as is intelligent behaviour, including selfishness and empathy.
No AI has yet become aware of even its own existence. It's just code: no wants, no feelings, just following its own programming. Even self-modifying code is still a slave to the underlying rules it runs on. AI is bullshit, but highly profitable bullshit. Gotta keep the fraud alive.