
SoylentNews is people

posted by hubie on Thursday March 02, @06:49AM   Printer-friendly
from the I'm-from-the-AI-department-and-I'm-here-to-help dept.

As layoffs ravage the tech industry, algorithms once used to help hire could now be deciding who gets cut:

Days after mass layoffs trimmed 12,000 jobs at Google, hundreds of former employees flocked to an online chatroom to commiserate about the seemingly erratic way they had suddenly been made redundant.

They swapped theories on how management had decided who got cut. Could a "mindless algorithm carefully designed not to violate any laws" have chosen who got the ax, one person wondered in a Discord post The Washington Post could not independently verify.

Google says there was "no algorithm involved" in its job-cut decisions. But former employees are not wrong to wonder, as a fleet of artificial intelligence tools becomes ingrained in office life. Human resources managers use machine learning software to analyze millions of employment-related data points, churning out recommendations of whom to interview, hire, promote or help retain.

[...] A January survey of 300 human resources leaders at U.S. companies revealed that 98 percent of them say software and algorithms will help them make layoff decisions this year. And as companies lay off large swaths of people — with cuts creeping into the five digits — it's hard for humans to execute alone.

[...] These same tools can help in layoffs. "They suddenly are just being used differently," [Harvard Business School professor Joseph] Fuller added, "because that's the place where people have ... a real ... inventory of skills."

Originally spotted on The Eponymous Pickle.

Original Submission

Related Stories

Amazon Will Reportedly Lay Off 10,000 Employees 18 comments

It's a bad month for layoffs in tech

The big tech layoffs are continuing apace, and it seems nobody is safe. Following this month's massive staff cuts at Twitter and Meta, the New York Times reports that Amazon is now planning to let go of approximately 10,000 employees. Happy holidays, I guess.

Amazon's upcoming job cuts will reportedly impact its corporate employees, specifically its retail division, human resources, and the team working on the company's devices (which includes voice assistant Alexa).

Considering that Amazon employs over 1.5 million people across the globe, 10,000 workers laid off may not seem like a significant percentage from the company's perspective. It amounts to about 0.7 percent of Amazon's employees, which is a considerably smaller relative reduction than Twitter's Elon Musk-induced layoffs that cut its workforce by around 50 percent.

A reduction in force, or perhaps they're making room for picking up some of the high performing Twitter talent who were let go? [hubie]


Original Submission

Google Employees Brace for a Cost-Cutting Drive as Anxiety Mounts 3 comments

Google workers in Switzerland sent a letter this month to the company's vice president of human resources, outlining their worries that a new employee evaluation system could be used to cull the work force:

"The number and spread of reports that reached us indicates that at least some managers were aggressively pressured to apply a quota" on a process that could lead to employees getting negative ratings and potentially losing their jobs, five workers and employee representatives wrote in the letter, which was obtained by The New York Times.

The letter signaled how some Google employees are increasingly interpreting recent management decisions as warnings that the company may be angling to conduct broader layoffs. From the impending closure of a small office and the cancellation of a content-moderation project to various efforts to ease budgets during 2023 planning meetings, the Silicon Valley behemoth has become a tinderbox of anxiety, according to interviews with 14 current and former employees, who spoke on the condition of anonymity for fear of retribution.

[...] The worries have grown as Google's tech industry peers have handed out pink slips amid a souring global economy. Last month, Meta, the owner of Facebook and Instagram, purged its ranks by 11,000, or about 13 percent of its work force. Amazon also began laying off about 10,000 people in corporate and technology jobs, or about 3 percent of its corporate employees.

Even Google, which is on track to make tens of billions of dollars in profits this year, has had to come to terms with a slowdown. In October, as the digital advertising market slumped, Google's parent company, Alphabet, reported that profit dropped 27 percent in the third quarter from a year earlier, to $13.9 billion.

Related: Amazon Will Reportedly Lay Off 10,000 Employees


Original Submission

Open Source Teams at Google Hit Hard by Layoffs: Was It the Algorithm? 11 comments

During the pandemic, Big Tech was booming and hiring new employees as fast as they could. With all that hubbub behind us, and an uncertain economic outlook, those Giants of the Internet are cautiously trimming some of that fat in preparation for leaner times.

That, at least, is the argument for the recent wave of lay-offs at Facebook (Meta), Twitter, Amazon, Stripe, SalesForce, Lyft, DoorDash and Carvana. It seems, though, that the recent layoffs at Google might have been a little different.

Instead of culling the recent hires, the trusted hands on open source teams, and those teams themselves, are being hit especially hard, argues an opinion piece at El Reg. Chris DiBona, founder of Google's Open Source Program Office, Jeremy Allison, co-creator of Samba and Google engineer, Cat Allman, former Program Manager for Developer EcoSystems, and Dave Lester, Head of Google's open source security initiatives, are the main names being mentioned.

El Reg's observation might be a coincidence, however, and the way the layoffs were executed kinda points to that: no exit interviews, just people's access badges disabled and firings by e-mail; at least one engineer got the message in the middle of his production shift. Which gave rise to an interesting speculation by former Google engineer Mike Knell:

Best theory I have is that an outside company was hired and given a "clean room" export from the HR systems to work with.

Stripped of identifying information and any demographic data that could incur a *direct* discriminatory bias in the results. They were then told to write code to determine which rows to cut from the dataset based on the output of some weighted formula designed to determine the "fireability" of that employee while maximising the savings achieved by the exercise. They then took the output of that algorithm, stack ranked the results (because Google just LOVES to stack rank things, especially people) and returned the top 12,000 employee IDs.
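The speculated pipeline above (score each anonymized row with a weighted formula, stack rank, return the top N IDs) can be sketched in a few lines. This is purely illustrative; every field name, weight, and data value below is invented, and nothing here reflects Google's actual process:

```python
# Hypothetical sketch of the speculated layoff pipeline: weighted
# "fireability" score over anonymized rows, stack rank, take top n.

def fireability(row, weights):
    """Weighted score: higher means more likely to be cut."""
    return sum(weights[k] * row[k] for k in weights)

def pick_layoffs(rows, weights, n):
    """Stack-rank all rows by score (descending), return top n employee IDs."""
    ranked = sorted(rows, key=lambda r: fireability(r, weights), reverse=True)
    return [r["employee_id"] for r in ranked[:n]]

# Toy anonymized data with made-up normalized metrics.
rows = [
    {"employee_id": 101, "salary_cost": 0.9, "perf_rating": 0.4},
    {"employee_id": 102, "salary_cost": 0.5, "perf_rating": 0.9},
    {"employee_id": 103, "salary_cost": 0.8, "perf_rating": 0.2},
]
# Positive weight on cost (maximize savings), negative on rating.
weights = {"salary_cost": 1.0, "perf_rating": -1.0}

print(pick_layoffs(rows, weights, n=2))  # prints [103, 101]
```

The point of the sketch is how little the "algorithm" has to know: with demographics stripped out, the whole decision reduces to whatever proxy metrics survive in the export.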


Original Submission

Yahoo to Lay Off 20% of Staff by Year-End, Beginning This Week 20 comments

In the latest round of layoffs to hit tech, Yahoo has announced they will be releasing around 1,600 workers, including half their Business unit, with 1,000 of the cuts coming by the end of the week:

The layoffs are part of a broader effort by the company to streamline operations in Yahoo's advertising unit. The Yahoo for Business segment's strategy had "struggled to live up to our high standards across the entire stack," according to a Yahoo spokesperson.

"Given the new focus of the new Yahoo Advertising group, we will reduce the workforce of the former Yahoo for Business division by nearly 50% by the end of 2023," a Yahoo spokesperson told CNBC.

Yahoo said the company would shift efforts to its 30-year partnership with Taboola, a digital advertising company, to satisfy ad services.

Those losing their jobs will be provided severance packages.

Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Funny) by Anonymous Coward on Thursday March 02, @08:57AM

    by Anonymous Coward on Thursday March 02, @08:57AM (#1294028)

    catch phrase of 2023

  • (Score: 3, Funny) by PiMuNu on Thursday March 02, @09:50AM

    by PiMuNu (3823) on Thursday March 02, @09:50AM (#1294034)

    > Human resources managers use machine learning software

    A recipe for success?

  • (Score: 5, Insightful) by pTamok on Thursday March 02, @09:59AM (13 children)

    by pTamok (3042) on Thursday March 02, @09:59AM (#1294037)

    The firing algorithms are only as good as the data used to train them.

    For years, managers have been perfecting the art of rigging performance appraisals so that favoured people are retained, and unfavoured ones are 'let go'. Dressing up the garbage data in an 'AI' algorithm isn't going to make the process fairer.

    Even using a so-called 'objective' AI to do performance appraisals will not work, because it can only work on the (manipulated) data it is given, and work in accordance with goals defined outside the system. If the goals are incorrect, and the data is garbage, you won't get objective evaluations.

    Yes, I sound cynical. Realism often does.

    • (Score: 5, Insightful) by JoeMerchant on Thursday March 02, @11:04AM (12 children)

      by JoeMerchant (3937) on Thursday March 02, @11:04AM (#1294047)

      The best CEOs, as measured by a small number of quarterly profit reports, are psychopaths: without concern for the human costs of their decisions.

      Seems like training an even more psychopathic AI would be simple.

      There are many examples of the Miligram experiment results being extended to algorithms as the source of authority; the Dutch welfare administration scandal keeps coming to mind as a clear example: https://www.google.com/amp/s/www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/amp/ [google.com]

      --
      Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
      • (Score: 2) by JoeMerchant on Thursday March 02, @11:07AM

        by JoeMerchant (3937) on Thursday March 02, @11:07AM (#1294048)

        Milgram, of course, AI spell check can be stubborn...

        https://en.m.wikipedia.org/wiki/Milgram_experiment [wikipedia.org]

        --
        Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
      • (Score: 1) by pTamok on Thursday March 02, @11:14AM (10 children)

        by pTamok (3042) on Thursday March 02, @11:14AM (#1294050)

        The Australian Robodebt scheme [wikipedia.org] could be another example.

        Automated decision making ("The computer says 'No'.") is ripe for producing terrible decisions.

        • (Score: 2) by JoeMerchant on Thursday March 02, @01:07PM (9 children)

          by JoeMerchant (3937) on Thursday March 02, @01:07PM (#1294059)

          The willingness of people to defer to robot decision makers as "authority" when their human bosses and procedures tell them to use their own judgement beyond the algorithm is terrifying.

          --
          Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
          • (Score: 2) by RS3 on Thursday March 02, @11:43PM (8 children)

            by RS3 (6367) on Thursday March 02, @11:43PM (#1294160)

            You remind me of an MCAS (the Boeing 737 MAX's population control software) discussion on greensite. I posted something to the effect that I think a human should always be able to override / disable automation like autopilot, MCAS, etc. Moron argued how machines are so much better than humans. I don't suffer fools well. Obviously machines can be much faster and more accurate than humans, but things do break. Is it morally and ethically okay for machines to kill humans due to malfunction, or should humans be allowed to disable the machine and be given an opportunity to save their own butts? Answer seems very obvious to me ("We hold these truths to be self-evident...") but it was greensite, so...

            • (Score: 3, Interesting) by JoeMerchant on Friday March 03, @01:18AM (5 children)

              by JoeMerchant (3937) on Friday March 03, @01:18AM (#1294173)

              Thanks for reminding me why I left.

              --
              Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
              • (Score: 2) by RS3 on Friday March 03, @02:18AM (4 children)

                by RS3 (6367) on Friday March 03, @02:18AM (#1294182)

                Hey, any time. I got yer back.

                There are actually some good people there. I dare utter it, but an idea I had years ago is to have a filtering system that you can configure to filter out certain people's comments. You know, collapse them like ones beneath current reading threshold, but you can still click and read if you want.

                • (Score: 1, Insightful) by Anonymous Coward on Friday March 03, @10:27AM (3 children)

                  by Anonymous Coward on Friday March 03, @10:27AM (#1294244)

                  I'm just gonna leave a link.

                  It just confirms my beliefs about AI anyway.

                  https://www.searchenginejournal.com/what-is-chatgpt/473664/ [searchenginejournal.com]

                  My feelz... is it intelligent? No. Intelligence is unique to sentient life forms, likely more, but is a biological phenomenon people are trying to understand and codify. Is it useful? Yes, very. It's a very powerful tool for organizing information, but like with any tool, it's only as good as the entity operating it.

                  We may mimic intelligence with glorified lookup tables and flow charts, but genuine judgement and creativity require some sort of secret sauce. I have no idea what it is. My suspicion is that its basis is in chaos theory. Maybe I will live long enough to see if artificial intelligence will foment the same hypotheses on creation and theology that various humans deduce based only on their own thought processes and observations.

                  Belief systems are an enigma for me. Religion vs. superstition. Tyranny of choice. Everyone holds their own belief system as true, yet no one offers demonstrable proof of anything. Some say faith, but isn't that the downfall of gambling?

                  This is off topic as hell, but I just have to throw this into the fray to see if anyone will toss something else in for consideration.

                  • (Score: 4, Insightful) by JoeMerchant on Friday March 03, @11:11AM

                    by JoeMerchant (3937) on Friday March 03, @11:11AM (#1294247)

                    >My feelz... is it intelligent?

                    Depends on your definition... Could it pass a Turing test? I would say, properly dressed up, yes, the best chatbots today could fool over 60% of the population into thinking they are real people, right down to the incorrect answers they give.

                    >genuine judgement

                    Again, properly dressed, I think AI could exhibit "better judgement" than 60% or more of the meatbag population for questions put to it. I haven't seen a lot of "I can't answer correctly without more information" pushback from AI yet, but that shouldn't be hard to code if it is evaluating specific types of questions in which minimum acceptable input information has been defined. "Beyond a reasonable doubt" would be an interesting threshold to test.

                    >creativity

                    Again, what is your threshold? More creative than 60% of people? Easily already.

                    >Everyone holds their own belief system as true,

                    I strongly disagree. Many people hold a core belief of "nobody really knows".

                    >yet no one offers demonstrable proof of anything

                    Again, belief in science is the opposite of that, although I am frequently disappointed in people who base their decisions on "the best available science" without questioning the probability of it being incorrect and how that should impact their decisions.

                    >Tyranny of choice.

                    If you choose not to decide, you still have made a choice!

                    >My feelz... is it intelligent?

                    I feel AI is a valuable tool today, more valuable than many human alternatives, but not a replacement for the best humans in most circumstances. I understand that a human beat AlphaGo in a 14/15 game tournament; I wonder how long it will take for AlphaGo to learn from its vulnerability and correct it without introducing new weaknesses? When that happens, and is generalizable, I would say AI has taken another significant step forward.

                    --
                    Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
                  • (Score: 4, Insightful) by RS3 on Friday March 03, @06:34PM

                    by RS3 (6367) on Friday March 03, @06:34PM (#1294330)

                    I disagree: this is very on-topic (and I'm not big on the whole "off-topic" thing anyway).

                    In general I observe lines are blurring. Who's to define "intelligence" anyway? As JM mentioned, "Turing Test", but maybe that's a bit long in the tooth? It's a pretty complex thing to define, and there are so many degrees / levels of intelligence.

                    Neuroscience / neurophysiology / human behavior as it relates to brain science has always been a strong interest of mine (no expert though!)

                    Some basics; things like:

                    1. "right-brain" and "left-brain" ("chaos" vs. "control", creativity vs. practicality)
                    2. brain is a pattern-matching machine
                    3. Phineas Gage [smithsonianmag.com]
                    4. neural-net concepts like data interconnections / relations, self-modifying code (at least the interconnects)...

                    "Faith" is a pretty broad topic, again, difficult to define, and everyone kind of has their own definition.

                    To me, gambling is just gambling. There may be some skill, statistics / probability involved, but I don't personally relate it to "faith", although some people do.

                    Ever since Fleischmann–Pons claims [wikipedia.org] I've lost "faith" (!!) in what is "fact", "truth", "knowledge", etc. I want to say I know what I know, have done, have observed, test results, etc...

                    But...

                    As a child I had a much older relative who was a "magician". I loved it, but was equally frustrated that reality did NOT match up with what I saw! So at a very young age I became somewhat of a skeptic, which IMHO is necessary for learning: keeping an open mind. Don't ever be so darned sure of what you're pretty sure you think you know.

                    "AI" may not be what a proper definition of intelligence is, but we (humanity) are in a very dynamic learning process and both are evolving.

                  • (Score: 2) by turgid on Saturday March 04, @01:49PM

                    by turgid (4318) Subscriber Badge on Saturday March 04, @01:49PM (#1294460) Journal

                    We may mimic intelligence with glorified lookup tables and flow charts, but genuine judgement and creativity require some sort of secret sauce. I have no idea what it is.

                    The bigger the pool of data you have, in other words the wider your level of experience, the more likely you are to make a choice someone else hasn't thought of.

                    Conversely, at the Dunning-Kruger end of the scale, you are probably more likely to make a poor decision that someone or something with better data would not have made.

                    There's one thing missing from this, though, and that's motivation. Humans are motivated by all sorts of feedback from their own minds, bodies and environment. Humans have senses. We feel heat, cold, hunger, pain, thirst and all sorts. These influence our decisions. I try to avoid shopping when feeling hungry or thirsty for obvious reasons.

                    What motivations do these Artificial Intelligences have?

            • (Score: 0) by Anonymous Coward on Saturday March 04, @01:51PM (1 child)

              by Anonymous Coward on Saturday March 04, @01:51PM (#1294462)

              Human override is always essential. I've seen some horrific things in "safety critical" engineering. No management process can fix human nature.

              • (Score: 2) by RS3 on Monday March 06, @07:13PM

                by RS3 (6367) on Monday March 06, @07:13PM (#1294817)

                Sadly it's one of life's big conundrums. I don't know if there's a big-picture / long-term fix (to the problems of imperfect designs, inadequate testing, pressure to ship things before they're ready, general negligence / rushing / slipshod / careless design / construction...)

                In the interim, I think it's best to always give the human the ability to override. BUT, make _sure_ the humans understand the implications of doing the override.

                It's sad and quite surprising (to me anyway) how many horrific airplane accidents have occurred because humans were relying on autopilot, but it was accidentally turned off, but pilots weren't aware, or it was in some semi-automatic mode- controlling some things but not others, and again, pilots weren't aware.

                It's part of my worry- that automation breeds complacency, or stated in a less blaming way: humans passively become less situationally aware.

                Full disclosure- my opinions on this topic have been reinforced and emboldened by watching "Mayday!" and generally studying the details of many air crashes. To be sure, FAA, NTSB, manufacturers, etc., have learned and made changes, although too often the changes come much later, if at all, and many of the problems could have been anticipated if the grumpy fault-finders were rewarded rather than fired.

  • (Score: 5, Funny) by Opportunist on Thursday March 02, @10:06AM (1 child)

    by Opportunist (5545) on Thursday March 02, @10:06AM (#1294039)

    If given unfettered access to the whole staff, want to bet the first to go is management and HR?

    • (Score: 4, Touché) by Freeman on Thursday March 02, @02:47PM

      by Freeman (732) Subscriber Badge on Thursday March 02, @02:47PM (#1294073) Journal

      Those were accidental automated firings and culled from the data set.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: -1) by billbellum on Thursday March 02, @10:06PM

    by billbellum (18539) on Thursday March 02, @10:06PM (#1294150)

    Headline edited. Incel's dream, just after the sexbot. And could it really do any worse than the random mess that is humanity?

  • (Score: 4, Insightful) by jb on Friday March 03, @04:54AM (2 children)

    by jb (338) on Friday March 03, @04:54AM (#1294208)

    As far as I can tell the "intelligence" of the average HR department has been "artificial" since day one.

    • (Score: 4, Insightful) by RS3 on Friday March 03, @06:44PM (1 child)

      by RS3 (6367) on Friday March 03, @06:44PM (#1294333)

      Very sore subject for me, for sure. Not to defend them, but at some point I discovered they don't typically make big salaries, and maybe have some grudge, maybe subconscious, against the people they're placing. You know, like they hold all the goodies and you have to dance the way they want and maybe you'll get the treat (and maybe they'll shoot at your feet while yelling "dance boy!") (I seem to remember that from some old western movie...)

      Also, again not defending the HR mess, but they're rarely truly expert in the details of the careers and people they're trying to match up, and as I mention above, the brain is among many things a pattern-matching machine, and they just want to make matches and then their brains can relax in the dopamine bath.

      • (Score: 2) by Freeman on Monday March 06, @04:58PM

        by Freeman (732) Subscriber Badge on Monday March 06, @04:58PM (#1294775) Journal

        I would be surprised if that was only in one Western (as opposed to multiple).

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"