posted by martyb on Thursday November 05 2015, @12:17PM   Printer-friendly
from the debugging? dept.

In a somewhat counterintuitive argument, this article in The Wall Street Journal reports that Uber drivers may now have to contend with the fact that no human is actually telling them what to do; most of their tasks are now automated. A study by researchers at the Data & Society Research Institute and New York University points out that Uber uses software to exert much the same control over its workers that a human manager would.

The world looks more and more like Marshall Brain's short story Manna, in which every aspect of employees' working lives is used to grade their performance. Another interesting discussion point: is the middle-manager role disappearing?


Original Submission

  • (Score: 2) by Jiro on Thursday November 05 2015, @03:35PM

    by Jiro (3176) on Thursday November 05 2015, @03:35PM (#258896)

    I think the idea is that the algorithm doesn't include personal quirks and prejudices. No algorithm of this type is going to have a line in it "reduce the score if the employee is Joe", even if one of the bosses hates Joe and cuts him much less slack than anyone else. Only human beings do that.

  • (Score: 3, Insightful) by zoefff on Thursday November 05 2015, @03:57PM

    by zoefff (5470) on Thursday November 05 2015, @03:57PM (#258905)

    But it also works the other way around. Humans can forgive your mistakes or plain forget them. Computers tend not to forget, and so, in principle, your mistakes could haunt you for far longer than you'd want.

  • (Score: 3, Insightful) by Thexalon on Thursday November 05 2015, @04:40PM

    by Thexalon (636) on Thursday November 05 2015, @04:40PM (#258928)

    No algorithm of this type is going to have a line in it "reduce the score if the employee is Joe"

    Why wouldn't it? If, for example, a programmer on that algorithm project found out that his girlfriend had cheated on him with Joe, he might well put exactly that in place. And the modified algorithm would pass through QA with no trouble, because the QA analyst wouldn't be told about it and wouldn't catch it in a regression test unless he happened to test using Joe's account specifically.
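
    A minimal sketch of what such a buried special case could look like; the scoring function, field names, and weights here are all invented for illustration:

        # Hypothetical driver-scoring function; the fields and weights
        # are made up for this example.
        def driver_score(driver):
            score = (0.6 * driver["rating"]
                     + 0.3 * driver["acceptance_rate"]
                     + 0.1 * driver["on_time_rate"])
            # The grudge branch: a regression suite built on synthetic
            # test accounts never exercises it, so it sails through QA.
            if driver["name"] == "Joe":
                score -= 1.0
            return score

        # Joe's score quietly drops despite top marks everywhere else.
        print(driver_score({"name": "Joe", "rating": 4.9,
                            "acceptance_rate": 0.95, "on_time_rate": 0.98}))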

    Algorithms don't strip people of power. They shift power to whoever controls the algorithm.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 2) by sjames on Thursday November 05 2015, @09:06PM

    by sjames (2882) on Thursday November 05 2015, @09:06PM (#259093) Journal

    Unfortunately, it will have no subroutine to compute a mercy score either. It will ruthlessly carry out the policy set by people who never actually relate to the employees as human beings.

    It's easier to shit on someone you've never met, and especially someone you'll never run into outside of work.