posted by CoolHand on Tuesday April 21 2015, @04:07AM   Printer-friendly
from the luddites-r-us dept.

Zeynep Tufekci writes in an op-ed at the NYT that machines can now process regular spoken language and not only recognize human faces, but also read their expressions. Machines can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor. Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. It turns out that most of what we think of as expertise, knowledge and intuition is being deconstructed and recreated as an algorithmic competency, fueled by big data. "Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans," writes Tufekci. "Technology in the workplace is as much about power and control as it is about productivity and efficiency."

According to Tufekci, technology is being used in many workplaces to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them. Optimists insist that we’ve been here before, during the Industrial Revolution, when machinery replaced manual labor, and that all we need is a little more education and better skills, but Tufekci says that one historical example is no guarantee of future events. "Confronting the threat posed by machines, and the way in which the great data harvest has made them ever more able to compete with human workers, must be about our priorities," concludes Tufekci. "This problem is not us versus the machines, but between us, as humans, and how we value one another."

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2, Offtopic) by sigma (1225) on Tuesday April 21 2015, @05:39AM (#173407)

    Kurzweil was partly right. It looks like a technological singularity is near, though it'll probably be a non-AI one.

    http://en.wikipedia.org/wiki/Technological_singularity [wikipedia.org]

  • (Score: 3, Disagree) by VortexCortex (4067) on Tuesday April 21 2015, @06:17AM (#173415)

    That depends on whether you consider a person with a Parkinson's compensator implant to then have any "artificial" component to their intelligence.

    Are not neurons created from stem cells derived from urine artificial (man-made), when re-injected into one's head to cure forms of brain damage? Is that not an artificial boost to one's intelligence?

    If I replace a single one of your neurons with a simulated neuron and its axons with carbon nanotubes, have you no measure of artificial intelligence? What if I replace the hippocampus of a mouse? Does it not have artificial memory, even as I demonstrate this very thing by transmitting the memory into another mouse via its artificial hippocampus?

    Just how intelligent does a thought have to be, when machines already allow humans to artificially implant thoughts via wireless transmission between minds?

    These things have already been accomplished; look them up.

    • (Score: 3, Interesting) by sigma (1225) on Tuesday April 21 2015, @07:09AM (#173425)

      Interesting semantic arguments, but not relevant to Kurzweil's hypothesis.

      The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called "the singularity".

      In fact, TFA posits that non-AI machines are replacing humans in text/face recognition etc. precisely because they're LESS variable than an intelligent entity.

    • (Score: 1, Interesting) by Anonymous Coward on Tuesday April 21 2015, @06:49PM (#173625)

      For some reason there is a relatively higher rate of neurogenesis in the hippocampus: https://en.wikipedia.org/wiki/Neurogenesis [wikipedia.org]
      Do/would the artificial implants also cause/support this neurogenesis?

      I doubt it's unimportant. Perhaps we're one step closer to creating actual philosophical zombies? ;)

      As for "good enough", I find it interesting that a lot of OCR and image recognition still doesn't seem to recognize things the way we do. It seems to be brute force at a lower level: almost like assuming there are 10,000 popular typefaces in the world and training the computer to recognize them all. OCR then works fine, but it can still fail on a stylized t or another letter that isn't similar to anything it has been trained on.
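
      As a rough sketch of what I mean by brute force at a lower level, something like nearest-neighbor template matching (the glyph bitmaps and labels here are hypothetical stand-ins, not any real OCR engine's code):

          # Brute-force OCR sketch: compare an input glyph bitmap against
          # glyphs pre-rendered in every known typeface and return the label
          # of the nearest match. A stylized letter far from all stored
          # templates is simply misread; there is no deeper model of what
          # makes a "t" a "t".
          import numpy as np

          def nearest_template(glyph, templates):
              # glyph: 2-D bitmap array; templates: list of (label, 2-D array)
              best_label, best_dist = None, float("inf")
              for label, tmpl in templates:
                  dist = np.sum((glyph.astype(float) - tmpl.astype(float)) ** 2)
                  if dist < best_dist:
                      best_label, best_dist = label, dist
              return best_label  # wrong whenever the input resembles no template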

      See also the sort of object recognition method used in this and its limitations: http://www.technologyreview.com/news/533596/smart-software-can-be-tricked-into-seeing-what-isnt-there/ [technologyreview.com]
      That doesn't seem to operate at the level of building models of things and picking the best match against what's "outside" in order to identify them.
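
      The trick in that article works roughly like this, as a hedged sketch: start from noise and do gradient ascent on one class's score until the classifier is highly confident, even though no human would see that class in the image (`model` here is an assumed pretrained differentiable classifier, not code from the article):

          # Fooling-image sketch: maximize one class score of a classifier
          # starting from random noise, so it reports high confidence for
          # something that isn't there.
          import torch

          def fooling_image(model, target_class, steps=200, lr=0.1):
              x = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise start
              for _ in range(steps):
                  score = model(x)[0, target_class]  # target class score
                  score.backward()
                  with torch.no_grad():
                      x += lr * x.grad      # ascend the class-score gradient
                      x.clamp_(0.0, 1.0)    # keep pixel values valid
                      x.grad.zero_()
              return x.detach()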

      Perhaps others are using better object recognition methods. I hope someone can point me to them.

      Maybe someone should create a sci-fi horror movie where people are forcibly replaced by faster, more efficient philosophical zombies that eventually turn out not to be quite as good in other ways[1]... Or maybe we've already started the reality TV show instead ;).

      [1] Like in the movie Transcendence, where the AI answers the same question ("can you prove you are self-aware?") in the same "clever machine" way at different times, even though it is supposedly far more advanced later on. Yes, I know it's just a movie (and a bit silly), but look at the Turing Test AIs: some may be quite machine-clever, yet we don't appear to be anywhere close to truly replicating the mind of a simple animal. I'm not sure we can currently replicate even the "mind" of a white blood cell, as in create something that truly is similar rather than a simplified model. After all, we could create a simplified model of one that's 99% accurate in some ways but >99% inaccurate in other ways.