
Community Reviews
posted by on Monday May 08 2017, @06:15AM
from the beep-beep-i-am-a-gadget dept.

I read a couple of good books recently, and wanted to share them and do some writing to collect my thoughts on a subject that is currently of news-worthy relevance and of particular interest to "Soylentils". Enjoy, and I look forward to the discussion!



 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by takyon on Monday May 08 2017, @07:19AM (2 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 08 2017, @07:19AM (#506227) Journal

    Here's a little interview/debate from 3 years ago [pbs.org] in which he (on a whim) advocates incentivizing against uploading to Facebook and whines about the devaluation of human mental labor in the face of big data/machine learning while offering no solution:

    I mean, you have to — people have to be valued for what they actually do. The economy has to be honest. And so what I am concerned about is that by getting everybody to input all their productivity for free to these Silicon Valley companies, including the one that funds my lab, by the way, so I’m a beneficiary of what I’m criticizing — but in order to pretend that all this stuff, you know, it comes in for free, and what we give people in exchange is access to services, we’re taking them out of the economic cycle.

    We’re putting them into an informal economy, which is an unbalanced way to grow a society. And that’s also a road to ruin. I’m not asking for artificial make-work projects. I’m asking for honesty, where we acknowledge when people generate value, and make them first-class economic citizens.

    And then I think that all of these amazing schemes of automation, the self-driving cars, the 3-D printers, these will lead to a world of happy, meaningful lives, as well as great economic growth. You know, that’s the ticket, is honesty.

    Since 2014, when this segment aired, we've seen even greater reliance on tech-sweatshop "mechanical turks" feeding human insight into machine learning systems, like the army currently combing YouTube videos to determine which ones are potentially "inappropriate" for advertising brands and labeling the specific reasoning (profanity, violence, bullying, racism, ISISm, etc.). These classifications will eventually train an AI that can do the obviously gargantuan task much more cheaply and thoroughly than tens of thousands of humans could.
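    The labeling-then-automation pipeline described above can be sketched in a few lines. To be clear, the categories and example phrases below are invented for illustration, and the classifier is a generic bag-of-words Naive Bayes, not whatever system YouTube or Google actually runs:

```python
# Minimal sketch of the pipeline: human raters label items, and the labels
# train a classifier that then does the bulk of the work automatically.
# All labels and phrases here are made up for illustration.
from collections import Counter, defaultdict
import math

def train(labeled_examples):
    """Fit a tiny bag-of-words Naive Bayes model from human-labeled text."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of examples
    for text, label in labeled_examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest log-probability for the text."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior plus per-word log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# "Mechanical turk" output: humans tag a handful of clips by hand...
human_labels = [
    ("family friendly cooking tutorial", "suitable"),
    ("gentle music for studying", "suitable"),
    ("graphic violence in gameplay footage", "unsuitable"),
    ("profanity laced rant compilation", "unsuitable"),
]
model = train(human_labels)

# ...and from then on the machine classifies new clips without them.
print(classify("relaxing music tutorial", model))      # suitable
print(classify("violence and profanity rant", model))  # unsuitable
```

    Once trained, the model needs no further human input for each new item, which is exactly the dynamic Lanier complains about: the raters' judgment ends up embedded in the system while the raters themselves drop out of the economic loop.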

    Jaron talks about Google Translate devaluing translators. Google Translate had its problems back in 2014. Now it runs on machine learning and TPUs [nytimes.com]:

    The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.

    It won't be long before it's too late to value menial data tasks, as most of them will have been completed. It will be too late to create the Mechanical Turks Union.

    Lanier is in a special position to add to the outcry against this fantasy--there is more to the essence of a person than mere computational processes that can be modeled on a computer, no matter how fast the processor or large the storage capacity.

    If the essence of a person is ideological dreck, maybe the computers are getting a good deal.

    But seriously though, machine learning is a shortcut around complete emulation or creation of a sentient entity. It accomplishes human-like performance on useful tasks without the need for a human or artificial human. Yet strong AI is still on the menu and will probably be created in secret. It is within the realm of possibility for the tech giants to create neuromorphic chips with billions of neuron-like components. They will not need to match human-scale intelligence or the human's 20 W power envelope to create a potentially useful system. And it may already exist [nextbigfuture.com] with the usual suspects like the military and NSA reaping the benefits.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by AthanasiusKircher on Monday May 08 2017, @03:17PM

    by AthanasiusKircher (5291) on Monday May 08 2017, @03:17PM (#506383) Journal

    I agree with a lot of this; and thanks for the thoughts and links. After reading the article on recent advances in Google Translate, I'm interested in seeing how it does now. Last year I was asked by a friend to give some comments on a book draft in a language I'm not fluent in (though I know enough to read pretty well), and Google Translate still seemed very rough. It helped me get the gist of some passages quicker than a traditional approach with a dictionary, but I'd never mistake it for the work of a human translator. And it still made a lot of basic errors and seemed "confused" quite often by minor grammatical issues like word order, simple idioms, or obvious things like proper names. (It should go without saying that Google Translate even a few years ago was leagues ahead of what was available before Google Translate.)

    I haven't tried putting long bits of text into it since then, but perhaps I should try again...

  • (Score: 2) by pnkwarhall on Monday May 08 2017, @04:09PM

    by pnkwarhall (4558) on Monday May 08 2017, @04:09PM (#506409)

    The lack of solutions to any of the specific problems Lanier brought up was the main problem I had with the book. However, I wonder if this was intentional--the preface to "You Are Not A Gadget" was IMHO cleverly constructed to challenge the reader to read with a deliberately conscious and reflective approach. Lanier ends the book by saying that the only solutions he has are ones specific to his experience and tools-at-hand (i.e. VR-based communication tools); but I think the implicit message is that the essence that makes humans special is that they can imagine, develop, and implement the missing solutions (then reflect on the outcome and start all over again!). As a "gadget" is something that unthinkingly carries out orders/tasks--like a person who reads and implements a solution without much thought about the whys, hows, and consequences--it is ours to understand the problems and find the solutions.

    Your example of mechanical turks supplying the effort needed to make that same effort reproducible by algorithm (w/o their input) contains the ironic assumption that, at a certain point, the algorithm will be able to duplicate the "work" of the mechanical turk. It's ironic because at its most basic this is a Turing Test, and Turing Tests say more about us than they do about the algorithm. One of Lanier's main points is that we are, to put it simply, allowing the data needs/assumptions of the algorithm to mold our own perceptions of who we are and how we function.

    If the essence of a person is ideological dreck[...]

    One man's trash is another man's treasure.

    --
    Lift Yr Skinny Fists Like Antennas to Heaven