posted by martyb on Sunday April 29 2018, @01:35PM   Printer-friendly
from the robots-processed-this-story dept.

They probably weren't inspired by [Jeff Dunham's] jalapeno on a stick, but Intel has created the Movidius Neural Compute Stick, which is in effect a neural network in a USB stick form factor. It doesn't rely on the cloud, it requires no fan, and you can get one for well under $100.

SiliconAngle has more:

What distinguishes AI systems on a chip from traditional mobile processors is that they come with specialized neural-network processors, such as graphics processing units or GPUs, tensor processing units or TPUs, and field-programmable gate arrays or FPGAs. These AI-optimized chips offload neural-network processing from the device's central processing unit chip, enabling more local autonomous AI processing.
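
For a sense of what that offloading looks like in code, here is a minimal sketch, not from the article, using Intel's OpenVINO runtime, which exposes the Movidius VPU on the stick as the "MYRIAD" device. The model file name is hypothetical, and the network is assumed to have already been converted to OpenVINO's IR format:

    import numpy as np
    from openvino.runtime import Core  # Intel's OpenVINO inference runtime

    core = Core()
    # Hypothetical file name: any network already converted to OpenVINO IR.
    model = core.read_model("mobilenet-v2.xml")
    # "MYRIAD" targets the Movidius VPU on the USB stick, so inference
    # runs on the stick itself: no cloud round-trip, host CPU left free.
    compiled = core.compile_model(model, device_name="MYRIAD")

    # One dummy 224x224 RGB frame in NCHW layout, as MobileNet expects.
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
    request = compiled.create_infer_request()
    request.infer({0: frame})                # blocking call, runs on the stick
    scores = request.get_output_tensor(0).data
    print(scores.shape)                      # e.g. (1, 1000) class scores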

Are we about to see another computing revolution, and what will the technological and sociopolitical landscape look like afterwards?


Original Submission

  • (Score: 4, Interesting) by Virindi on Sunday April 29 2018, @02:06PM (5 children)

    by Virindi (3484) on Sunday April 29 2018, @02:06PM (#673395)

    "AI" is currently a hype bubble, just like "blockchain".

    While neural networks are genuinely useful for a lot of search and pattern-matching tasks, people who don't really understand them are hyping the subject up as though it can solve every problem and change civilization. I do not believe the end result will be so stark a change.

    One area where "AI" is being pushed hard is systems that guess what the user wants to do: Netflix shows, "suggested items" for sale, Google searches, etc. Many act like this is the most promising application. However, the results of these systems are all horrible. Netflix cannot magically guess what I want to watch (the suggestions are terrible; the better results come from merely listing the most frequently viewed categories). Amazon suggests stupid products that I would never buy. Google searches still return hundreds of pages of SEO spam, except where they can filter it by ranking "pages that other people have clicked on" higher. Google autocomplete almost never gives a useful search.

    Natural language processing? It is a little better now, but not much. Still, every time someone uses an Echo, I witness them struggle with the device misunderstanding, or simply failing to handle the request. To interact, you have to use a fixed vocabulary and grammatical structure... just like the old days, except now it knows more words and more ways of phrasing each request: the natural result of a huge development effort to enter such things.

    Now don't get me wrong, neural networks have certainly demonstrated they can improve some things, and will continue to: speech recognition, from the example above, or object recognition. Neural networks are good at finding patterns... however, people act like they will be able to find a pattern where no pattern exists. This is hype.

    Also, they are mostly applicable to recognition or filtering tasks where a large amount of training data can be gathered. When you only have a small amount of data, they won't work well either, as the sketch below illustrates.
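
    To make the small-data point concrete, here is a toy sketch (all numbers invented): a flexible model can fit five training points perfectly and still generalize terribly.

        import numpy as np

        # Five made-up, roughly linear training points (y is about equal to x).
        x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y_train = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

        # A degree-4 polynomial has enough capacity to hit all five points
        # exactly: it "finds a pattern" in what is really just noise.
        coeffs = np.polyfit(x_train, y_train, deg=4)

        print(np.polyval(coeffs, x_train))  # near-perfect on the training data
        print(np.polyval(coeffs, 6.0))      # about -10 for a new input whose true value is about 6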

    Basically "AI" is not some earth shattering, society changing thing. It is in reality a set of incremental improvements to matching and filtering tasks. But right now the hype level is off the charts, far out of proportion with the actual results. The big tech companies are responsible for this; they want to collect as much information as possible on you. Using it to train any kind of neural network they can think of is a good use for it.

    • (Score: 2) by takyon on Sunday April 29 2018, @02:24PM (2 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday April 29 2018, @02:24PM (#673398) Journal

      We're getting to a point where all new smartphone SoCs will include dedicated machine learning/neural network hardware:

      Apple Wants to Add Machine Learning Chips to Smartphone SoCs [soylentnews.org]

      The AI hardware doesn't necessarily need a plethora of third-party killer apps to become useful. For example, Google's Pixel 2 smartphone includes the "Pixel Visual Core" [engadget.com] to assist the camera. The number of people with this hardware will rise even before third-party developers do anything useful with it [techcrunch.com].

      Netflix, Amazon, and Google have all been returning irrelevant results/recommendations for many years, coloring your perception of what's possible. That's not to say that Netflix recommendations will become perfect one day, but they are probably not crunching your view history to the extent that they could be. And if storage (personal data + habits) and ML processing power increase by an order of magnitude, that could allow another percent or two of "correctness" to be squeezed out of these models. This will tide people over until brain-computer interfaces gain traction.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by Virindi on Sunday April 29 2018, @02:30PM (1 child)

        by Virindi (3484) on Sunday April 29 2018, @02:30PM (#673400)

        A percent or two? Sure, probably more than that. But we have a massive hype machine grinding away, telling us that the magic AI can anticipate our every desire before we desire it. This is marketing. Current systems are nowhere near as good as they act like they are, and the performance of future systems is hypothetical and subject to diminishing returns: it gets harder to gather more training data, you approach the best possible confidence given the signal-to-noise ratio, and so on.

    • (Score: 2) by VLM on Sunday April 29 2018, @03:12PM (1 child)

      by VLM (445) on Sunday April 29 2018, @03:12PM (#673410)

      I'm old enough to remember the original "AI winter", and when the present bubble bursts, the second AI winter will likely be kinda icky.

      • (Score: 2) by takyon on Sunday April 29 2018, @03:30PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday April 29 2018, @03:30PM (#673415) Journal

        Google's TPUs speed up the company's translation, search, etc. while reducing costs and power consumption. Machine learning is also behind many image/pattern recognition techniques and driverless cars. Even people not financially involved can use their GPUs to create #deepfake porn. Machine learning is not going away, no matter what you may think. Any bubble bursting will be a temporary speed bump and a great time for the tech giants to snap up companies and talent for cheap. Yes, they would welcome the burst with glee. They worry about the advertising bubble bursting, not AI.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 5, Interesting) by VLM on Sunday April 29 2018, @03:19PM (5 children)

    by VLM (445) on Sunday April 29 2018, @03:19PM (#673412)

    The purpose of modern AI is make-work.

    Why use an IBM Model M keyboard to enter data accurately in seconds when it's far trendier to enter the data inaccurately on a mobile touch keyboard with many typos, or argue with a voice system like Alexa for minutes trying to get her interpretation correct?

    This is the solution to the problem of economic uselessness due to technological progression. One form is hyper-merger syndrome, where corporations merge until they spend all their effort fighting other internal departments. On the smaller scale, the way to soak up excess labor will be to do everything via the slowest and least reliable user interface possible. You won't type something into Excel in a masculine, commanding sense to order the computer to generate a meaningless number the way you want; you'll femininely cajole Siri for hours, trying to talk her into giving you a meaningless subtotal or average, such that the "star" workplace employees will be more productive because they do all their math with pencil and paper, far faster and more accurately than people who rely on AI.

    The definition of a commercially viable user interface is that it only becomes less usable over time, right? Almost by definition, something like a Unix command line or Emacs is inherently not commercially viable BECAUSE it works. Commercially viable doesn't mean it works; it means the secretary can spend three hours in the afternoon trying to install Comet Cursor, right?

    • (Score: 0) by Anonymous Coward on Monday April 30 2018, @12:32AM

      by Anonymous Coward on Monday April 30 2018, @12:32AM (#673529)

      You won't type something into Excel in a masculine, commanding sense to order the computer to generate a meaningless number the way you want; you'll femininely cajole Siri for hours, trying to talk her into giving you a meaningless subtotal or average

      Bwahahaha! You owe me a new keyboard!

    • (Score: 1) by khallow on Monday April 30 2018, @12:50AM (3 children)

      by khallow (3766) Subscriber Badge on Monday April 30 2018, @12:50AM (#673543) Journal
      Sorry, I don't buy that.

      The purpose of modern AI is make-work.

      Why use an IBM Model M keyboard to enter data accurately in seconds when it's far trendier to enter the data inaccurately on a mobile touch keyboard with many typos, or argue with a voice system like Alexa for minutes trying to get her interpretation correct?

      Because a keyboard is not always the best tool for the task in question. For example, suppose you're a waiter in a restaurant with a computerized point-of-sale system. In the moderately older days, that meant you needed to retreat to a register in order to ring in sales, which meant more time away from your guests and more time with their credit cards.

      Now you can enter the order vocally into a PDA, swipe a credit card with said PDA, receive notifications when food and beverages are ready, and perhaps even print a physical receipt on a nearby wall-mounted thermal printer. So, for example, you don't need to be proficient with a keyboard to be a good waiter, credit cards and other important financial instruments never leave the table, and you can spend more time on the primary waiter task of serving your guests.

      While one can see plenty of applications today where these tools are used poorly, that doesn't mean they are universally worse than existing data entry and communication methods for every purpose. In a rational world, we would develop these capabilities anyway, just because they would be sufficiently useful to warrant the expense.

      So my first point is that modern, somewhat gimpy AI does have uses in a lot of places where existing approaches are weak. Thus, it can have the purpose of improving our productivity rather than the reverse.

      A related point is that while these programs are weak today, they were much weaker in the past. There is no reason to expect that AI products will retain their current level of dysfunction when they have already steadily improved over the past several decades. Moving on:

      This is the solution to the problem of economic uselessness due to technological progression.

      What economic uselessness? My view is that we're in the opposite situation, with billions of people more gainfully employed due to technology than they were 50 years ago (which is when things started to change in a big way). I think a lot of the "economic uselessness" criticism is misdirected, because we're seeing people doing tasks that weren't worth doing in the past. Technology has made those tasks sufficiently valuable and accessible, cost- and labor-wise, to do now.

      Seriously, we have centuries of technological progression already. It has instead served to make our labor more valuable - though yes, we do have to adapt when old jobs are obsoleted. We don't even see the start of a reversal of this trend today.

      The definition of a commercially viable user interface is it only becomes less usable over time, right?

      Depends on what it does. I think the user interfaces that are getting worse are the ones where the vendor is trying to force an upgrade to a more expensive version (Microsoft Windows) or trying to lure a larger, more casual market attracted by flashy things (Slashdot Beta).

      • (Score: 2) by VLM on Monday April 30 2018, @12:26PM (2 children)

        by VLM (445) on Monday April 30 2018, @12:26PM (#673690)

        For example, suppose you're a waiter in a restaurant

        In a way, you're kinda making my point for me: the original solution of cash on the barrel let the host focus on interpersonal hosting and social interaction instead of ever more detailed and intrusive impersonal accounting analysis. That analysis can be slightly mitigated via ever more complicated and harder-to-use tech, but the root cause is not remotely improved. The long-term goal is for the average restaurant server to spend almost all their time typing up TPS report header change memos and following banking KYC detailed documentation and reporting guidelines for ever more abstract payment methods, while the food is ordered by the victim ^H^H^H customer on a tablet/phone and a robot drone delivers it.

        The fundamental failure of the model is putting way too much effort into turning hosting entertainers into ever better accounting clerks with ever more elaborate accounting systems. No POS system feature ever lured in customers, unlike wearing 37 pieces of flair, or breastaurants, or whatever goofy gimmick sells microwaved Sam's Club "food".

        In all honesty, much as nicer restaurants have long had specialized labor to cook, clean, tend bar and mix weird drinks, bus tables, serve wine, and all manner of tasks, "real" restaurants would be better served by having servers serve while some accounting dude handles nothing but payment and weird special orders. That way, if some goofball wants to take out a payday loan using a wire transfer of Danish kroner as collateral to pay for dinner and drinks, well, fine: there's a real accounting clerk dedicated to weird accounting tasks while the servers focus on serving.

        My view is that we're in the opposite situation with billions of people more gainfully employed due to technology than they were 50 years ago

        There's a lot of recent propaganda saying that AI is going to result in everyone getting fired.

        we're seeing people doing tasks that weren't worth doing in the past.

        In a nutshell, that's exactly the TPS-report-header, middle-management-infighting lack of productivity, combined with a healthy dose of trying to turn restaurant servers into some weird variation of a human portable ATM or wannabe credit union desk clerk.

        • (Score: 1) by khallow on Monday April 30 2018, @01:49PM (1 child)

          by khallow (3766) Subscriber Badge on Monday April 30 2018, @01:49PM (#673719) Journal

          with a healthy dose of trying to turn restaurant servers into some weird variation of a human portable ATM or wannabe credit union desk clerk.

          If it works, then who cares if it is weird? The problem here is that you are operating under the assumption that these jobs are less efficient and productive than they used to be. That is often the case, but not always.

          And let us keep in mind that businesses are universally not in the habit of giving money away. They perceive value in employing people to chase TPS reports or whatever. Those perceptions are sometimes in error, but not because they feel the need to keep someone cooling their heels on some zero-productivity activity.

          • (Score: 2) by VLM on Monday April 30 2018, @02:32PM

            by VLM (445) on Monday April 30 2018, @02:32PM (#673742)

            If it works, then who cares if it is weird?

            Surely the maximum-efficiency, maximum-profit model for a restaurant is McDonald's, but it would be really sad if every other restaurant experience in the world disappeared.

            And let us keep in mind that businesses are universally not in the habit of giving money away.

            In a sole proprietorship, the guy making operational decisions is the guy wanting profit; in any larger structure the two goals are further apart, such that in the hyper-merged modern world the dude who wants profit is like 15 levels of management away from the dude who wants power, or an easier day at work, or just wants to pencil-whip the whole thing. The old commie model of "we pretend to work, they pretend to pay us" isn't really, strictly speaking, commie; it's more a feature of large hyper-merged corporations. So yeah, the lectures in the movie Office Space about the number of pieces of flair or TPS report headers have nothing to do with making money, and that's not an exception but more of a general rule.

            And... bringing it all back around to the original topic, that's how AI is going to be deployed: not to make money and unemploy everyone, but to implement more bad decisions faster, more or less. Kinda like the role of modern (post-2010) IT in a business.

  • (Score: 0) by Anonymous Coward on Monday April 30 2018, @12:23AM

    by Anonymous Coward on Monday April 30 2018, @12:23AM (#673528)

    Wake me up when you can tell an AI to 'learn chess' and it does. Then turn around and say 'hey, drive my car' and it figures out how to do that. Then we can say we have AI. Maybe then I will get a bit worried.

    What we have now are optimizers: linear regression and tree-search methods that basically try 'interesting' stuff with a weighting.

    This is a good example of what we are doing: https://www.youtube.com/watch?v=R9c-_neaxeU [youtube.com] - basically a stripped-down, simplified crash course on what we are doing at scale.

    Make no mistake, this sort of optimization finding is very interesting and very useful. But it is not the AI we 'want'.
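
    To illustrate what that "weighting" amounts to, here is a toy sketch (data invented): plain gradient descent fitting a line. The loop just nudges weights in whatever direction reduces the error, and that nudging is the whole of the "learning".

        # Made-up data, roughly y = 2x.
        xs = [1.0, 2.0, 3.0, 4.0, 5.0]
        ys = [2.1, 4.0, 6.2, 7.9, 10.1]

        w, b = 0.0, 0.0   # model weights: start knowing nothing
        lr = 0.01         # learning rate: how hard to nudge

        for step in range(2000):
            # Gradient of the mean squared error with respect to w and b.
            grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
            grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad_w  # try 'interesting' stuff with a weighting
            b -= lr * grad_b

        print(f"learned y = {w:.2f}x + {b:.2f}")  # converges near y = 2x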

  • (Score: 0) by Anonymous Coward on Monday April 30 2018, @08:36AM

    by Anonymous Coward on Monday April 30 2018, @08:36AM (#673644)

    Neural networks are not AI. They're simply statistics. It's easiest to see in a single-layer neural network, where each weight simply says that given this input, there's this much probability of this output; but multi-layer neural networks work in exactly the same way, it just becomes "given this input, there's this much probability that the correct answer is in this general direction".
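
    A minimal sketch of that point (weights invented purely for illustration): a one-layer "network" is literally logistic regression, a weighted sum squashed into a probability. Deeper networks just compose more of the same.

        import math

        # One "neuron": weights and bias picked arbitrarily for illustration.
        weights = [1.5, -2.0, 0.7]
        bias = -0.3

        def sigmoid(z):
            # Squash any number into a probability between 0 and 1.
            return 1.0 / (1.0 + math.exp(-z))

        def predict(inputs):
            # Probability of the "yes" output for this input: pure statistics.
            z = sum(w * x for w, x in zip(weights, inputs)) + bias
            return sigmoid(z)

        print(predict([1.0, 0.5, 2.0]))  # ~0.83: an estimate, not understanding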

    Yes, neural networks are modeled on what we know about the brain, but we are nowhere near creating any form of artificial intelligence. Something is missing. And that's a good thing, because we have no idea how to handle it if we were to succeed in making AI. Should it have human rights? If it's intelligent (that's what the I in AI stands for), why wouldn't it? Doesn't that include the right to life, aka the right not to be switched off? And what if it decides that we are no more intelligent than dolphins or several other species that don't have human rights, and thus should have no more rights than they do?
