posted by janrinok on Wednesday September 07 2016, @07:26PM
from the all-seeing-borg dept.

Intel is acquiring computer vision startup Movidius for an undisclosed sum in order to bolster its RealSense gesture-sensing platform:

Today, [Intel] announced that it is acquiring the computer vision startup behind Google's Project Tango 3D-sensor tech, Movidius.

In a blog post, Movidius CEO Remi El-Ouazzane announced that his startup will continue in its goal of giving "the power of sight to machines" as it works with Intel's RealSense technology. Movidius has seen a great deal of interest in its radically low-powered computer vision chipset, signing deals with major device makers, including Google, Lenovo and DJI.

[...] "We're on the cusp of big breakthroughs in artificial intelligence," wrote El-Ouazzane. "In the years ahead, we'll see new types of autonomous machines with more advanced capabilities as we make progress on one of the most difficult challenges of AI: getting our devices not just to see, but also to think."

The company's Myriad 2 family of Vision Processing Units is being used by Lenovo to build its next generation of virtual reality products, while Google struck a deal with the company to deploy its neural computation engine on the platform, pushing the machine learning power of mobile devices.


Original Submission

  • (Score: 3, Interesting) by ledow on Wednesday September 07 2016, @08:52PM

    by ledow (5567) on Wednesday September 07 2016, @08:52PM (#398861) Homepage

    "We're on the cusp of big breakthroughs in artificial intelligence," wrote El-Ouazzane. "In the years ahead, we'll see new types of autonomous machines with more advanced capabilities as we make progress on one of the most difficult challenges of AI: getting our devices not just to see, but also to think."

    No, we're not. On any sensible timescale, no, we won't. And no, they can't.

    Nobody has yet demonstrated a machine capable of anything other than exactly following given orders. Billions of them. In fractions of a second. But never outside the scope of what it has been told to do.

    Even Google's "Go" computer - which, as a mathematician and computer scientist, I think is possibly the biggest leap in computing ability demonstrated in the last 30 years - isn't even close to being able to do anything but play a ternary game on a fixed 19x19 board. And even then it's not really gaining insight, or learning, or hypothesising. It just outperforms others, guided towards whatever might be slightly more successful.

    The days of any kind of useful, genuine "independent thought" from a computer - literally something unexpected - are decades away at best.

    We honestly can't get a machine to exhibit the intelligence of an ant. We just demonstrate extreme brute force in extremely limited circumstances and mistake it for intelligence, because we can't do that kind of brute force in that kind of problem space in that kind of time.

    AI advances will genuinely surprise and revolutionise. The closest so far is Google managing to make orders-of-magnitude leaps in a very limited game (even if the problem space is incomprehensibly large to a human in terms of sheer numbers, it's nothing compared to the problem space of, say, driving a car, or recognising an image reliably). My professor at university studied Go for decades, specifically trying to create computer players that could approach even novice-level human play, so I understand just how big a leap they've made. But it's nothing close to AI of any kind as the person in the street would understand it.
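
    To make "guided towards whatever might be slightly more successful" concrete, here is a minimal sketch of pure Monte Carlo move selection in Python - emphatically not AlphaGo's actual method (which couples neural networks to tree search), and the Game interface (legal_moves, play, is_over, winner) is hypothetical, just enough to show the idea:

        import random

        def random_playout(game, state, me):
            """Play uniformly random moves to the end; report 1 if 'me' won."""
            while not game.is_over(state):
                state = game.play(state, random.choice(game.legal_moves(state)))
            return 1 if game.winner(state) == me else 0

        def choose_move(game, state, me, playouts=1000):
            """Pick the move with the best win rate over many random playouts."""
            best_move, best_rate = None, -1.0
            for move in game.legal_moves(state):
                next_state = game.play(state, move)
                wins = sum(random_playout(game, next_state, me)
                           for _ in range(playouts))
                if wins / playouts > best_rate:
                    best_move, best_rate = move, wins / playouts
            return best_move

    There is no insight anywhere in that loop: just statistics over brute-force samples, which is exactly the point.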

    Hell, I can't even get my car to recognise the words "Play all" 9 times out of 10 and that thing has a computer that rivals my desktop in it just to do that.

    • (Score: 0) by Anonymous Coward on Wednesday September 07 2016, @09:42PM

      by Anonymous Coward on Wednesday September 07 2016, @09:42PM (#398880)

      I don't know what's wrong with your car, but my Android phone has very good voice recognition. It certainly impresses me more than Go, and seems like a significant AI achievement.

      • (Score: 2) by LoRdTAW on Wednesday September 07 2016, @10:19PM

        by LoRdTAW (3755) on Wednesday September 07 2016, @10:19PM (#398895) Journal

        It's not AI. Nowhere near it, in fact. Google voice recognition is simply a speech-to-text front end for a Google search. And even the Google search isn't anywhere near AI, as it is simply doing predefined pattern matching.

      • (Score: 2) by ledow on Thursday September 08 2016, @09:41AM

        by ledow (5567) on Thursday September 08 2016, @09:41AM (#399105) Homepage

        My car, my phone, other people's phones, Siri, the automated phone lines for railways and cinemas.

        They all have the same problem.

        Simple commands like "Call X", where X is a name, fail miserably. I don't have an unusual voice, I don't have a speech impediment, and I'm not in a noisy environment. But even if I were, I'd expect it to work if you want to claim it's anywhere near human-level recognition.

        Honestly, shall I try it? Samsung Galaxy S5 Mini, Google Translate: it will accept any sentence (it isn't looking for structure in the detected English) and then, as a second step, try to translate it into a foreign language (a step we shall ignore, as my Italian girlfriend says the translations are absolutely laughable and not even good enough for a round-the-table family conversation).

        "I'm going to have breakfast in a minute"
        "breakfast mini".

        That was the best I could achieve in three attempts: it missed the first four words entirely, dropped two more in the middle, got one wrong, and could not produce a coherent question or sentence worth translating (though it tried to).

    • (Score: 2) by LoRdTAW on Wednesday September 07 2016, @10:23PM

      by LoRdTAW (3755) on Wednesday September 07 2016, @10:23PM (#398897) Journal

      A better way to think of true AI is the ability of a computer to reprogram itself with an ever more complex collection of algorithms. It has to truly form its own decisions through self-generated code. As it stands, today's AI is little more than a bunch of if/else or switch statements inside loops. About as generic as you can get.
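
      As a caricature of that point, the entire "decision making" of a rule-based agent can be written as a handful of branches in a loop - a toy illustration in Python, not any particular product's code:

          def rule_based_agent(distance_to_obstacle):
              # Every "decision" is a branch a programmer wrote in advance.
              if distance_to_obstacle > 50:
                  return "advance"
              elif distance_to_obstacle > 10:
                  return "slow down"
              else:
                  return "stop"

          for reading in [80, 30, 5]:   # the loop around the rules
              print(reading, "->", rule_based_agent(reading))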

      • (Score: 2) by nishi.b on Wednesday September 07 2016, @11:44PM

        by nishi.b (4243) on Wednesday September 07 2016, @11:44PM (#398921)

        Do we even have that ability ourselves ? I mean we can learn, but I don't think I would be able to design and implement in my own visual system a new way of detecting objects moving towards me...

    • (Score: 2) by nishi.b on Wednesday September 07 2016, @11:42PM

      by nishi.b (4243) on Wednesday September 07 2016, @11:42PM (#398918)

      And how do you know what intelligence itself is?
      How do you define it?
      What if intelligence and consciousness are just "exactly following given orders. Billions of them"?
      I do not believe in a "soul"; I believe that what we call intelligence and consciousness arise from billions and billions of dynamic processes in a physical matrix of cells (neurons, glia...) interacting with the physical world.
      We do not know how the brain works (I mean, not at the level where we can say with certainty which cells were activated for a task, from this input to this output, and which cells were connected and reacted this way because...), so why do you think that brute force is so unlike what our brain does?
      And even then, why should intelligence be defined only by comparison to human abilities?

      • (Score: 2) by ledow on Thursday September 08 2016, @10:21AM

        by ledow (5567) on Thursday September 08 2016, @10:21AM (#399111) Homepage

        Take this example:

        The Halting Problem.

        Given an arbitrary computer program and a set of inputs to that program, it is not possible in general for an analysing program, running on a Turing-capable machine, to decide whether that first program will ever come to an end. It literally cannot take, say, a C listing and a set of keyboard entries and say "this program will stop" or "this program will go into an infinite loop" for everything you could throw at it.

        There is no program, no algorithm, no method that can reliably and accurately determine whether a given input (being another program, algorithm, etc.) will ever come to an end or will "get stuck".
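
        The standard proof is short enough to sketch in Python-flavoured pseudocode (halts() here is the hypothetical oracle, which is precisely what cannot exist):

            def halts(program, data):
                """Hypothetical oracle: True iff program(data) terminates."""
                ...  # assumed, for contradiction, to always answer correctly

            def paradox(program):
                # Do the opposite of whatever halts() predicts about
                # running 'program' on its own source.
                if halts(program, program):
                    while True:      # loop forever
                        pass
                else:
                    return           # halt immediately

        Now ask what halts(paradox, paradox) returns: paradox(paradox) halts if and only if the oracle says it doesn't. That contradiction means no such halts() can ever be written.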

        In fact, mathematically, if we could write a program that did the above in all cases, we could solve an awful lot of similar problems - it would basically amount to a demolition of NP-complete problems and all sorts.

        However, as far as we can possibly tell, humans CAN do this. We can formally prove, using mathematics, the exact paths and trees through a given program and whether it will end or loop forever, and there's no program we couldn't do that for (given enough time, and limiting the programs to Turing-capable machines). Therefore we are probably operating outside the bounds of what a Turing-capable machine can ever achieve. As such, simulating our kind of intelligence probably isn't possible for any machine that is merely Turing-complete. It would require something above and beyond what modern architectures are capable of, because ALL modern computers can be reduced to a Turing-machine equivalent. And if they can always be made equivalent, they are bound by the same rules as Turing-complete machines.

        But we are not. Now where that extra part comes from, and at what point it appears in terms of brain capacity, species longevity, DNA complexity or whatever, nobody has yet drawn a line. But a distinction - a clear, obvious and describable distinction - exists.

        Maybe quantum machines will change that, as they aren't bound by the same kinds of reductions (it is, in fact, very difficult to even simulate a quantum interaction on a traditional-architecture machine). But they are decades away from practicality, and certainly from any form of common usage.

        Intelligence is not a uniquely human thing, that's for sure, but it's our prime example of something clearly and obviously different from how machines operate. Yet not one computer in existence has ever "broken its programming", done something unexpected that isn't immediately explained on investigation, or formulated its own ideas without specific instructions. The closest we have are things like genetic algorithms, but even they are instructed how to perform, how to evolve, etc., by a Turing-capable machine.
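
        A toy genetic algorithm makes that visible: every step of the "evolution" (fitness, mutation, selection) is fixed in advance by the programmer. A minimal sketch in Python, evolving a bit string towards all 1s:

            import random

            GENOME_LEN = 20

            def fitness(genome):
                return sum(genome)                     # count the 1 bits

            def mutate(genome, rate=0.05):
                return [bit ^ 1 if random.random() < rate else bit
                        for bit in genome]

            population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                          for _ in range(30)]

            for generation in range(200):
                population.sort(key=fitness, reverse=True)
                if fitness(population[0]) == GENOME_LEN:
                    break
                survivors = population[:10]            # keep the fittest third
                population = survivors + [mutate(random.choice(survivors))
                                          for _ in range(20)]

            print("best after", generation, "generations:", population[0])

        The program "evolves" a solution, but it never steps outside the loop its author wrote.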

        It's not proof. I'm a mathematician; it's far from that. But it's a strong hint that there's a missing special element. And though we will craft amazing, powerful computers with a host of abilities, and sheer brute force can do amazing things, programming themselves or each other doesn't appear to be an ability we've ever managed to demonstrate.

        We are bound by the laws of physics, so every interaction we have is just a case of countless limited but predictable interactions forming our own and others' behaviour according to those laws. But if you've ever studied quantum interactions, you'll know that they quickly descend into non-rigid, unpredictable, chaotic randomness. Which means the basis of our own behaviour is not a rigid, fixed, limited machine (except at the Newtonian level, which is really just the statistical result of massive numbers of such interactions).

        Until machines can escape Turing-capability, either by extremely clever tricks or by extensions of how they work, I can't see how they can get past things like the Halting Problem and thus approach the capabilities we would need them to have. And when they do, if they do, it will be an overnight revelation and a MASSIVE leap, not some programmer somewhere getting lucky, pressing the right buttons, or finally putting his instructions in the right order. It will literally be out of the blue, amazing, and unprecedented, not a small extension of what the voice recognition in our phones can do.

        And the day it happens, you basically have a new era of computing. Because now programs can look at THEMSELVES and say "hold on, I could make that better, because otherwise that question would make me crash", or "I could optimise this to make myself faster". And you'd have exponential growth - self-awareness, in effect - on a level that we couldn't match ourselves.

        To be honest, my personal belief is that such things would be like compressing an already-compressed archive: there's a limit to what you can do even then. But the above would be radical, extraordinary, ground-breaking and era-creating. And it seems to me necessary for machines to escape the bounds of the Turing-machine equivalence they are trapped inside - bounds that I don't *believe* (I'm a mathematician; I can't prove it, but I believe it) we ourselves are limited by.

        Computers are a great tool. But the assumption that they can ever do more than follow orders is one that nobody has ever managed to substantiate.

        • (Score: 2) by nishi.b on Thursday September 08 2016, @01:01PM

          by nishi.b (4243) on Thursday September 08 2016, @01:01PM (#399133)

          OK, I can hear this argument, but I would like an example of something that no algorithm can prove (such as whether a program will run forever or not) that a human really can determine and prove, not just guess at heuristically.
          As you are a mathematician (and I am not), you may correct my belief that mathematical proofs can be represented in digital form, and are thus as understandable to a computer as to a human: if you can tell whether a program will ever stop by analysing its branches, what stops a computer from doing the same?
          We still do not have a proof that P != NP, so I would not call that a problem solved by humans, even if we largely assume it is true because there is no obvious way to replace an NP algorithm with a P one (or we would have found it already).

    • (Score: 2) by takyon on Thursday September 08 2016, @12:54AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday September 08 2016, @12:54AM (#398938) Journal

      No, it's out there [nextbigfuture.com]. Other companies have probably demonstrated similar capabilities, but will be selling directly to DARPA. In a few more years, neuromorphic or quantum hardware [nextbigfuture.com] will make it easier and better.

      I understand your skepticism, but the truth is we are all going to get steamrolled. Your timescale is completely off.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by ledow on Thursday September 08 2016, @12:25PM

        by ledow (5567) on Thursday September 08 2016, @12:25PM (#399127) Homepage

        I shall return to this post in one year's time and we'll see.

        "Silicon Valley... startup"
        "which they hope"
        "This year... will publish details ... will have demos" (P.S. Give me a shout when that happens, only three months left and that article is four months old)

        But when the lead-guy has a degree in "Entrepreneurship", it really doesn't fill me with confidence.

    • (Score: 3, Interesting) by frojack on Thursday September 08 2016, @02:06AM

      by frojack (1554) on Thursday September 08 2016, @02:06AM (#398960) Journal

      And then it's not really gaining insight, or learning, or hypothesising.

      Having deep memory of every possible move and statistics of each possible outcome pretty much trumps insight.
      Human insight, after all, may be a coping mechanism for our inability to have all possible actions catalogued ahead of time.

      Learning, then, becomes simply a matter of updating the statistics when actual outcomes become known.
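
      In code, that kind of "learning" is almost embarrassingly simple - a toy sketch in Python of per-move win-rate bookkeeping, not any particular engine's internals:

          from collections import defaultdict

          plays = defaultdict(int)   # how often each move was tried
          wins = defaultdict(int)    # how often it led to a win

          def record_outcome(move, won):
              """Update the statistics once the actual outcome is known."""
              plays[move] += 1
              if won:
                  wins[move] += 1

          def win_rate(move):
              return wins[move] / plays[move] if plays[move] else 0.0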

      These things are all possible today, and are done already.

      The programs may have been written by humans (and even this is not strictly true any more), but the data is amassed by the machine, and future actions are adjusted to take into account moves that didn't work out, those that did, and those that didn't matter.

      We probably have to stop thinking in strictly human terms. And if we don't, we will be beaten with our own stick.

      As long as we think our chemical brain is so superior, we are bound to be overtaken sooner or later by a digital mind or a hive of digital minds.

      Mankind simply can't seem to stop himself from creating Skynet.

      Don't get too comfortable.

      --
      No, you are mistaken. I've always had this sig.