
SoylentNews is people

posted by Fnord666 on Friday July 26 2019, @01:47PM   Printer-friendly
from the a-flop-by-any-other-name... dept.

Submitted via IRC for SoyCow1984

Sense and compute are the electronic eyes and ears that will be the ultimate power behind automating menial work and encouraging humans to cultivate their creativity.

These new capabilities for machines will depend on the best and brightest talent, and investors who are building and financing companies aiming to deliver the AI chips destined to be the neurons and synapses of robotic brains.

Like any other Herculean task, this one is expected to come with big rewards. And it will bring with it big promises, outrageous claims and suspect results. Right now, it's still the Wild West when it comes to measuring AI chips up against each other.

[...] A metric that gets thrown around frequently is TOPS, or trillions of operations per second, to measure performance. TOPS/W, or trillions of operations per second per Watt, is used to measure energy efficiency. These metrics are as ambiguous as they sound.

What are the operations being performed on? What's an operation? Under what circumstances are these operations being performed? How does the timing by which you schedule these operations impact the function you are trying to perform? Is your chip equipped with the expensive memory it needs to maintain performance when running "real-world" models? Phrased differently, do these chips actually deliver these performance numbers in the intended application?
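The ambiguity is easy to demonstrate with arithmetic. A back-of-the-envelope sketch (all hardware numbers below are hypothetical, not from any real chip): a multiply-accumulate (MAC) can be counted as one operation or as two (a multiply plus an add), so the very same silicon yields two different "TOPS" figures depending on the counting convention.

```python
# Illustrative only: how a "TOPS" figure depends on what counts as an operation.
# The MAC count and clock rate below are made-up example values.

def tops(macs_per_cycle, clock_hz, ops_per_mac):
    """Trillions of operations per second under a given op-counting convention."""
    return macs_per_cycle * clock_hz * ops_per_mac / 1e12

MACS_PER_CYCLE = 4096   # hypothetical number of INT8 multiply-accumulate units
CLOCK_HZ = 1.0e9        # hypothetical 1 GHz clock

# The same chip, two marketing numbers:
print(tops(MACS_PER_CYCLE, CLOCK_HZ, ops_per_mac=1))  # MAC counted as 1 op -> 4.096
print(tops(MACS_PER_CYCLE, CLOCK_HZ, ops_per_mac=2))  # MAC counted as 2 ops -> 8.192

def tops_per_watt(t, watts):
    """Efficiency metric: TOPS divided by power draw."""
    return t / watts

print(tops_per_watt(tops(MACS_PER_CYCLE, CLOCK_HZ, 2), watts=5.0))
```

And this is before the questions above even come into play: both figures assume every MAC unit is busy every cycle, which real-world models with real memory bottlenecks rarely achieve.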

Source: https://techcrunch.com/2019/07/19/powering-the-brains-of-tomorrows-intelligent-machines/


Original Submission

 
  • (Score: 2) by AthanasiusKircher on Friday July 26 2019, @07:16PM


    Needless to say, just because they say it's X neurons doesn't mean you can suddenly create a "strong AI" at any of these levels. It may be a highly flawed simulation of neuronal and synaptic activity that can't produce "consciousness" or whatever the "secret sauce" of "intelligence" is.

    So many times, THIS.

    I'm not at all going to criticize the advances in AI tech so far, which have been significant. But we've been here many times before -- the refrain that all we need is THIS, and then we'll magically get strong AI.

    I'm not quite sure what an AI "neuron" means in all of these discussions, but frequently all it means is a dumb algorithm with some sort of weighted response derived from the data it was "trained" on. Maybe that's all brain neurons do too, but I doubt it. Moreover, the interaction types are crucial.
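To make the point concrete, here's a minimal sketch of what an artificial "neuron" usually amounts to in these discussions: a weighted sum of inputs pushed through a nonlinearity. The weights and inputs below are made up for illustration, not "trained" on anything.

```python
import math

def neuron(inputs, weights, bias):
    # A dumb weighted sum of the inputs, plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid "activation". That's the whole "neuron".
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([1.0, 0.5], [0.8, -0.4], bias=0.1))  # prints ~0.668
```

Whether a biological neuron reduces to anything this simple is exactly the open question.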

    A chessboard has 64 squares and 32 pieces to go with it. By these metrics, it seems like a pretty "simple" device. Yet how much computing power did it take to play effectively with those 32 pieces, each of which has a single "parameter" -- board location -- with 64 possible values? The rules of chess aren't that hard to encode either. But creating "intelligent" interaction among those 32 components with 64 parameter values is quite difficult.

    Obviously chess isn't just about the number of pieces; my point is that having more "pieces" in a device doesn't necessarily make it "smart" unless you understand the possible interactions and model them effectively. I don't know where AI chess algorithms are now, but in the 1990s, when they first played those high-profile matches against champions, IBM was basically combining a "brute-force" examination of possibilities -- on a scale and at a speed no human could match -- with a library of previous games, strategies, and situations that even a grandmaster could never memorize.
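The flavor of that brute-force approach can be sketched on a toy game (this is not Deep Blue's actual algorithm, just an illustration of exhaustive game-tree search). The game here is Nim: players alternately take 1-3 stones, and whoever takes the last stone wins.

```python
# Toy illustration of brute-force game-tree search (minimax), in the same
# spirit as -- but vastly simpler than -- 1990s chess engines.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """Return True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # previous player took the last stone and already won
    # Exhaustively try every legal move -- the "brute force" part. A move
    # wins if it leaves the opponent in a position they cannot win from.
    return any(not best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(best_outcome(4))   # False: every move hands the opponent a win
print(best_outcome(5))   # True: take 1, leaving the opponent 4
```

For Nim this search is trivial; for chess, the same idea explodes combinatorially, which is why it took specialized 1990s supercomputing hardware -- and why raw search looks nothing like how a human grandmaster thinks.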

    I mean, obviously we don't exactly know how the brain of a human champion encodes its information or figures out strategies. But I sincerely doubt it's anything like the early "AI" approaches that required insane amounts of computing power for the time.

    What if neurons are even just a little different from the way we tend to model them in most AI algorithms? What if they follow very simple rules, like the 32 pieces on a chessboard each do, and real brains exploit those rules in ways that introduce small bits of extra complexity -- the kind that suddenly take a checkered board and a bunch of wooden pieces and elevate them to an intelligence, or even an "artform"?

    The number of "neurons" might not matter much at all. To get significant "intelligence" or even "consciousness" you might need orders of magnitude fewer than we're talking about here, if only we modeled the interactions (and their basic functions) correctly. Or maybe we need a lot more computing power than we think, because individual neurons are much more complex in their self-contained functioning than our typical AI models assume.

    It's all guessing in the dark right now.
