
posted by martyb on Monday September 11 2017, @01:32AM
from the looks-like-they-blue-it dept.

It was an audacious undertaking, even for one of the most storied American companies: With a single machine, IBM would tackle humanity's most vexing diseases and revolutionize medicine.

Breathlessly promoting its signature brand — Watson — IBM sought to capture the world's imagination, and it quickly zeroed in on a high-profile target: cancer.

But three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn't living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM's goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care.

[...] Perhaps the most stunning overreach is in the company's claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, "even new approaches" to cancer care. STAT found that the system doesn't create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.

Watson "has failed to end a streak of 21 consecutive quarters of declining revenue at IBM." Ouch.


Original Submission

 
  • (Score: 5, Insightful) by ledow (5567) on Monday September 11 2017, @08:13AM (#566161) Homepage

    Notice that it doesn't say that anything actually happened. It just "seemed a lot less certain". That's pretty empty rhetoric.

    It's like suggesting that traditional electric companies ran in fear of solar startups. It's a nice image, but it's not actually true.

    The fact is that AI doesn't exist. We don't have the languages to express it, the hardware to run it, or the brains to make it. We have clever tricks, statistics and brute force, but nothing approaching any kind of "self-thinking" / "self-learning" computer. It has to be told exactly what to do with the input data, which means it's really just a bunch of heuristics. "Heuristics" has somehow become an AI buzzword, but applied to computers it just means "human-written rules".
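
    To make that concrete, here's a toy sketch in Python of what "heuristics" means in practice (the rules and thresholds are invented for illustration, not anything from Watson): every "decision" is a rule a human wrote down in advance.

        # A toy "expert system": the machine only ever does what the
        # hand-written rules say. Rules and thresholds are made up for
        # illustration; nothing here is learned from the data itself.
        RULES = [
            (lambda p: p["tumor_size_cm"] > 5.0,  "flag for surgery consult"),
            (lambda p: p["marker_level"] > 100.0, "flag for chemotherapy consult"),
        ]

        def recommend(patient: dict) -> str:
            """Apply the first matching human-written rule."""
            for condition, advice in RULES:
                if condition(patient):
                    return advice
            return "no rule matched: refer to a physician"

        print(recommend({"tumor_size_cm": 6.2, "marker_level": 40.0}))
        # -> "flag for surgery consult". The "intelligence" lives entirely
        #    in rules a human authored, which is the commenter's point.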

    Sure, it can do some clever stuff, but only because the heuristics are sufficiently advanced. Anybody who deals in this stuff knows it isn't actually miraculous. It's "clever". It puts on a good show. But its use is limited. Google's AlphaGo was much more of a surprise, but it's still the same thing in the end: its techniques are limited and difficult to apply outside theoretical logic problems and games.

    We just don't know how to tell a computer how to make itself learn. We lack the concepts to express it, a language to express them in, and an interpretation a computer can apply. If we leave everything to its own devices we end up with very, very limited neural networks and the like, which take years of training to do the very simplest of things and still fall foul of every pitfall. They are unprogrammable, unteachable, and unpredictable.
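
    Those "very limited" networks can be sketched in a few lines. A minimal example in plain numpy (all names and numbers are mine, for illustration): a tiny network learning XOR, which still needs thousands of gradient steps for a function a human could write in one line.

        import numpy as np

        # Four input/output pairs for XOR, the classic minimal non-linear task.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for step in range(20000):               # thousands of steps, for XOR
            h = sigmoid(X @ W1 + b1)            # forward pass
            out = sigmoid(h @ W2 + b2)
            g_out = (out - y) * out * (1 - out) # backpropagation, by hand
            g_h = (g_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(axis=0)
            W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(axis=0)

        print(out.round(2).ravel())             # approaches [0, 1, 1, 0]

    All the "learning" here is repeated arithmetic over a structure a human chose how to wire up; nothing in it knows what XOR is.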

    The day a machine becomes self-learning and self-aware is literally an epoch-changing moment. We'll start the "AI Age". That's not going to happen only 100 or so years after the invention of the computer. It's just not ready yet. And we can't even define how we learn or operate, let alone instruct a machine in such a way that it's then no longer reliant on our every instruction.

    ===

    Personally, I draw a comparison between AI and Turing-completeness. We're making machines that are solely Turing-complete. Everything we do is done in Turing-complete languages and hardware. But there's no evidence that "real" intelligence is limited to Turing-completeness. If we thinking beings operate on some higher level, it may well be that you cannot simulate us even with the largest Turing-complete machine in the universe. Even quantum computing is currently describable in Turing-complete terms. But what if you need something else to actually express the way we operate? What if all we can ever manage is Turing-complete tricks, such that a machine "looks" intelligent but can never actually be so? Sure, we'll use it for self-driving cars or whatever, and it will have some utility, but there will always be that upper bound that stops it progressing, or progressing on its own.
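
    For context, "Turing-complete" just means able to simulate the kind of toy machine below. A minimal sketch in Python (the machine and its rule table are my own illustration) of the model that, per the argument above, all our languages and hardware reduce to:

        from collections import defaultdict

        # A Turing machine: a finite rule table plus an unbounded tape.
        # rules maps (state, symbol) -> (symbol_to_write, move, next_state).
        def run(rules, tape, state="start", head=0, max_steps=1000):
            cells = defaultdict(lambda: "_", enumerate(tape))
            for _ in range(max_steps):
                if state == "halt":
                    break
                write, move, state = rules[(state, cells[head])]
                cells[head] = write
                head += 1 if move == "R" else -1
            return "".join(cells[i] for i in sorted(cells))

        # Example: increment a binary number, head starting on the last bit.
        rules = {
            ("start", "1"): ("0", "L", "start"),  # 1 + carry -> 0, keep carrying
            ("start", "0"): ("1", "R", "halt"),   # 0 + carry -> 1, done
            ("start", "_"): ("1", "R", "halt"),   # off the left edge: new digit
        }
        print(run(rules, "1011", head=3))         # 1011 + 1 = 1100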

    If we lack the means to describe intelligence ourselves, especially in a format a computer can interpret, what makes us think we can build computer-based intelligence?
    And can humans actually solve problems that Turing-complete machines cannot? The famous one that comes to mind is the Halting Problem. Could humans, given infinite time and resources, tell whether any given program was going to end? It seems to me that it just might be possible. If we can come up with multiple solutions that give definitive answers for any given program, all from a single intelligence, is that intelligence itself a general-purpose machine that could defeat the Halting Problem (at least in theory)?
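
    For Turing machines, at least, the classic diagonal argument answers half of that question, and it fits in a few lines of Python (halts() here is hypothetical, which is exactly the point):

        def halts(prog, arg):
            """Hypothetical decider: True iff prog(arg) eventually halts.
            Deliberately unimplemented; the construction below shows that no
            correct implementation can exist on a Turing-complete machine."""
            raise NotImplementedError

        def paradox(prog):
            # Do the opposite of whatever the decider predicts about prog(prog).
            if halts(prog, prog):
                while True:      # decider said "halts", so loop forever
                    pass
            return "halted"      # decider said "loops", so halt at once

        # Consider paradox(paradox). If halts(paradox, paradox) returns True,
        # paradox(paradox) loops forever; if it returns False, it halts.
        # Either answer is wrong, so no Turing-complete machine can implement
        # halts(). Whether a human mind escapes this argument is exactly the
        # open question above.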

    It seems to me that there's a missing element to intelligence: something our machines lack, something we may possess, and something we ourselves lack the capability to express. That doesn't mean it's some creator-given unique characteristic, just that we're not able to express it, as with many other things we can't describe. It may well be, especially given clock-speed and other physical limitations, that computer AI isn't even possible on the architectures we'd use for it. There's nothing in the human brain that oscillates three billion times a second, or even close, yet we seem to think that's what's necessary to simulate a human brain. Or even an insect brain.

    Personally, I think we're barking up the wrong tree and need to find an entirely new way to look at the problem, rather than blindly throwing brute force and billions of nodes at it.
