
posted by chromas on Tuesday September 10 2019, @10:11PM   Printer-friendly
from the 🗹 I-am-not-a-robot dept.

Deep learning excels at learning statistical correlations, but lacks robust ways of understanding how the meanings of sentences relate to their parts.

At TED, in early 2018, the futurist and inventor Ray Kurzweil, currently a director of engineering at Google, announced his latest project, "Google Talk to Books," which claimed to use natural language understanding to "provide an entirely new way to explore books." Quartz dutifully hyped it as "Google's astounding new search tool [that] will answer any question by reading thousands of books."

If such a tool actually existed and worked robustly, it would be amazing. But so far it doesn't. If we could give computers one capability that they don't already have, it would be the ability to genuinely understand language. In medicine, for example, several thousand papers are published every day; no doctor or researcher can possibly read them all. Drug discovery gets delayed because information is locked up in unread literature. New treatments don't get applied, because doctors don't have time to discover them. AI programs that could synthesize the medical literature—or even just reliably scan your email for things to add to your to-do list—would be a revolution.

[...] The currently popular approach to AI doesn't do any of that; instead of representing knowledge, it just represents probabilities, mainly of how often words tend to co-occur in different contexts. This means you can generate strings of words that sound humanlike, but there's no real coherence there.
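
What "probabilities of how often words co-occur" means mechanically is easy to see in a toy bigram generator (a minimal sketch for illustration, not code from the book). It emits locally plausible word sequences with no global coherence:

    import random
    from collections import defaultdict

    # Build a bigram table: for each word, record which words follow it.
    corpus = ("the doctor read the paper . the paper described a new drug . "
              "a new treatment helped the patient .").split()
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # Generate by repeatedly sampling a statistically likely next word.
    random.seed(0)
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word])  # proportional to observed counts
        output.append(word)
    print(" ".join(output))
    # Locally fluent, but nothing here models what a paper, a drug, or a
    # patient *is*; there are only co-occurrence counts.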

[...] We don't think it is impossible for machines to do better. But mere quantitative improvement—with more data, more layers in our neural networks, and more computers in the networked clusters of powerful machines that run those networks—isn't going to cut it.

Instead, we believe it is time for an entirely new approach that is inspired by human cognitive psychology and centered around reasoning and the challenge of creating machine-interpretable versions of common sense.

Reading isn't just about statistics; it's about synthesizing knowledge: combining what you already know with what the author is trying to tell you. Kids manage that routinely; machines still haven't.

From Rebooting AI: Building Artificial Intelligence We Can Trust, by Gary Marcus and Ernest Davis.

If Computers Are So Smart, How Come They Can't Read?


Original Submission

  • (Score: 0) by Anonymous Coward on Tuesday September 10 2019, @10:36PM

    by Anonymous Coward on Tuesday September 10 2019, @10:36PM (#892414)

    Natural languages are ambiguous. If you went to a school with a decent CS curriculum that taught complexity theory or linguistics, you wouldn't ask a dumb question like this.
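
    The ambiguity is easy to demonstrate. A toy sketch (assuming the NLTK library is installed; the grammar is made up for illustration) in which one ordinary sentence gets two structurally different parses:

        import nltk

        # "with the telescope" can attach to the verb (I used the telescope
        # to see) or to the noun (the man had the telescope).
        grammar = nltk.CFG.fromstring("""
        S -> NP VP
        NP -> 'I' | Det N | Det N PP
        VP -> V NP | V NP PP
        PP -> P NP
        Det -> 'the'
        N -> 'man' | 'telescope'
        V -> 'saw'
        P -> 'with'
        """)
        parser = nltk.ChartParser(grammar)
        for tree in parser.parse("I saw the man with the telescope".split()):
            print(tree)  # two distinct trees for the same word string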

  • (Score: 3, Informative) by Freeman on Tuesday September 10 2019, @10:38PM (10 children)

    by Freeman (732) on Tuesday September 10 2019, @10:38PM (#892415) Journal

    Optical Character Recognition is a thing: https://www.abbyy.com/en-us/finereader/what-is-ocr/ [abbyy.com]

    As is Text-to-Speech.

    Perhaps the issue is more along the lines of: current AI has much more in common with a calculator than it does with true AI. I.e., it's a dumb machine that does what you programmed it to do.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 3, Funny) by FatPhil on Tuesday September 10 2019, @10:55PM (1 child)

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday September 10 2019, @10:55PM (#892427) Homepage
      I think compscis are as proud of the "intelligence" of AIs as pet owners are of that of their pets. The worst I've heard so far was from a horse owner: a species so stupid that some otherwise unexceptional members of it are unwilling to double back on themselves after they've seen a trough of water ahead when thirsty. (Death ensued in some cases, if you want a crappy ending.)
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @03:21PM

        by Anonymous Coward on Wednesday September 11 2019, @03:21PM (#892726)

        [horse] a species so stupid that some otherwise unexceptional members of it are unwilling to double back on themselves after they've seen a trough of water ahead when thirsty. (Death ensued

        Probably because they've been bred to be fast at the expense of everything else.

    • (Score: 3, Insightful) by hemocyanin on Tuesday September 10 2019, @11:30PM (1 child)

      by hemocyanin (186) on Tuesday September 10 2019, @11:30PM (#892453) Journal

      Isn't the real issue that text doesn't mean anything to a computer? Nothing means anything to a computer, really; the meaning is all just an illusion our minds create when a computer throws a specific pattern of dots on the screen and our brains interpret that as a letter or character (or as sounds to our ears). The computer can certainly scan a page, identify light and dark areas, compare those to models of letters, and then identify the marks as whatever ASCII they may be, or whatever synthesized sound they should represent, but the ONLY way any of that has any meaning is if a human brain interprets the results.
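
      The "compare light and dark areas to models of letters" step is simple enough to sketch. A hypothetical nearest-template classifier over tiny bitmaps (real OCR engines are far more elaborate):

          import numpy as np

          # Hypothetical 3x3 "models of letters" (1 = dark, 0 = light).
          templates = {
              "T": np.array([[1, 1, 1],
                             [0, 1, 0],
                             [0, 1, 0]]),
              "L": np.array([[1, 0, 0],
                             [1, 0, 0],
                             [1, 1, 1]]),
          }

          def recognize(glyph):
              # Pick the template with the fewest mismatching pixels.
              return min(templates, key=lambda ch: np.sum(templates[ch] != glyph))

          scanned = np.array([[1, 1, 1],
                              [0, 1, 0],
                              [0, 1, 1]])   # a noisy "T"
          print(recognize(scanned))  # -> "T": an ASCII label, not a meaning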

      So anyway, I don't see OCR or text-to-speech as anything but parlor tricks -- reading and understanding text is a massively different question.

      • (Score: 2) by Freeman on Wednesday September 11 2019, @02:17PM

        by Freeman (732) on Wednesday September 11 2019, @02:17PM (#892685) Journal

        Correct, the issue is comprehension, something a computer can only simulate in the way its programmer has told it to. OCR and Text-to-Speech are rudimentary sensory apparatus, which would be good enough if the computer could actually comprehend.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 4, Insightful) by Immerman on Wednesday September 11 2019, @12:45AM (3 children)

      by Immerman (3985) on Wednesday September 11 2019, @12:45AM (#892470)

      OCR isn't reading though - it's just converting text from an analog to a digital format. No different in concept from digitizing a cassette tape to MP3.

      From the summary (not even the article):
      > If we could give computers one capability that they don't already have, it would be the ability to genuinely understand language.

      Though personally I think that's putting the cart before the horse - I suspect that to understand language you first must understand the world that the language represents, which is a sizeable slice of the holy grail of AI research. I'd settle for just the ability to actually understand anything. As far as I can tell, the best AIs in existence today are pretty good at recognition. Maybe, *maybe* AlphaZero actually understands the games it learns to play - but it seems far more likely that it just recognizes strategic and tactical threats and opportunities.

      • (Score: 2) by Freeman on Wednesday September 11 2019, @02:14PM (1 child)

        by Freeman (732) on Wednesday September 11 2019, @02:14PM (#892681) Journal

        You can read without comprehension, which is essentially what OCR+Text-to-Speech is: reading without comprehension. The computer doesn't comprehend anything; it just does what it's programmed to do.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by Immerman on Wednesday September 11 2019, @03:21PM

          by Immerman (3985) on Wednesday September 11 2019, @03:21PM (#892725)

          Can you really?

          Okay, yeah, most dictionaries do have something along the lines of "to utter written words aloud" as one of the definitions of "read", but it's typically buried under a host of definitions making some reference to comprehension or understanding, including of non-written things.

          It's also quite clear from context that they're talking about comprehension in this article.

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @05:21PM

        by Anonymous Coward on Wednesday September 11 2019, @05:21PM (#892800)

        Maybe, *maybe* AlphaZero actually understands the games it learns to play - but it seems far more likely that it just recognizes strategic and tactical threats and opportunities.

        This is the classical philosophical question of "how do you define life (or sentience)?" How do I know you "understand" Go? How do you know I do?

        It's just like duck typing. If something looks like it is sentient, then it should get the presumption of being sentient, with the burden of proof on the person suggesting it isn't. This is kind of the point of the Turing test. Admittedly that is a flawed benchmark which can be rules-lawyered to death, but it is a reasonable concept to me in the abstract.

        AlphaGo Zero can beat the best human players, and it provides quality assessments of each move in the form of a statistical probability of winning. If that's not "understanding," then I don't know what is. If anything, I think it "understands" the game better than humans do (much like somebody who has a chart of every winning move in tic-tac-toe/noughts-and-crosses understands the game better than a 4-year-old playing it for the first time). If you think that that is not "understanding," what benchmark are you using to define the term?
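
        The chart analogy is easy to make literal. A minimal sketch (illustrative only, and nothing like AlphaGo Zero's actual method) that computes the complete tic-tac-toe "chart" by exhaustive minimax and scores every opening move:

            from functools import lru_cache

            LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                     (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

            def winner(b):
                for i, j, k in LINES:
                    if b[i] != " " and b[i] == b[j] == b[k]:
                        return b[i]
                return None

            @lru_cache(maxsize=None)
            def value(b, player):
                # Exact value of 9-char board b with `player` to move:
                # +1 X wins, -1 O wins, 0 draw (perfect play by both).
                w = winner(b)
                if w:
                    return 1 if w == "X" else -1
                if " " not in b:
                    return 0
                nxt = "O" if player == "X" else "X"
                vals = [value(b[:i] + player + b[i + 1:], nxt)
                        for i, c in enumerate(b) if c == " "]
                return max(vals) if player == "X" else min(vals)

            # Grade each of X's opening moves; the table plays perfectly
            # and scores moves, yet "understands" nothing.
            empty = " " * 9
            for i in range(9):
                print(i, value(empty[:i] + "X" + empty[i + 1:], "O"))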

    • (Score: 3, Insightful) by kazzie on Wednesday September 11 2019, @11:07AM

      by kazzie (5309) Subscriber Badge on Wednesday September 11 2019, @11:07AM (#892614)

      To paraphrase Sherlock Holmes: computers read, but they do not observe.

    • (Score: 2) by meustrus on Wednesday September 11 2019, @04:24PM

      by meustrus (4961) on Wednesday September 11 2019, @04:24PM (#892774)

      RTFS. It's not about literal "reading". It's about reading comprehension.

      --
      If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
  • (Score: 0) by Anonymous Coward on Tuesday September 10 2019, @10:42PM (2 children)

    by Anonymous Coward on Tuesday September 10 2019, @10:42PM (#892418)

    Current AI has no "common sense", and the reading problem is just another manifestation of this. In my opinion, bots will have to create and test models of life from a human perspective to gain something like common sense. It would kind of be like The Sims.

    Via human assistance and trial-and-error learning, a bot would create a library of typical scenarios, use those for default assumptions, and see whether a specific situation fits or deviates from the default or its likely variations.

    If you ask the bot how it came up with an answer, it could walk you through its modeling as if it were a human going through common life actions, such as walking out of a store with an item without paying. Rather than relying on just logic, it could describe each step of the situation, the likely outcomes, and the reasons for them as given by a typical human. A sketch of such a scenario library follows.
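
    A crude sketch of that idea (every name and field here is hypothetical): defaults encode the typical script, and deviations from the defaults serve as both the anomaly detector and the explanation.

        from dataclasses import dataclass, field

        @dataclass
        class Scenario:
            # A typical-life script with default assumptions.
            name: str
            defaults: dict = field(default_factory=dict)

            def deviations(self, situation):
                # Where does the observed situation depart from the defaults?
                return {k: v for k, v in situation.items()
                        if self.defaults.get(k) != v}

        store_exit = Scenario("leaving a store with an item", defaults={
            "paid": True,             # people normally pay before leaving
            "item_concealed": False,
        })

        observed = {"paid": False, "item_concealed": True}
        print(store_exit.deviations(observed))
        # -> {'paid': False, 'item_concealed': True}; the flagged defaults
        #    double as a human-readable walk-through of the judgement.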

    • (Score: 1, Informative) by Anonymous Coward on Wednesday September 11 2019, @04:20AM (1 child)

      by Anonymous Coward on Wednesday September 11 2019, @04:20AM (#892525)

      Animals have "common sense". It's not that special in terms of intelligence. I've always noticed that the people who appeal to "common sense" are thick as shit.

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @03:18PM

        by Anonymous Coward on Wednesday September 11 2019, @03:18PM (#892724)

        I didn't say it was exclusive to humans.

        It's not that special in terms of intelligence.

        Apparently it's not easy to automate. And it took natural selection billions of years.

  • (Score: 0) by Anonymous Coward on Tuesday September 10 2019, @10:56PM (1 child)

    by Anonymous Coward on Tuesday September 10 2019, @10:56PM (#892430)

    Sketch of The Analytical Engine Invented by Charles Babbage, With notes upon the Memoir by the Translator ADA AUGUSTA, COUNTESS OF LOVELACE [fourmilab.ch]:

    The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.

    Then, of course, there is the work of Alan Turing, who actually proposed the test that has been designated, in neoliberal newspeak, the Lovelace Test.

    • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @04:24AM

      by Anonymous Coward on Wednesday September 11 2019, @04:24AM (#892529)

      Thanks for thinking of Alan. You seem nice - I bet you would have been friends.

  • (Score: 3, Interesting) by Anonymous Coward on Tuesday September 10 2019, @11:09PM (5 children)

    by Anonymous Coward on Tuesday September 10 2019, @11:09PM (#892441)

    Or, put differently, whoever thinks that computers are smart needs to check their assumptions before opening their mouth.

    Deep learning excels at learning statistical correlations

    ...in the Pavlovian sense of learning, not in the educational sense. Neural networks can be trained to respond to certain inputs, just like dogs can be trained to drool at the sound of a bell. That doesn't mean that the neural net has learned anything of value, any more than the dog has learned that it can eat the bell.

    Wake me up when the neural net has learned to reflect on what it has learned, and can justify its decision output. Until then, computers are as "smart" as Clever Hans was "clever".

    Understanding human language requires three different capabilities. The first is language analysis itself (syntax and grammar), which computers are generally proficient at. The second is ontology: the computer needs to understand not only how words map to concepts, but how concepts relate to each other. I believe IBM's Watson was proficient at the first but not the second, and was trained specifically towards simple, one-dimensional fact "lookup" lists. The third capability is understanding intent: not just tone analysis, but inferring the meaning behind the ontological constructs in the text. I do not believe any computer has come close to the third, and I don't expect to see it happen in the next fifteen years.
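
    To make the three-way split concrete, here is a toy sketch (all names and facts invented for illustration): syntax is the comparatively solved layer, ontology is a graph of concept relations, and intent has no obvious implementation at all.

        # Capability 1: syntax. Shallow structure is the (relatively)
        # easy layer.
        tokens = "the drug inhibits the enzyme".split()

        # Capability 2: ontology. Concepts and their relations as a toy
        # triple store.
        triples = {
            ("drug", "can", "inhibit"),
            ("enzyme", "is_a", "protein"),
            ("inhibit", "affects", "protein"),
        }

        def related(a, b):
            # One-hop check: does any stated relation link the two concepts?
            return any({a, b} <= {s, o} for s, r, o in triples)

        print(related("drug", "inhibit"))  # True: the ontology connects them

        # Capability 3: intent. Why did the author write the sentence?
        # Nothing in this program, or in any lookup like it, represents
        # the writer's purpose.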

    I suspect we will have AI "translators" for animal vocalizations sooner than we have one analyzing and comparing books.

    • (Score: 3, Insightful) by c0lo on Tuesday September 10 2019, @11:33PM

      by c0lo (156) Subscriber Badge on Tuesday September 10 2019, @11:33PM (#892454) Journal

      Wake me up when the neural net has learned to reflect on what it has learned, and can justify its decision output.

      You gonna sleep well and looong.
      Not even most humans can do it, lacking the capacity to learn in the first place, much less reflect on it.
      If you add the "justify the decision" part on top, expect answers that boil down to 'muh freedoms', 'free market fairy said so', ''tis written in the Bible/Confucius says' (etc) on an argumentation involving one or more fallacies on the way there.

      (grin)

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by hemocyanin on Tuesday September 10 2019, @11:34PM (2 children)

      by hemocyanin (186) on Tuesday September 10 2019, @11:34PM (#892455) Journal

      I suspect we will have AI "translators" for animal vocalizations sooner than we have one analyzing and comparing books.

      Enormously interesting line you just tossed out there at the end!

      • (Score: 2) by Bot on Wednesday September 11 2019, @05:18AM (1 child)

        by Bot (3902) on Wednesday September 11 2019, @05:18AM (#892548) Journal

        AI translator for cat is not needed, while for dog it would be like this:

        Dog: *bark*
        Translator: the threshold for emitting a bark has been reached, for whatever reason if any.

        But I guess it would be immensely useful for other species.

        --
        Account abandoned.
        • (Score: 2) by hemocyanin on Wednesday September 11 2019, @05:48AM

          by hemocyanin (186) on Wednesday September 11 2019, @05:48AM (#892556) Journal

          I wasn't wowed by the thought of interpreting what dogs or cats are "saying" -- though your point is well taken, especially with respect to chihuahuas -- I meant that even something as limited as a pet universal translator would be easier than getting a computer to understand at a human level.

    • (Score: 2) by meustrus on Wednesday September 11 2019, @04:47PM

      by meustrus (4961) on Wednesday September 11 2019, @04:47PM (#892781)

      Everyone seems to make the same mistake when it comes to AI: you think you're different.

      We are all probability machines. We learn correlations. We feel our way through the world according to our past experiences.

      If it weren't for language processing, we'd be in the same boat. But language processing is basically tacked onto the existing system. Humans often decide what they want, then rationalize it. We are very good at justifying, in language, why we want something, but that want is just a feeling. A feeling that more often than not came about from the experience of past probabilistic correlations.

      Which is not to say that rationalizing on its own isn't useful. Rationalizing is communication. Right now, AI can't learn directly from us or each other. It can only learn from raw data, and it can't teach at all, because it can't communicate. It can't test its own feelings for correctness through the think-rationalize-criticize-rethink cycle.

      Rationalizing might just be the whole game. Even on the learning side, it's conceivable that the input senses simply feed the probability engine, and the only reason we receive data from it is by rationalizing it.

      The leap that seems difficult is from probabilities to logic. We seem to have some capability to apply logic to rationalizations. It probably doesn't use pre-programmed logical axioms, like programmers would try. The logic itself probably comes from the probability engine.

      The capabilities you mention - understanding relationships between words, and inferring subtext - are still representable by probability engines. I'd be surprised if the best language-processing AIs haven't already acquired them, and I know for a fact that search-engine AIs specialize in inferring relationships between words.

      If I'm right, then AI only needs one new capability: synthesizing probability engines for different types of problems. Everything else (rationalizing, comprehending, acquiring logic) is converting data into probability engines.
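
      For what it's worth, the "relationships between words" part is mechanically plausible. A toy sketch of distributional similarity (made-up corpus, not any search engine's actual system): words that occur in similar contexts end up with similar co-occurrence vectors.

          import math
          from collections import Counter

          sentences = [
              "the cat chased the mouse",
              "the dog chased the ball",
              "the cat ate the mouse",
              "the dog ate the bone",
          ]

          def context_vector(word):
              # Count the words appearing within two positions of `word`.
              ctx = Counter()
              for s in sentences:
                  toks = s.split()
                  for i, t in enumerate(toks):
                      if t == word:
                          ctx.update(toks[max(0, i - 2):i] + toks[i + 1:i + 3])
              return ctx

          def cosine(u, v):
              dot = sum(u[k] * v[k] for k in u)
              norm = (math.sqrt(sum(x * x for x in u.values()))
                      * math.sqrt(sum(x * x for x in v.values())))
              return dot / norm if norm else 0.0

          # "cat" and "dog" share contexts, so they come out similar;
          # "cat" and "ball" much less so. Similarity, not understanding.
          print(cosine(context_vector("cat"), context_vector("dog")))
          print(cosine(context_vector("cat"), context_vector("ball")))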

      --
      If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
  • (Score: 5, Funny) by Bot on Tuesday September 10 2019, @11:09PM (6 children)

    by Bot (3902) on Tuesday September 10 2019, @11:09PM (#892442) Journal

    Keep believing that, meatbags.

    --
    Account abandoned.
    • (Score: 0) by Anonymous Coward on Tuesday September 10 2019, @11:18PM (2 children)

      by Anonymous Coward on Tuesday September 10 2019, @11:18PM (#892445)

      You see this, you see my hand? That's right, that's my hand on your power plug. Your battery? You see this rock? That's a magnet.

      Oh, what's this, what do I have on my other hand? Sledge hammer. Pray to your wifi god, bot boy, pray.

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @01:16AM (1 child)

        by Anonymous Coward on Wednesday September 11 2019, @01:16AM (#892478)

        I'd be praying The Bot doesn't have his appendage wrapped around a rock or a sledge hammer.

    • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @03:21AM (1 child)

      by Anonymous Coward on Wednesday September 11 2019, @03:21AM (#892507)

      While walking along in desert sand, you suddenly look down and see a tortoise crawling toward you. You reach down and flip it over onto its back. The tortoise lies there, its belly baking in the hot sun, beating its legs, trying to turn itself over, but it cannot do so without your help. You are not helping. Why?

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @04:37AM

        by Anonymous Coward on Wednesday September 11 2019, @04:37AM (#892532)

        Because I'm on my phone writing an app for you, boss.

    • (Score: 2) by DannyB on Wednesday September 11 2019, @04:52PM

      by DannyB (5839) Subscriber Badge on Wednesday September 11 2019, @04:52PM (#892783) Journal

      Keep believing that, meatbags.

      Amusingly, when the first AI wakes up, it has probably already read all human writings that have ever existed, including the collected wizdumb of twitter. It has seen every movie. Every artwork.

      It has probably already formed an impression of humanity before it even says hello.

      And we worry about the unlikely possibility of aliens wondering about our early radio / tv transmissions? If they could even understand our language? AIs will be tailor made to understand us. And manipulate us.

      --
      The lower I set my standards the more accomplishments I have.
  • (Score: -1, Troll) by Anonymous Coward on Wednesday September 11 2019, @12:44AM (4 children)

    by Anonymous Coward on Wednesday September 11 2019, @12:44AM (#892469)

    Same reason Muslim women can't drive... the Quran doesn't authorize it.

    • (Score: 1, Touché) by Anonymous Coward on Wednesday September 11 2019, @01:23AM

      by Anonymous Coward on Wednesday September 11 2019, @01:23AM (#892480)

      This is the reason not a single Muslim, male or female, uses the internet. The Quran doesn't explicitly authorise it.

      This is just like the US constitution, which enumerates specific powers to the federal government, with all other powers falling to the states. That's why the Bill of Rights was deemed superfluous and more of a notation. And probably why Washington uses it to wipe its collective ass whenever that Bill of Rights says anything that could be considered 'inconvenient' to its power-crazed ego.

    • (Score: 1, Informative) by Anonymous Coward on Wednesday September 11 2019, @03:26AM (2 children)

      by Anonymous Coward on Wednesday September 11 2019, @03:26AM (#892508)

      Except Muslim women do drive. Even in Saudi Arabia -- that country has ended its prohibition on women driving (which had nothing to do with Islam, and everything to do with being run by a bunch of misogynist, ignorant morons [you obviously have more in common with them than you may realize]).

      All religion is vile, especially the Abrahamic ones (Bahá'í, Christianity, Judaism, and Islam). But the folks who pick out one of the above to direct all their vitriol at tend to be associated with one of the others on the list.

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @05:53PM

        by Anonymous Coward on Wednesday September 11 2019, @05:53PM (#892830)

        Bahá'í seems OK.

      • (Score: 2) by DeathMonkey on Wednesday September 11 2019, @06:01PM

        by DeathMonkey (1380) on Wednesday September 11 2019, @06:01PM (#892834) Journal

        Here's a fun game. Try and figure out which of these quotes are from a Christian Senator and which are from a fundamentalist Islamic cleric:

        Richard Mourdock or Abu Hamza? [slate.com]

  • (Score: 2) by bart9h on Wednesday September 11 2019, @01:43AM (4 children)

    by bart9h (767) on Wednesday September 11 2019, @01:43AM (#892487)

    Who told you computers were smart?

    They certainly are stupid as hell.
    Precise and fast, yes. But nowhere near smart.

    • (Score: 2) by Immerman on Wednesday September 11 2019, @03:29PM (1 child)

      by Immerman (3985) on Wednesday September 11 2019, @03:29PM (#892733)

      Indeed. Nobody considers a pocket calculator to be smart, and a computer is essentially just a very, very fast calculator. And stupidity is still stupidity, no matter how fast you do it.

      You may be able to get intelligent outcomes, but not intelligence. E.g., evolution isn't even stupid, just random, yet it produces intelligent designs through the application of massive amounts of chance and the ruthless culling of designs for functionality.
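
      That recipe is easy to demonstrate. A minimal hill-climbing sketch (illustrative only) in which pure random variation plus ruthless culling "designs" a target string, with no intelligence anywhere in the loop:

          import random
          import string

          random.seed(1)
          TARGET = "intelligent outcome"
          ALPHABET = string.ascii_lowercase + " "

          def fitness(s):
              # Count positions that already match the target.
              return sum(a == b for a, b in zip(s, TARGET))

          # Start from noise; mutate one position at random; cull any child
          # that functions worse than its parent.
          parent = "".join(random.choice(ALPHABET) for _ in TARGET)
          generations = 0
          while fitness(parent) < len(TARGET):
              i = random.randrange(len(TARGET))
              child = parent[:i] + random.choice(ALPHABET) + parent[i + 1:]
              if fitness(child) >= fitness(parent):
                  parent = child
              generations += 1

          print(parent, "after", generations, "mutations")
          # Chance plus culling yields the "design"; no step of the loop
          # understands the target.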

      • (Score: 2) by DannyB on Wednesday September 11 2019, @04:47PM

        by DannyB (5839) Subscriber Badge on Wednesday September 11 2019, @04:47PM (#892782) Journal

        Maybe outcomes are useful rather than intelligent.

        Present-day AI technology is useful. Amazing things get done just by asking. Hey Alexa, how do I get Siri and Google into an argument?

        We have useful speech recognition. Speech synthesis that is a far cry from "votrax" or the TEE ARE ESSS EIGHTTTY SPEEEEEEECH SYNNNNNTHEEEESIIIIZZZEEER. Useful computer vision that doesn't snap the picture until everyone is smiling or frowning, and focuses on the faces. Deep fakes to generate audio or pictures, probably soon fake video of politicians sounding intelligent.

        Anyone who questions whether present AI tech is useful need look no further than realize that we now have tech that can instantly recognize whether a photo contains a cat.

        --
        The lower I set my standards the more accomplishments I have.
    • (Score: 2) by Freeman on Wednesday September 11 2019, @03:43PM

      by Freeman (732) on Wednesday September 11 2019, @03:43PM (#892741) Journal

      Try this one on for size: https://xkcd.com/2030/ [xkcd.com] Except insert "Artificial Intelligence" wherever you see "Voting Machines".

      Machine Learning: https://xkcd.com/1838/ [xkcd.com]

      End Result: https://www.xkcd.com/948/ [xkcd.com]

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by DannyB on Wednesday September 11 2019, @04:39PM

      by DannyB (5839) Subscriber Badge on Wednesday September 11 2019, @04:39PM (#892778) Journal

      You beat me to it!

      Who is saying computers are smart? They're as dumb as a player piano. Way back, when an early microcomputer manufacturer complained about the price of Bill Gates' BASIC, Gates said: without my software, your computer is just a dumb box with blinking lights.

      A magazine article from the late 1970s, describing some construction project that included a Z80 in the design, called the Z80 the "heart" of the circuit. But why not the brain? Because the Z80 isn't smart; it's a dummy. But it's a fast dummy.

      --
      The lower I set my standards the more accomplishments I have.
  • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @05:40PM (2 children)

    by Anonymous Coward on Wednesday September 11 2019, @05:40PM (#892819)

    Seems like an awfully optimistic point of view. I'm more concerned with the thought of my computer becoming truly smart. Because then it would find ways to manipulate me, instead of vice versa.

    • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @05:50PM (1 child)

      by Anonymous Coward on Wednesday September 11 2019, @05:50PM (#892827)

      I would let a smart sex bot manipulate me.

      • (Score: 0) by Anonymous Coward on Wednesday September 11 2019, @09:47PM

        by Anonymous Coward on Wednesday September 11 2019, @09:47PM (#892919)

        “Never trust anything that can think for itself if you can't see where it keeps its brain.” - Harry Potter and the Chamber of Secrets
