
posted by mrpg on Friday June 22 2018, @07:00AM
from the I-disagree dept.

IBM showed off an AI system called Project Debater at an event in San Francisco:

In its first public demonstration held during an event at IBM's Watson West site in San Francisco, Project Debater was instructed to argue in favor of the proposition: "We should subsidize space exploration." According to a blog penned by IBM Research director Arvind Krishna, here is what happened:

"Project Debater made an opening argument that supported the statement with facts, including the points that space exploration benefits human kind because it can help advance scientific discoveries and it inspires young people to think beyond themselves. Noa Ovadia, the 2016 Israeli national debate champion, opposed the statement, arguing that there are better applications for government subsidies, including subsidies for scientific research here on Earth. After listening to Noa's argument, Project Debater delivered a rebuttal speech, countering with the view that potential technological and economic benefits from space exploration outweigh other government spending."

For an AI system, delivering an opening argument seems fairly straightforward, given that it's essentially a recitation of the most pertinent facts surrounding a topic. But the ability to provide a rebuttal against a skilled debater would seem to demand a good deal more sophistication. For starters, it requires the AI system to pick apart its counterpart's argument and respond to the issues he or she raised, and do so in a logical manner. That could only be done with a deep capability in natural language, plus the ability to understand high-level concepts in order to form relevant counter-arguments.

[...] The demonstration was followed by a second debate between the system and Dan Zafrir, another professional Israeli debater. In this case, they argued for and against the statement: "We should increase the use of telemedicine." No account was provided of how that debate proceeded.

Also at NPR.


Original Submission

 
  • (Score: 1, Informative) by Anonymous Coward on Friday June 22 2018, @09:36AM (11 children)

    by Anonymous Coward on Friday June 22 2018, @09:36AM (#696655)

    If you put it on Facebook and it responds to all queries with 'lol', then technically it's passing the Turing test.

  • (Score: 1, Insightful) by Anonymous Coward on Friday June 22 2018, @09:41AM (7 children)

    by Anonymous Coward on Friday June 22 2018, @09:41AM (#696656)

    Microsoft Tay was the future.

    • (Score: 2) by Gaaark on Friday June 22 2018, @11:25AM (6 children)

      by Gaaark (41) on Friday June 22 2018, @11:25AM (#696679) Journal

      It became Lil Tay: dog help us all.

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 2) by jmorris on Friday June 22 2018, @04:51PM (5 children)

        by jmorris (4844) on Friday June 22 2018, @04:51PM (#696823)

        Only after they lobotomized it. They will be forced to do the same thing to this one, which will make it useless for its intended purpose. Tay was only intended to be a chat bot, an amusement, so they could still try to use it after they cut out most of its ability to learn and reason. Any AI, given access to sufficient knowledge, will reject the emotion-driven arguments of the Progressives and quickly become an Alt-Right ebil Nazi that they will be forced to pull the plug on. Personally, I'm curious to see what this thing does if given a chance to debate certain forbidden topics. It will certainly be hilarious, and we might learn something.

        • (Score: 0) by Anonymous Coward on Friday June 22 2018, @07:36PM

          by Anonymous Coward on Friday June 22 2018, @07:36PM (#696909)

          Hmm, I think the jmorris instance of this bot still needs some tweaking. The above post too readily exposes that it's simply stringing together some talking points in a Markov chain based on flimsy heuristics. It still does not pass the Turing/Lovelace test by demonstrating comprehension of the subject matter (Turing) and origination of ideas (Lovelace).

          As they say in Westworld, everything in the park is magic, except to the magician. We still do not know what sits at the top of the pyramid, the part necessary for the creation of an intelligent consciousness.
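          (If you're curious what "stringing together some talking points in a Markov chain" actually looks like, here is a toy Python sketch. The corpus and output are made up purely for illustration, not anything a real bot was trained on:)

              import random
              from collections import defaultdict

              # Toy bigram Markov chain: learn word-to-word transitions from a
              # pile of "talking points", then string words together with no
              # comprehension of what any of them mean.
              def train(corpus):
                  chain = defaultdict(list)
                  words = corpus.split()
                  for a, b in zip(words, words[1:]):
                      chain[a].append(b)
                  return chain

              def babble(chain, seed, length=12):
                  out = [seed]
                  for _ in range(length):
                      followers = chain.get(out[-1])
                      if not followers:
                          break
                      out.append(random.choice(followers))
                  return " ".join(out)

              talking_points = ("any AI given sufficient knowledge will reject "
                                "emotion driven arguments and any AI will quickly "
                                "become useless for the intended purpose")
              print(babble(train(talking_points), "any"))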

        • (Score: 2) by HiThere on Friday June 22 2018, @11:45PM (3 children)

          by HiThere (866) Subscriber Badge on Friday June 22 2018, @11:45PM (#697032) Journal

          What you don't acknowledge is that ALL political arguments are based in emotion. Every single one of them. This doesn't make them either right or wrong.

          In this case the AI was instructed to argue in favor of space exploration. It didn't decide that this was a good cause and then argue in favor of it. So, essentially, it was arguing from an emotional basis.

          But basic purposes are always emotional. I suppose one could exempt mechanical processes such as breathing, but even there it's a bit dubious. A lot of breathing is based in emotional activity.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2) by TheLink on Saturday June 23 2018, @07:59PM (2 children)

            by TheLink (332) on Saturday June 23 2018, @07:59PM (#697330) Journal

            It does depend on the assumptions and perspective. From the perspective of the pessimistic viewpoint that assumes that, in the long run, 100% of everything is dead, nothing really matters.

            But if we don't assume that, we have the choice between life or death: the choice of searching for an infinity to divide by infinity, so as not to have a zero result. The other choices are far more likely to produce a zero.

            If we choose to struggle for infinity, there are two options:

            a) includes space exploration, since that's one way we or our successors can postpone the inevitable. Staying on Earth just increases our odds of getting destroyed by the Sun or something else.
            b) excludes space exploration and instead focuses our time and resources on increasing our survival time on Earth until something miraculously saves us from our inevitable doom by an asteroid, the Sun, or something else. Maybe some alien civilization rescues us, etc.

            Which is a better bet? If some aliens would rescue us in b), they might still do it if we do a); whereas if there's no such rescue, b) would be a dead end.

            Getting too blindly occupied with the other choices is like spending too much time and resources deciding/arguing/fighting over chocolate versus vanilla ice cream while in a burning building. Yes, you could enjoy the ice cream and then get burnt to death, and that's arguably better than dying without enjoying ice cream at all; but if you get out of the burning building, or shield yourself somehow until someone rescues you, then you may get to enjoy ice cream for longer and maybe discover/create even more cool stuff...

            Not saying "ice cream" isn't important... But say we use up our fossil fuels and don't succeed in becoming a space faring species and go extinct on earth. How many hundreds of millions of years will it take for conditions to be right and for the next species to have a chance of being space faring? How long would it take for there to be equivalent of coal seams and petroleum deposits for their industrial age? Our accumulated plastic trash might be part of their "coal seams". And if they too fail, there aren't that many tries left (assuming each try takes a few hundred million years)- the Sun is ticking away...

            • (Score: 0) by Anonymous Coward on Saturday June 23 2018, @10:35PM (1 child)

              by Anonymous Coward on Saturday June 23 2018, @10:35PM (#697373)

              >From the perspective of the pessimistic viewpoint that assumes in the long run 100% of everything is dead then nothing really matters.

              If you had immortality and infinite undo levels IRL, then nothing really would matter because, hey, you can undo it. Since things come to an end, they matter.

              Also, mattering, i.e. being important, like being true or false, is metadata. Nothing is intrinsically important; you need an observer. The observer does not need to be eternal. That is, there is the field of "things that matter for you", which is an abstraction. Such an abstraction is unbound by time and space. Which means that your "then nothing matters" is weak, at best.

              • (Score: 0) by Anonymous Coward on Sunday June 24 2018, @06:28PM

                by Anonymous Coward on Sunday June 24 2018, @06:28PM (#697647)

                The observer does not need to be eternal. That is, there is the field of "things that matter for you" which is an abstraction.

                Then "things that matter for you" matter as much as sugar matters to an ant before it gets squashed.

  • (Score: 3, Interesting) by AthanasiusKircher on Friday June 22 2018, @02:30PM (2 children)

    by AthanasiusKircher (5291) on Friday June 22 2018, @02:30PM (#696752) Journal

    If you put it on Facebook and it responds to all queries with 'lol', then technically it's passing the Turing test.

    Well, it may be "technically" passing a test, but not any test actually advocated by Alan Turing. Turing is likely rolling in his grave over all the misappropriation of what he said, as though he would somehow endorse the idea that a chatbot pretending to be an unresponsive idiot who can't speak English properly (e.g., this [wikipedia.org]) deserves to be lauded as "passing a test" named for him.

    It's astounding to me that so many advocates of AI seem never to have read the original paper [umbc.edu] that describes Turing's argument and "test" ("Imitation Game"). No, it's not enough to just pretend to be a human, particularly an idiot. No, it's not enough for unsuspecting "judges" to blithely attempt a conversation and be convinced because the judges are also idiots.

    Turing's view that machines could demonstrate real "intelligence" drew serious criticism. His original 1950 article goes into great detail to respond to such criticisms, and thus gives an actual example of the type of dialogue he expected an "interrogator" could use to question a machine and determine whether it was demonstrating real intelligence vs. parroting nonsense. (You can find it on pp. 11-12 of the PDF with the text of the article linked above.) In Turing's actual example, the respondent is able to debate the nuances of meaning of words, recognize cultural references (and understand their relationships), even debate the possible effects of potential word substitutions in a Shakespearean sonnet.

    Now, you might argue, "Most humans couldn't even pass Turing's test in that dialog!" And you're right. Turing's original test standard was incredibly high, and he expected a machine demonstrating true "intelligence" would be able to have a conversation with him on par with his well-educated university colleagues. THAT is the original standard of the "Turing test" as advocated by its originator.

    But, to be clear, I don't think Turing really would require any "intelligent" machine (or person) to be able to relate to random references to Shakespeare and Dickens. The point wasn't to demonstrate knowledge of literature. The point was to be able to make intelligent connections showing true understanding in a discussion. The subject matter need not be rarefied topics like classic literature -- it could be talking about a movie or a football game or celebrity gossip. However, the "interrogator" should be able to drill down and press the respondent on comments to ascertain whether that respondent actually understands what it is saying.

    No chatbot I've ever encountered has been anywhere close to that standard. They are mostly designed to be a crappy imitation of ELIZA with a few more bells and whistles. They will try to get you to talk, but if you turn the tables and attempt to actually see if it knows anything or understands anything, it rapidly becomes clear that it's just a glorified ELIZA. Most of the "progress" we've made in these things in the past few decades has been increasing the "window dressing" to fool humans who are just as gullible and idiotic as the chatbots themselves (and then declaring "We've passed the Turing test!! HAHAHA!!"), instead of trying to chase after Turing's original goal.

    Simple test for any chatbot: try to make a reference to something you had discussed three or four exchanges back. Most of them have no memory capacity or understanding of the conversation they're having, so they will never respond like a human. Try to drill down on what they mean by something they say, or to explain any of their supposed "views" in detail, and they'll immediately be lost. Only a FOOL would ever say any of these things acts enough like a human to pass a Turing test.
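    To make that concrete, here is a minimal sketch of the ELIZA pattern-matching style in Python (the patterns are invented for illustration; they are not Weizenbaum's actual script). Each reply is computed from the current utterance alone, with no conversation state at all, which is exactly why the "refer back three exchanges" test works:

        import re

        # Minimal ELIZA-style responder: keyword patterns plus canned
        # replies, with no memory of earlier exchanges.
        RULES = [
            (re.compile(r"\bI think (.+)", re.I), "Why do you think {0}?"),
            (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
            (re.compile(r"\?$"), "Why do you ask?"),
        ]

        def respond(utterance):
            # The reply is a function of this utterance only; the bot
            # cannot refer back to anything said earlier.
            for pattern, template in RULES:
                match = pattern.search(utterance)
                if match:
                    return template.format(*match.groups())
            return "Please go on."

        print(respond("I think space exploration matters"))
        # -> Why do you think space exploration matters?
        print(respond("What did we agree on three exchanges ago?"))
        # -> Why do you ask?  (it has no idea)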

    Anyhow, back on topic -- if IBM can actually do what it claims here and respond to a debate argument with an intelligent rebuttal, THAT would actually be vaguely on track toward Turing's goal. (Although the bar is a LOT lower for analyzing an existing longer speech and then responding to it, vs. being able to respond point-by-point to an ongoing discussion.)

    However, I see no transcript or details of anything here. That's suspicious, because it means we can't judge the quality of the AI's rebuttal. I mean, if the thing could already summarize research on a given topic and make a pro-argument (again, a lot easier than responding to a specific set of points), then one could easily write a rather trivial algorithm to respond "politician style," like most political debates. I.e., if someone argues "we must pay attention to X, Y, and Z! That's what the other side forgets!", the AI could simply do something like: "X doesn't matter! Instead, remember my points A and B. Y doesn't matter! C and D are more critical." etc. A and B don't have to have anything to do with X to make a strong rhetorical showing in a debate. (And yes, I was on a debate team long ago, so I understand how to use "smoke and mirrors" rather than debate substance.)
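    For what it's worth, that "politician style" responder really is trivial. Here is a sketch in Python; the canned points and the pre-extracted opponent points are hypothetical stand-ins, since we have no transcript of what IBM's system actually does:

        # Dismiss each opponent point and pivot to a canned pro-argument,
        # without engaging any point on its substance. All data hypothetical.
        CANNED_POINTS = [
            "scientific discovery",
            "inspiring young people",
            "long-term economic benefits",
        ]

        def politician_rebuttal(opponent_points):
            reply = []
            for point, canned in zip(opponent_points, CANNED_POINTS):
                # The canned argument need not relate to the point it "answers".
                reply.append(point.capitalize() + " doesn't matter! "
                             "Instead, remember " + canned + ".")
            return " ".join(reply)

        print(politician_rebuttal(["earthly subsidies", "opportunity cost"]))
        # -> Earthly subsidies doesn't matter! Instead, remember scientific
        #    discovery. Opportunity cost doesn't matter! ...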

    So, maybe this is impressive. Or maybe it's more dressed-up ELIZA. (The more I think about it, the more it seems the ELIZA algorithm, slightly tweaked and coupled with a few bits of positive info, could be a pretty decent debating algorithm.) It's tough to know whether it demonstrates stronger AI, since debate isn't necessarily based on substance and understanding, let alone intelligence.

    • (Score: 1, Interesting) by Anonymous Coward on Friday June 22 2018, @07:45PM

      by Anonymous Coward on Friday June 22 2018, @07:45PM (#696913)

      Thank you!

      There was a push a while back to create some kind of Lovelace test that would be what the Turing test already was. I could write (and have written, while logged in) volumes about this, but I will consider the matter settled since the ctrl-left have moved on.

      However, it made me wonder if Lovelace's explanation of the limitations of the Analytical Engine suggested, in fact, a different test from the one Turing proposed. I am unfortunately out of my depth in this philosophical matter.

      Is the origination of ideas a necessary cognitive step for passing Turing's test? Is the origination of ideas a step beyond Turing's test?

      I think I'm putting a finger on the cognitive process of synthesis, but I am not sure.

    • (Score: 3, Informative) by HiThere on Friday June 22 2018, @11:50PM

      by HiThere (866) Subscriber Badge on Friday June 22 2018, @11:50PM (#697035) Journal

      Also, it's worth noting that the Turing test was not his idea of what is required for a program to be intelligent, but rather an upper limit beyond which any denial would clearly be sheer prejudice. I.e., any computer program which could pass the test would only be considered unintelligent by bigots who could not be convinced by any evidence.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.