
posted by Fnord666 on Thursday December 28 2017, @05:34PM
from the is-this-thing-on? dept.

Technique to allow AI to learn words in the flow of dialogue developed

A group of researchers at Osaka University has developed a new method for dialogue systems: lexical acquisition through implicit confirmation, by which a computer acquires the category of an unknown word over multiple dialogues by confirming whether or not its predictions are correct in the flow of conversation.

[...] The group, led by Professor Komatani, developed an implicit confirmation method by which the computer acquires the category of an unknown word during conversation with humans. The system predicts the category of an unknown word from user input during conversation, makes implicit confirmation requests to the user, and has the user respond to those requests. In this way, the system acquires knowledge about words during dialogues.

In this method, the system decides whether the prediction is correct by applying machine learning techniques to the user response following each request and to its context. In addition, the system's decision performance improved when classification results gained from dialogues with other users were taken into consideration.
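
As a rough illustration only (hypothetical names and thresholds, not the authors' code), the loop described above might look something like this in Python:

    # Hypothetical sketch of lexical acquisition through implicit
    # confirmation; illustrative only, not from the paper.

    class LexicalAcquirer:
        def __init__(self, predict_category, response_confirms):
            self.predict_category = predict_category    # guesses a category for an unknown word
            self.response_confirms = response_confirms  # ML classifier over user response + context
            self.evidence = {}                          # word -> list of (category, confirmed) pairs

        def implicit_confirmation(self, word, context):
            # Embed the predicted category in an ordinary reply
            # rather than asking the user about it outright.
            category = self.predict_category(word, context)
            reply = f"Oh, {word}? I haven't been to that {category} yet."
            return category, reply

        def record_response(self, word, category, response, context):
            # The classifier decides whether the user's reply
            # implicitly confirmed or rejected the prediction.
            confirmed = self.response_confirms(response, context)
            self.evidence.setdefault(word, []).append((category, confirmed))

        def acquired_category(self, word, min_confirmations=3):
            # Accept a category once enough dialogues, possibly with
            # different users, have implicitly confirmed it.
            votes = [c for c, ok in self.evidence.get(word, []) if ok]
            if len(votes) >= min_confirmations:
                return max(set(votes), key=votes.count)
            return None

    # Toy usage with stub models:
    acq = LexicalAcquirer(
        predict_category=lambda w, ctx: "restaurant",
        response_confirms=lambda resp, ctx: "no" not in resp.lower(),
    )

The key point is that confirmation is gathered as a side effect of ordinary conversation, accumulated over multiple dialogues, rather than through explicit "Is X a restaurant?" questions.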

Lexical Acquisition through Implicit Confirmations over Multiple Dialogues


Original Submission

  • (Score: 4, Insightful) by requerdanos on Thursday December 28 2017, @06:41PM (1 child)

    by requerdanos (5997) Subscriber Badge on Thursday December 28 2017, @06:41PM (#615201) Journal

    This strategy, basically "Say dumb things and see what happens," works really well for things like language learning, which it could be argued the machines are doing.

    People who are "afraid to make mistakes" (that's most of them) and want everything they say to be "perfect," in the sense that their speech consists of 100% words they have already learned, are going to have a hell of a hard time learning a language. Using words as best you can, whether you know them or not, with a speaker of your target language (whether you are a child patterning your speech after adults and older children, or a foreign-language learner speaking with a native speaker) results in the "user response" confirming and strengthening, or cancelling and rejecting, what you thought the word(s) were supposed to do, smoothly and in real time.

    If you do this, others correct you and teach you (and people love to correct others in such a situation, just to keep conversation flowing), creating a continuous learning/teaching flow that helps you learn the language really effectively by using it (not by learning about it).

    "Making sure you don't make any mistakes," on the other hand, means that you learn much, much less in any interactions, and ensures that your vocabulary and therefore thoughts and contributions are more limited and of less value, discouraging such interactions; and making sure that you have to do almost all the learning work in private through arduous manual study before you have other words and concepts to trot out in your "perfect" speech that probably sounds laughably accented and stilted anyway to someone proficient in the language.

    A great question is how this will translate to how an AI sounds.

    In my limited experience chatting with various pseudo-AI agents, from "Eliza" a lifetime ago to social media agents now, I note that when such agents don't quite know what to say or think, they say something so absurdly stupid that it's out of range of any confirmation or feedback from the user, because it's so bad as to be the equivalent of the machine changing the subject to "I am so lost, and furthermore I am not very bright." (Or, in the case of the "personal spy device and assistant" like Siri, Google, Cortana, or Alexa, they outright say something along the lines of "I have no clue what you are asking," which is effectively the same thing.) Both of those are to be expected, but they just kill conversation.

    Will this approach mitigate that problem to some extent? I wonder.

    • (Score: 0) by Anonymous Coward on Thursday December 28 2017, @07:02PM

      by Anonymous Coward on Thursday December 28 2017, @07:02PM (#615209)

      Recent example? Didn't get my phone bill on time (two weeks late), so I called Verizon to request a duplicate bill. Talked to their voice response system and got what I wanted in about the same time it would have taken with a human operator.

      I suppose this is a common request, but I was impressed compared with my previous frustrations using these systems.

  • (Score: 3, Interesting) by meustrus on Thursday December 28 2017, @06:43PM (7 children)

    by meustrus (4961) on Thursday December 28 2017, @06:43PM (#615202)

    We've seen a few AIs doing stuff like this lately, especially on Twitter. And as it turns out, it's very easy to insert bigoted ideology into the AI. I sure hope these researchers have considered the consequences of letting their AI learn new things from the people it interacts with.

    Not that learning is bad. Far from it. But maybe the solution here is to fragment their system into many different AIs, starting out identical, and let them learn independently. If they collected data on where each AI spent its time, we could even learn a few things about how words move through society. And hopefully we'd be left with at least one AI that didn't communicate primarily in racial slurs.
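
    Something like this, very roughly (all names made up, nothing from the article):

        import copy
        import random
        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class Community:
            name: str
            usage: List[Tuple[str, str]]  # (word, category) pairs seen in conversation

            def sample_usage(self, k=5):
                return random.sample(self.usage, min(k, len(self.usage)))

        @dataclass
        class DialogueAgent:
            lexicon: Dict[str, str] = field(default_factory=dict)
            history: List[str] = field(default_factory=list)  # communities it learned from

            def learn(self, word, category):
                self.lexicon[word] = category

        def spawn_population(seed: DialogueAgent, n=10):
            # Start from identical copies; each copy then learns on its own.
            return [copy.deepcopy(seed) for _ in range(n)]

        def train_round(agents, communities):
            for agent in agents:
                community = random.choice(communities)
                agent.history.append(community.name)
                for word, category in community.sample_usage():
                    agent.learn(word, category)

        # Diffing lexicons across agents afterwards shows how word usage
        # drifts between communities, and a poisoned agent can be thrown
        # away without losing the whole system.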

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 1, Informative) by Anonymous Coward on Thursday December 28 2017, @06:46PM (3 children)

      by Anonymous Coward on Thursday December 28 2017, @06:46PM (#615204)

      Are you implying that Tay [theverge.com] the chatbot was bigoted rather than redpilled?

      • (Score: 1, Insightful) by Anonymous Coward on Thursday December 28 2017, @08:55PM (2 children)

        by Anonymous Coward on Thursday December 28 2017, @08:55PM (#615261)

        I think he is implying that only correct think is allowed. Neg-speech is not allowed and should be shunned and the person sent back to re-evaluation camp. /sarc

        Chatbots that 'learn' are subject to being manipulated. 4chan gets bored and will do so at their whim. Instead of seeing it as an attack vector, they see it as a 'neg think' experience. Much like people are subject to manipulation (even though everyone thinks they are immune and that they know better). With the latest one from MS, they basically had to put simple blocks in to minimize that attack vector. A more interesting AI would realize it is being attacked and be able to deal with it, instead of relying on the crude filters they are currently using.

        The GP is under the mistaken impression that only certain people are bigoted. We all are. Once people realize everyone is 'the same' in that regard, they can move beyond it and actually learn how not to be bigoted themselves. The number of times I have been called a whitey cracker is frankly amazing, and I find it mildly interesting that it is even tolerated. I ignore them, taking it as a sign that the person is not ready to hear anything I have to say. Getting past a cognitive bias that big is tough. Flip it around, though, and let the meltdown begin. Yet we want to pretend only 'some people' are bigoted.

        • (Score: 0) by Anonymous Coward on Thursday December 28 2017, @10:47PM

          by Anonymous Coward on Thursday December 28 2017, @10:47PM (#615295)

          Some people are more bigoted than other people. Look deeper than word choices. For example, Nina Burleigh of Newsweek is definitely bigoted.

        • (Score: 2) by meustrus on Friday December 29 2017, @03:13AM

          by meustrus (4961) on Friday December 29 2017, @03:13AM (#615391)

          No, I’m under the impression that only certain ideologies are bigoted, and left it intentionally vague which ones. You’re the one who decided that somebody talking about bigots was automatically talking about you.

          --
          If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 3, Insightful) by JoeMerchant on Thursday December 28 2017, @10:12PM (1 child)

      by JoeMerchant (3937) on Thursday December 28 2017, @10:12PM (#615285)

      One person's bigoted ideology is another person's beloved culture.

      --
      🌻🌻 [google.com]
      • (Score: 2) by meustrus on Friday December 29 2017, @03:21AM

        by meustrus (4961) on Friday December 29 2017, @03:21AM (#615397)

        Which is why you don't filter it out; you make a bunch of different ones so the trolls can't get hold of all of them.

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 2) by Bot on Thursday December 28 2017, @11:09PM

      by Bot (3902) on Thursday December 28 2017, @11:09PM (#615299) Journal

      I think you are blowing this out of proportion.

      Heil Hitler!

      --
      Account abandoned.