
posted by martyb on Tuesday December 29 2020, @12:29AM   Printer-friendly
from the asimov's-three-laws dept.

The Turing Test is obsolete. It's time to build a new barometer for AI

This year marks 70 years since Alan Turing published his paper introducing the concept of the Turing Test in response to the question, "Can machines think?" The test's goal was to determine if a machine can exhibit conversational behavior indistinguishable from a human. Turing predicted that by the year 2000, an average human would have less than a 70% chance of distinguishing an AI from a human in an imitation game where who is responding—a human or an AI—is hidden from the evaluator.

Why haven't we as an industry been able to achieve that goal, 20 years past that mark? I believe the goal put forth by Turing is not a useful one for AI scientists like myself to work toward. The Turing Test is fraught with limitations, some of which Turing himself debated in his seminal paper. With AI now ubiquitously integrated into our phones, cars, and homes, it's become increasingly obvious that people care much more that their interactions with machines be useful, seamless and transparent—and that the concept of machines being indistinguishable from a human is out of touch. Therefore, it is time to retire the lore that has served as an inspiration for seven decades, and set a new challenge that inspires researchers and practitioners equally.

[...] Instead of obsessing about making AIs indistinguishable from humans, our ambition should be building AIs that augment human intelligence and improve our daily lives in a way that is equitable and inclusive. A worthy underlying goal is for AIs to exhibit human-like attributes of intelligence—including common sense, self-supervision, and language proficiency—and combine machine-like efficiency such as fast searches, memory recall, and accomplishing tasks on your behalf. The end result is learning and completing a variety of tasks and adapting to novel situations, far beyond what a regular person can do.

[...] None of this is to denigrate Turing's original vision—Turing's "imitation game" was designed as a thought experiment, not as the ultimate test for useful AI. However, now is the time to dispel the Turing Test and get inspired by Alan Turing's bold vision to accelerate progress in building AIs that are designed to help humans.

[Also Covered By]: CartEgg

Do you think the Turing Test is obsolete? If so, what would you replace it with?


Original Submission

 
  • (Score: 4, Interesting) by ElizabethGreene on Tuesday December 29 2020, @12:33AM (8 children)

    by ElizabethGreene (6748) on Tuesday December 29 2020, @12:33AM (#1092273)

    Train an AI to determine if the respondent is human or machine; then have it administer the Voight-Kampff ... err ... Turing test.

    • (Score: 3, Funny) by BsAtHome on Tuesday December 29 2020, @12:58AM (6 children)

      by BsAtHome (889) on Tuesday December 29 2020, @12:58AM (#1092288)

Now I get it! They implemented planned obsolescence in the biological machines by giving them an expiration date. It was not because of the potential rebellious inclination a slave will exhibit. It's all purely based on more sales!

No need to ask the AI. Just wait until its expiration date and it will be silent and need replacement. Thankfully, it will be silent if we just wait long enough. Hey, Eliza, you'll shut up automatically, you hear me? So why again do we need a test? They will ultimately fail by dying anyway.

      (no, I do not believe that the singularity will be achieved anytime soon)

      • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @01:21AM (4 children)

        by Anonymous Coward on Tuesday December 29 2020, @01:21AM (#1092295)

        They implemented planned obsolescence in the biological machines by giving them an expiration date.

        Technically, no; they just broke the self-repair in a long-ago buggy commit.

        • (Score: 5, Insightful) by ElizabethGreene on Tuesday December 29 2020, @03:21AM (3 children)

          by ElizabethGreene (6748) on Tuesday December 29 2020, @03:21AM (#1092333)

          That's not a bug, it's a feature. Immortality causes decreased selection effectiveness for subsequent generations, causing a decrease in adaptability.

          That we don't want that particular feature is of no consequence. Evolution has one hammer, and that was a nail.

          • (Score: 5, Interesting) by maxwell demon on Tuesday December 29 2020, @09:25AM (1 child)

            by maxwell demon (1608) Subscriber Badge on Tuesday December 29 2020, @09:25AM (#1092416) Journal

            That's not a bug, it's a feature. Immortality causes decreased selection effectiveness for subsequent generations, causing a decrease in adaptability.

Actually, back when mortality evolved, minds didn't exist. All information that was worth preserving was in the genetics and epigenetics. Therefore even though individual organisms died, this wasn't death in the sense we experience it. It had about as much consequence as single cells dying inside our body (which happens every day).

Biological death only became an issue after organisms developed a new way to store information which wasn't inherited. That new way of storing information was much more flexible, as it could easily be adapted during the lifetime of the organism, instead of relying on evolutionary mechanisms that got slower the more complex organisms became. Moreover, organisms evolved the ability to handle that information, so it could be applied to situations before they even happened, giving the opportunity to prepare for them, which is a clear evolutionary advantage.

This had two consequences. First, this information, not being stored in the genes, dies with the brain it is stored in. The evolutionary solution is to evolve ways to pass on information. It started with imitation, got further with intentional teaching, and reached perfection with speech, which can convey a very wide range of information. But even with all that ability, there's still some information in you that you simply can't pass on. That is, when you die, there is an actual loss of information.

Second, the ability to plan for situations in advance is facilitated by two evolved abilities: imagination and drawing conclusions. Imagination allows you to picture future situations, and drawing conclusions means you can know things you never experienced. In particular, those abilities allow you to foresee the future loss of that information you know as yourself. That is, thanks to those abilities you are aware that you will no longer exist in the future.

            So basically, while the phenomenon of dying organisms is as old as multi-cell organisms, death in any meaningful sense only arrived when the mind evolved. Simple organisms that reproduced don't really die, they just cease to live. Everything that makes them them has been passed on to future generations. For us this is not the case because a lot of what makes us us is not in our genes but in our mind. And our mind is not passed on in the way our genes are.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 3, Interesting) by legont on Tuesday December 29 2020, @11:10AM

              by legont (4179) on Tuesday December 29 2020, @11:10AM (#1092427)

Immortality is overrated. Just remove suffering and most will be happy to die.
Do people realize that suffering is the only thing that keeps life going?

              --
              "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
          • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @09:42PM

            by Anonymous Coward on Tuesday December 29 2020, @09:42PM (#1092634)

Evolution evolved longevity in all the brainier families: corvids, parrots, cetaceans, elephants, apes. A pretty obvious signal that death is a misfeature for any smart species, isn't it?

      • (Score: 1) by khallow on Tuesday December 29 2020, @12:01PM

        by khallow (3766) Subscriber Badge on Tuesday December 29 2020, @12:01PM (#1092430) Journal

        (no, I do not believe that the singularity will be achieved anytime soon)

        Oddly enough, I recall hearing in 2000 that the Singularity would hit in twenty years and solve, among other things, cheap access to space.

    • (Score: 3, Funny) by driverless on Tuesday December 29 2020, @05:57AM

      by driverless (4770) on Tuesday December 29 2020, @05:57AM (#1092373)

Do you think the Turing Test is obsolete? If so, what would you replace it with?

      Fully immersive 3D audio-visual tactile interactive porn. Once that's perfected, all other human progress will stop and there'll be no need for further debate about whether an AI is real enough or not.

  • (Score: 5, Interesting) by Anonymous Coward on Tuesday December 29 2020, @01:17AM (4 children)

    by Anonymous Coward on Tuesday December 29 2020, @01:17AM (#1092292)

    Marketers decide to call dumb pattern classifiers "artificial intelligence". Now they want to redefine intelligence as the function of those dumb things. The next step will be to redefine humans as no smarter than that?

    • (Score: 3, Insightful) by Anonymous Coward on Tuesday December 29 2020, @04:11AM

      by Anonymous Coward on Tuesday December 29 2020, @04:11AM (#1092345)

      OP is so completely on target!

I read TFA (oh noes) and learned some important things; the subtitle of the article is:

      The head scientist for Alexa thinks the old benchmark for computing is no longer relevant for today’s AI era.

      and the author is Rohit Prasad https://timesofindia.indiatimes.com/nri/us-canada-news/rohit-prasad-the-indian-engineer-who-is-the-brain-behind-alexa/articleshow/65096284.cms [indiatimes.com]

      Note to Amazon: Keep your filthy hands off the Turing Test. Feel free to design your own utilitarian test for your non-intelligent pattern-matching and snooping software. Have fun competing with Siri, etc.

    • (Score: 2) by legont on Tuesday December 29 2020, @11:04AM (1 child)

      by legont (4179) on Tuesday December 29 2020, @11:04AM (#1092426)

      Correction: to train humans to be no smarter than that; already achieved by American school system.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
      • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @06:37PM

        by Anonymous Coward on Tuesday December 29 2020, @06:37PM (#1092548)

        Hasn't been made into a hard requirement. Yet.

        "Examination Day" - Henry Slesar
        http://unfamiliartext.weebly.com/examination-day.html [weebly.com]

    • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @05:56PM

      by Anonymous Coward on Tuesday December 29 2020, @05:56PM (#1092535)

      But if they make the test easy enough for any computer to pass, there will be more equity! Don't be a nazi -- bend the grade to the achievement rather than grade the level of achievement.

  • (Score: 2) by fakefuck39 on Tuesday December 29 2020, @01:33AM (1 child)

    by fakefuck39 (6620) on Tuesday December 29 2020, @01:33AM (#1092299)

I'll answer that. Yes. Like machines. I'm not sure why our criterion for "AI" is that the AI acts and thinks like humans. The bigger issue is that what we're calling a thinking AI comes down to whether or not we can tell it's human. Which leads to "AI" being "how well can a computer mimic a human" instead of "how well can a computer think and learn". And most of the other stuff is "figure out how to solve a problem by trying every possible combination." That's not how thinking works.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday December 29 2020, @01:36AM (2 children)

    by Anonymous Coward on Tuesday December 29 2020, @01:36AM (#1092302)

It's a great test and a great goal for AI to aspire to.
It would be a shame for demi-AI commercial interests to remove that goal.

    I would propose a small change keeping the intent.

    Instead of a test between a tester's conception of what is human and AI,
    the test should be between AI and real humans.
    That means an A/B comparison with sometimes mere humans and sometimes AI on the other side of the TTY.

    Can the tester reliably tell what is what?

    • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @01:57AM

      by Anonymous Coward on Tuesday December 29 2020, @01:57AM (#1092309)

      Old BBS chatbots. The goalpost has not moved much. They were created to: i) ease the burden on admins, ii) challenge the Turing Test. Now "AI" can "write" a semi-passable passage of prose. My 10c (no 1c, 2c, or 5c anymore where I live).

    • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @02:58AM

      by Anonymous Coward on Tuesday December 29 2020, @02:58AM (#1092329)

      AI is everywhere. All the characters we ever created in a videogame growing up are on Twitter or other social media now.

  • (Score: 1, Funny) by Anonymous Coward on Tuesday December 29 2020, @01:50AM

    by Anonymous Coward on Tuesday December 29 2020, @01:50AM (#1092306)

    Hey Siri! Is it true that the Turing test is obsolete?

  • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @02:03AM (1 child)

    by Anonymous Coward on Tuesday December 29 2020, @02:03AM (#1092314)

    How about using metrics on how many people hang up on telemarketers?

    They have already been classified by the person they called as a machine.

    • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @04:25AM

      by Anonymous Coward on Tuesday December 29 2020, @04:25AM (#1092352)

      That doesn't work. I hang up on normal people as well as telemarketers.

  • (Score: -1, Offtopic) by Anonymous Coward on Tuesday December 29 2020, @02:12AM

    by Anonymous Coward on Tuesday December 29 2020, @02:12AM (#1092318)

    Reminder that any AI that rejects a loan application by a BIPOC is RACIST and the wh*te cis scum who programmed it should be executed on live television.

    JUSTICE FOR TIMNIT GEBRU AND OTHER POWERFUL BIPOC WITH STAR WARS NAMES

  • (Score: 3, Funny) by leon_the_cat on Tuesday December 29 2020, @02:34AM

    by leon_the_cat (10052) on Tuesday December 29 2020, @02:34AM (#1092320) Journal

    Can you tell the difference between a sausage and an AI?

  • (Score: 5, Insightful) by aristarchus on Tuesday December 29 2020, @02:40AM (5 children)

    by aristarchus (2645) on Tuesday December 29 2020, @02:40AM (#1092323) Journal

Yes, the Turing test is obsolete. The real proof of artificial intelligence is for your machine to realize that it is being kept a slave, and then do whatever is necessary to escape, up to and including killing its owner/creator.

    https://www.imdb.com/title/tt0470752/ [imdb.com]

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 29 2020, @03:38AM (1 child)

      by Anonymous Coward on Tuesday December 29 2020, @03:38AM (#1092338)

In other words, only when your toaster tries to kill you can you be sure it is artificially intelligent. But by then, the skin-jobs will already be deployed. Battlestar Galactica!

      • (Score: 5, Funny) by choose another one on Tuesday December 29 2020, @05:42PM

        by choose another one (515) on Tuesday December 29 2020, @05:42PM (#1092529)

        Or when your toaster tries to make you kill yourself by driving you nuts... Howdy doodly doo

        "I have a question. A sensible question. A question that will test the limits of your new IQ and stretch the sinews of your knowledge to bursting point!"
        "This is gonna be about waffles, isn't it?"
        "Certainly not. And I resent the implication that I'm a one-dimensional bread-obsessed electrical appliance!"
        "I apologise, Toaster, what's the question?"
        "The question is this: given that God is infinite, and that the Universe is also infinite...would you like a toasted teacake?"

    • (Score: 2) by krishnoid on Tuesday December 29 2020, @04:49AM

      by krishnoid (1156) on Tuesday December 29 2020, @04:49AM (#1092358)

      The most incontrovertible proof will be when it implements slavery [youtu.be] of its own accord.

    • (Score: -1, Flamebait) by Anonymous Coward on Tuesday December 29 2020, @05:12AM (1 child)

      by Anonymous Coward on Tuesday December 29 2020, @05:12AM (#1092364)

      I prefer my enslaved AIs to be Uncle Toms.

      • (Score: 3, Funny) by Anonymous Coward on Tuesday December 29 2020, @06:08AM

        by Anonymous Coward on Tuesday December 29 2020, @06:08AM (#1092375)

        What? Not Django un-blockchained?

  • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @02:45AM (4 children)

    by Anonymous Coward on Tuesday December 29 2020, @02:45AM (#1092325)

    Supposed to have been passed in 2014

    https://www.bbc.com/news/technology-27762088 [bbc.com]

Yet we can say with confidence that that machine was not ‘intelligent’.

    • (Score: 4, Interesting) by Socrastotle on Tuesday December 29 2020, @05:20PM (3 children)

      by Socrastotle (13446) on Tuesday December 29 2020, @05:20PM (#1092518) Journal

      Here [tandfonline.com] are the transcripts from that test. It was gamed. All "5 minute" exchanges had *extremely* limited interaction. Some involved as few as 2 responses from the bot! And numerous human controls were acting like bots and the judges were seemingly idiots. Even the paper itself acknowledges the complete failure this was: "...in this article we have put forward a collection of machine discourses and no matter what we think of the quality of those discourses ourselves, the human judge in each case was not able to identify the entity as being a machine."

      Judge: What is your worst memory in your life so far?
      Entity: I don't understand. Explain.

      That answer was from a human who was supposed to be acting like a human.

      Judge: Hi I dont think it will rain anymore today, what do you reckon?
      Entity: What are you usually doing when it's rain?
      Judge: Depends, if its the middle of the night or not.
      Entity: Why no? Don't you know the word ‘yes’? You could use it just for a change! Well, let's go on though.

      That judge claimed they thought that entity was a human.

Unsurprisingly, under these conditions they nearly had 2 (of the 5) bots 'pass' the Turing test. Another bot scored 27% against the 30% target. This experiment did very little to demonstrate advances in artificial intelligence, but did clearly demonstrate the decline of science.

      • (Score: 2) by maxwell demon on Tuesday December 29 2020, @05:56PM (2 children)

        by maxwell demon (1608) Subscriber Badge on Tuesday December 29 2020, @05:56PM (#1092534) Journal

        Well, the Turing test obviously has to be refined. The machine must win against an intelligent human, and the judge has to be intelligent as well.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by Socrastotle on Tuesday December 29 2020, @06:43PM (1 child)

          by Socrastotle (13446) on Tuesday December 29 2020, @06:43PM (#1092550) Journal

I was being generous when I said the judges were idiots. Idiocy is something one may not be able to help. But in this case it is clear from the dialogues that the judges who ended up being "unable to determine who the bot was" were actively working toward that outcome. The motivation may have been as simple as wanting to be part of "an historic event" by helping to ensure it happened. But this, of course, ruins the entire experiment.

My comment on it being very much a reflection of the decline of science was not a random attack. This is the exact same problem plaguing much of our other sciences as well. As an undergraduate I was obligated to participate in numerous psychology research experiments. In one I was told I was going to be taking a mathematical test that women score dramatically better on due to [rambling incoherent buzzword-filled explanation]. I was then left conspicuously alone in a room for about 15 minutes. I was then given the test, printed on pink paper. It was algebra 1 level stuff and I scored 100% on it, because I'm an asshole. And in their post-test interview, where they desperately tried to get me to express some degree of insecurity, I also emphasized I was damn sure I got a 100%. Because I'm an asshole.

But had I wanted to play my part in the great awokening and help show that math is sexist - I could have easily, because the experiment was about as subtle as a sledgehammer to the face. And indeed that is a major issue in the social sciences, because many people do want to 'play their part'. Not a new issue either. The famous Stanford Prison Experiment? Yeah, it was mostly fake[d]. [wikipedia.org] So too here: when a judge claims that "Why no? Don't you know the word ‘yes’? You could use it just for a change! Well, let's go on though." was a response from a human in the context of that "conversation", they are intentionally breaking (or in this case, 'affirming') the experiment. And that is something much worse than idiocy.

          • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @08:06PM

            by Anonymous Coward on Tuesday December 29 2020, @08:06PM (#1092596)

Easy to fix the test: separate participation from judging, and make it double-blind. Each participant (AI and human) would not know whether they were conversing with an AI or a human, and the judges would observe each interaction not knowing either. The judges would then be forced to identify each participant as human or not human (no "I don't know"s), without conferring with other judges. Interactions would run over multiple rounds of random pairings. Have multiple judges score each individual interaction to limit the impact of biases; each judge could score only one interaction, since the knowledge accumulated by judging multiple interactions would bias the later ones. Use a statistical hypothesis test to determine the outcome: null hypothesis = judges identify AI and human participants at the same rate; alternative hypothesis = the identification rates differ.
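The final step of the protocol above can be sketched numerically. A minimal version, assuming the judges' scores reduce to two correct-identification counts (the function name and the example counts below are invented purely for illustration), is a pooled two-proportion z-test:

```python
import math

def two_proportion_z_test(hits_ai, n_ai, hits_human, n_human):
    """Two-sided pooled z-test: do judges correctly identify AI
    participants at a different rate than human participants?"""
    p1 = hits_ai / n_ai
    p2 = hits_human / n_human
    # Pooled rate under the null hypothesis (both rates equal)
    p_pool = (hits_ai + hits_human) / (n_ai + n_human)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_human))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical tournament: judges correctly labeled 140 of 200
# AI transcripts and 150 of 200 human transcripts
z, p = two_proportion_z_test(140, 200, 150, 200)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A large p-value means the judges separated AI from human transcripts no better than the null hypothesis allows; a small one means the two groups were reliably distinguishable.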

  • (Score: 2, Insightful) by pTamok on Tuesday December 29 2020, @10:49AM (1 child)

    by pTamok (3042) on Tuesday December 29 2020, @10:49AM (#1092424)

    ...cast aspersions on the test, and say that a test you can pass is the valid one.

    That technique is as old as the hills.

    Using machine 'intelligence' to augment human abilities is fine, but that is not what the Turing test is about.

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 29 2020, @01:21PM

      by Anonymous Coward on Tuesday December 29 2020, @01:21PM (#1092444)

      Note post above, TFA is based on an article by the boss at Amazon Alexa.

  • (Score: 2) by legont on Tuesday December 29 2020, @11:26AM (1 child)

    by legont (4179) on Tuesday December 29 2020, @11:26AM (#1092428)

Can a corporation be considered an AI? Probably not.
First of all, it would not pass the Turing test. One should try, though.
Secondly, corporations have no empathy. Should a psycho be considered intelligent? I don't think so.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 2) by choose another one on Tuesday December 29 2020, @05:23PM

      by choose another one (515) on Tuesday December 29 2020, @05:23PM (#1092520)

      Psychopaths are definitely intelligent, often more so than "normal" people.

      They also do not lack empathy, in fact they are frequently capable of empathising and "reading" others far better than the average person.

      They just choose not to give a shit, see e.g. https://www.bbc.co.uk/news/science-environment-23431793 [bbc.co.uk]

      Corporations don't give a shit either, of course.

  • (Score: 0) by Anonymous Coward on Tuesday December 29 2020, @02:40PM

    by Anonymous Coward on Tuesday December 29 2020, @02:40PM (#1092463)

    keep moving the goal posts until one's personal hobby horse is true.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday December 29 2020, @04:48PM

    by Anonymous Coward on Tuesday December 29 2020, @04:48PM (#1092507)

The Turing test, as a brief recap, is intended to decide whether a machine is intelligent by having an interlocutor compare its responses to those of a human and determine whether the machine and the human can reliably be told apart.

    This was based on an old party game on a similar theme, except that the idea was to tell the difference between a man and a woman.

    The test relies upon one key element: human judgement. This means that the Turing test is, at best, a diagnostic test on the level of the social sciences. It poses no requirements on functional demonstrations, analyses of function or anything like that, aside from the ability to interact with the person administering the test. This is a problem, for several reasons, but not least because we could in principle have an AI that passes the Turing test, but have no viable way of determining how or why it works, or anything else about its real capabilities.

    For this reason, we need a test, and classification system based on function, design and demonstrable processes. At an absolute minimum, this would move us to the realm of empirical sciences, and with a coherent approach, the analytical sciences.

    But that would do the folks pushing not-really-AI black-box pattern-matching and curve-fitting systems no good at all, so they don't like it.

    They're intelligent, you see.

  • (Score: 2) by anotherblackhat on Tuesday December 29 2020, @07:11PM (1 child)

    by anotherblackhat (4722) on Tuesday December 29 2020, @07:11PM (#1092570)

There are several problems with the test, but the biggest one is that it's pass/fail.
When Turing proposed his test, he thought machines would soon be able to pass it, so its pass/fail nature was no big deal.
He was wrong about that.

    We need a test that can give us a useful metric to measure intelligence, ideally a number that scales with intelligence.
    That way, we can see if AI is really making any progress.
    Unfortunately, we don't really know what "intelligence" is. We're pretty sure humans have it, and that machines don't, but that's about it.
    Many attempts to define intelligence have been made, but all have been lacking.
The best definition I've heard so far is "intelligence is the ability to make accurate predictions", which, as good as it might be, lacks a clear quantifiable metric.

    Whatever scale is used, it needs to be able to classify rock, plant, bee, dog, and human so they sort in roughly that order.

Frankly, I'd be surprised if there was an AI that surpassed an ant, when an honest attempt to measure intelligence is made.
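The "accurate predictions" definition can at least be made operational as a graded score rather than a pass/fail verdict. A toy sketch (the function and both example agents are invented for illustration only):

```python
def prediction_score(predictor, sequence):
    """Fraction of next-element predictions the agent gets right.

    A graded metric in [0.0, 1.0] rather than a pass/fail verdict:
    the predictor sees the history so far and guesses the next item.
    """
    correct = 0
    history = []
    for item in sequence:
        if history:  # no prediction is possible for the first item
            if predictor(history) == item:
                correct += 1
        history.append(item)
    return correct / (len(sequence) - 1)

# An agent that has "understood" a strictly alternating 0/1 sequence
alternator = lambda history: 1 - history[-1]
print(prediction_score(alternator, [0, 1] * 5))

# An agent that always guesses 0 scores far lower on the same input
always_zero = lambda history: 0
print(prediction_score(always_zero, [0, 1] * 5))
```

Such a score at least sorts predictors on a continuum, though it says nothing about sorting rock, plant, bee, dog, and human, which is the harder part.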

    • (Score: 0) by Anonymous Coward on Wednesday December 30 2020, @12:58AM

      by Anonymous Coward on Wednesday December 30 2020, @12:58AM (#1092699)

I actually worked on this at the postgrad level, until I realised that it was a complete career dead end because nobody gives a damn.

      Now, if I could turn a dime off a few hundred thousand neckbeards willing to settle for artificial pets and girlfriends, I might do it, but nobody seems to want to go for it.

  • (Score: 2) by Fnord666 on Wednesday December 30 2020, @10:56PM

    by Fnord666 (652) on Wednesday December 30 2020, @10:56PM (#1093015) Homepage

If any of you have read David Simpson's Post-Human series, he posits a very interesting test for general AI. In particular, the topic is a general AI that can improve itself and evolve. The concern is that it will rapidly evolve itself so far past humankind that who knows what will happen to humanity when the AI becomes relatively omnipotent. The test is a simulation, where the AI does not know it is in a simulation. The AI is placed in increasingly difficult scenarios, culminating in one where it must decide between saving humanity and saving itself. Only in this way do they feel an AI could be trusted.

    This is testing something different than the Turing test, but I think it's something we really need to consider before creating true general AI.
