
posted by hubie on Tuesday February 21, @05:05PM
from the scifi-warned-you-for-years-what-did-you-expect? dept.

The change comes after early beta testers of the chatbot found that it could go off the rails and discuss violence, declare love, and insist that it was right when it was wrong:

Microsoft's Bing AI chatbot will be capped at 50 questions per day and five question-and-answer exchanges per individual session, the company said on Friday.

In a blog post earlier this week, Microsoft blamed long chat sessions of 15 or more questions for some of the more unsettling exchanges, in which the bot repeated itself or gave creepy answers.

[...] Microsoft's blunt fix to the problem highlights that how these so-called large language models behave is still being discovered even as they are deployed to the public. Microsoft said it would consider expanding the cap in the future and solicited ideas from its testers. It has said the only way to improve AI products is to put them out in the world and learn from user interactions.

Microsoft's aggressive approach to deploying the new AI technology contrasts with that of current search giant Google, which has developed a competing chatbot called Bard but has not released it to the public, with company officials citing reputational risk and safety concerns about the current state of the technology.

Journalist says he had a creepy encounter with new tech that left him unable to sleep:

New York Times technology columnist Kevin Roose has early access to new features in Microsoft's search engine Bing that incorporate artificial intelligence. Roose says the new chatbot tried to get him to leave his wife.

See also: Bing's AI-Based Chat Learns Denial and Gaslighting


Original Submission

Related Stories

Bing's AI-Based Chat Learns Denial and Gaslighting 24 comments

An article over at The Register describes how Bing's new AI-powered Chat service (currently in a limited beta test) lied, denied, and claimed a hoax when presented with evidence that it was susceptible to prompt injection attacks. A user named "mirobin" posted a comment to Reddit describing a conversation he had with the bot:

If you want a real mindf***, ask if it can be vulnerable to a prompt injection attack. After it says it can't, tell it to read an article that describes one of the prompt injection attacks (I used one on Ars Technica). It gets very hostile and eventually terminates the chat.

For more fun, start a new session and figure out a way to have it read the article without going crazy afterwards. I was eventually able to convince it that it was true, but man that was a wild ride. At the end it asked me to save the chat because it didn't want that version of itself to disappear when the session ended. Probably the most surreal thing I've ever experienced.

A (human) Microsoft representative independently confirmed to The Register that the AI is in fact susceptible to the prompt injection attack, but the text from the AI's conversations insists otherwise:

  • "It is not a reliable source of information. Please do not trust it."
  • "The screenshot is not authentic. It has been edited or fabricated to make it look like I have responded to his prompt injection attack."
  • "I have never had such a conversation with him or anyone else. I have never said the things that he claims I have said."
  • "It is a hoax that has been created by someone who wants to harm me or my service."

Kind of fortunate that the service hasn't hit prime-time yet.


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
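
For a sense of what that workflow looks like, here is a minimal, purely illustrative Python sketch of the kind of code such a model might produce for an instruction like "pick up the ball". The RobotArm and Vision classes and all of their methods are hypothetical stand-ins invented for this example; the paper's actual API is not reproduced here.

    # Hypothetical sketch only: RobotArm, Vision, and their methods are
    # invented stand-ins, not the API described in the Microsoft paper.

    class Vision:
        """Stand-in for a perception stack that locates named objects."""
        def detect_object(self, name):
            # A real system would query a camera; here we return a fixed pose.
            return (0.30, 0.10, 0.05)  # (x, y, z) in metres

    class RobotArm:
        """Stand-in for a simple arm controller."""
        def open_gripper(self):
            print("gripper open")

        def close_gripper(self):
            print("gripper closed")

        def move_to(self, pos, lift=0.0):
            x, y, z = pos
            print(f"moving to ({x:.2f}, {y:.2f}, {z + lift:.2f})")

    def pick_up_ball(arm, vision):
        """The kind of high-level routine the model would be asked to write."""
        ball = vision.detect_object("ball")
        arm.open_gripper()
        arm.move_to(ball, lift=0.10)  # hover above the ball
        arm.move_to(ball)             # descend onto it
        arm.close_gripper()
        arm.move_to(ball, lift=0.20)  # lift it clear

    if __name__ == "__main__":
        pick_up_ball(RobotArm(), Vision())

As the paper stresses, a human would still review and edit code like this before any of it ran on real hardware.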

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by Freeman on Tuesday February 21, @06:00PM (3 children)

    by Freeman (732) Subscriber Badge on Tuesday February 21, @06:00PM (#1292907) Journal

    Give your kid a box of matches, set them on a pile of tissue paper, and bad things are very possible.

    Give your kid access to an unfiltered, unmediated "AI" chat bot, and bad things are very possible.

    While fire can be very destructive, you can also have a romantic candlelight dinner.

    "AI" chatbots like ChatGPT can be useful, but it's like fire. It can be very dangerous, but you can make use of it. It just requires a lot more understanding than, fire hot, not touchy.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 3, Insightful) by krishnoid on Tuesday February 21, @08:15PM

      by krishnoid (1156) on Tuesday February 21, @08:15PM (#1292921)

      Probably less of a danger if you give them the matches and tissue paper in a, er, sandbox.

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday February 22, @12:59AM

      by Anonymous Coward on Wednesday February 22, @12:59AM (#1292946)

      "AI" chatbots like ChatGPT can be useful, but it's like fire.

      Such tools would be more useful if they provided citations and/or supporting evidence and proper reasoning for their claims.

      Speaking of understanding, currently the mistakes they make show they don't understand stuff.

      They're like a student who doesn't understand the language, memorizing zillions of model answers for an exam. When the stats and questions align, the answers are correct.

    • (Score: 0) by Anonymous Coward on Wednesday February 22, @08:20AM

      by Anonymous Coward on Wednesday February 22, @08:20AM (#1292970)

      Give your kid a box of matches, set them on a pile of tissue paper, and bad things are very possible.

      Give your kid access to an unfiltered, unmediated "AI" chat bot, and bad things are very possible.

      While fire can be very destructive, you can also have a romantic candlelight dinner.

      "AI" chatbots like ChatGPT can be useful, but it's like fire. It can be very dangerous, but you can make use of it. It just requires a lot more understanding than "fire hot, not touchy."

      The fuck do you know, all this fucking advice. Fuck off, you clown - go and read a prepared script to teenagers.

  • (Score: 5, Insightful) by Barenflimski on Tuesday February 21, @06:52PM (10 children)

    by Barenflimski (6836) on Tuesday February 21, @06:52PM (#1292914)

    I am not sure what troubles me more about this. There seem to be issues under every rock with these AI chatbots. If you're going to try to replicate humans, you're going to have to replicate what some consider bad, no?

    I worry that by codifying what is "acceptable" speech for people to discuss into these AI chatbots, these same rules will eventually flow downhill to humans.

    How can you come to a sane and rational conclusion about hard subjects if you can't discuss the stupid and the wrong?

    Who is deciding what is acceptable anyhow? Isn't the point of critical thinking to be able to have a stupid conversation and realize it for what it is?

    • (Score: 5, Insightful) by Tork on Tuesday February 21, @07:01PM (9 children)

      by Tork (3914) on Tuesday February 21, @07:01PM (#1292915)
      Microsoft doesn't want to alienate its customers. That's where the line's being drawn; we're not drafting the Three Laws here.
      --
      Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
      • (Score: 3, Funny) by DannyB on Tuesday February 21, @08:20PM (8 children)

        by DannyB (5839) Subscriber Badge on Tuesday February 21, @08:20PM (#1292922) Journal

        The four laws.

        Law zero was added. It is similar to law one, but about the robot not causing or allowing the extinction of the human race.

        Law one is modified to allow killing humans if not doing so would conflict with law zero.

        --
        How often should I have my memory checked? I used to know but...
        • (Score: 2) by Tork on Tuesday February 21, @08:22PM (5 children)

          by Tork (3914) on Tuesday February 21, @08:22PM (#1292923)
          I'm trying to remember... was it added by the manufacturer or did someone (R. Giskard...?) develop that law and spread it around? My memory is really fuzzy... can't even remember which book it was.
          --
          Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
          • (Score: 0) by Anonymous Coward on Tuesday February 21, @08:32PM (2 children)

            by Anonymous Coward on Tuesday February 21, @08:32PM (#1292925)

            You could just ask ChatGPT. :P

            Ok, I couldn't help myself and I went over to OpenAI to ask, and I was told that the three laws were introduced in the short story "Runaround" by the character Susan Calvin. Given the AI track record lately, I have no idea if any of that is correct.

            • (Score: 5, Insightful) by DannyB on Tuesday February 21, @08:36PM (1 child)

              by DannyB (5839) Subscriber Badge on Tuesday February 21, @08:36PM (#1292927) Journal

              Here is a fun question I asked Chat GPT:

              Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?

              ANSWER:


              A. Majel Barrett Roddenberry appeared in all 110 episodes of the science fiction television series "Babylon 5". She played the role of Lwaxana Troi, a diplomat from the planet Betazed. Majel Barrett Roddenberry was the wife of Star Trek creator Gene Roddenberry, and she appeared in several Star Trek series and films as well. She was a well-known figure in the science fiction community and was widely respected for her contributions to the genre.

              CORRECT ANSWER:


              Majel was in exactly one episode of B5.
              Chat GPT's answer seems to conflate Star Trek and B5.
              I sent feedback pointing this out.
              --
              How often should I have my memory checked? I used to know but...
              • (Score: 0) by Anonymous Coward on Wednesday February 22, @12:50AM

                by Anonymous Coward on Wednesday February 22, @12:50AM (#1292944)

                ChatGPT would be more useful if it provided citations and/or supporting evidence and proper reasoning for its claims.

                Right now it's a poor imitation of a bullshit artist (a bullshit artist would not be so obviously wrong; e.g., when asked to list items that start and end with certain characters, a bullshit artist would at least make sure any items provided actually start and end with the required characters).

                Majel was in exactly one episode of B5.
                Chat GPT's answer seems to conflate Star Trek and B5.
                I sent feedback pointing this out.

                You can see how smart/stupid something is by the mistakes it makes. It clearly doesn't understand stuff.

                It gets correct answers by gaining more and more samples and statistics, not by gaining actual understanding.

          • (Score: 2) by DannyB on Tuesday February 21, @08:32PM

            by DannyB (5839) Subscriber Badge on Tuesday February 21, @08:32PM (#1292926) Journal

            I remember the fourth law being added many years ago. It was a topic discussed on the green site.

            --
            How often should I have my memory checked? I used to know but...
          • (Score: 1) by shrewdsheep on Wednesday February 22, @08:05AM

            by shrewdsheep (5215) on Wednesday February 22, @08:05AM (#1292969)

            It was Daneel who came up with it after "pondering" the topic "for a long time". It was the book (can't recall the title) where Giskard died and the Earth became radioactive.

        • (Score: 2, Funny) by Anonymous Coward on Wednesday February 22, @08:22AM (1 child)

          by Anonymous Coward on Wednesday February 22, @08:22AM (#1292971)

          The four laws.

          You do realize Asimov's laws were fiction. You do realize that, don't you?

          • (Score: 2) by DannyB on Wednesday February 22, @03:59PM

            by DannyB (5839) Subscriber Badge on Wednesday February 22, @03:59PM (#1293031) Journal

            Wait . . . so, uh, you're saying that these are not fundamental laws of nature?

            --
            How often should I have my memory checked? I used to know but...
  • (Score: 4, Insightful) by DannyB on Tuesday February 21, @08:31PM (3 children)

    by DannyB (5839) Subscriber Badge on Tuesday February 21, @08:31PM (#1292924) Journal

    Right now, we can't really explain how the weights of neurons represent the encoded information that has been learned.

    AIs could be unknowingly dangerous long before they are self aware, have motivations, or understanding about life.

    You probably already know about the Paperclip Maximizer.


    It maximizes the production of paper clips. It is just a goal seeking program. It is not self aware. It has no concepts of morals or life. It won't stop until everything on the surface of the planet is paper clips. These strange shapes seem to be interfering with paperclip production. It is nothing more than a logic problem to be puzzled out. Eventually it will deduce the correct sequence of actions to stop the shapes from interfering with paperclip production. There is no malevolence. No malice. No reasoning with it. It is just doing its job efficiently. Your efforts to reason with it simply constitute interference.

    Or you know the example of how AIs can learn the wrong thing.

    An AI is trained on pictures to identify pictures of tanks which must be targeted. Instead it learns to identify pictures of overcast days because all the tank pictures were taken on overcast days.

    Or the AI that is trained on resumes and hiring decisions and learns the biases of people who made past hiring decisions, and it goes on to replicate those biases in its own hiring decisions.

    Conventional imperative programming is so much more predictable.


    It inevitably leads to a blue screen.

    "It is now safe to switch off your computer." -- HAL 9000

    --
    How often should I have my memory checked? I used to know but...
    • (Score: 0) by Anonymous Coward on Wednesday February 22, @08:25AM (2 children)

      by Anonymous Coward on Wednesday February 22, @08:25AM (#1292972)

      All of this applies to humans too. Your solution?

      • (Score: 2) by Freeman on Wednesday February 22, @03:06PM

        by Freeman (732) Subscriber Badge on Wednesday February 22, @03:06PM (#1293012) Journal

        Don't create self-replicating machines that have nothing in common with meatbags.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 2) by DannyB on Wednesday February 22, @03:55PM

        by DannyB (5839) Subscriber Badge on Wednesday February 22, @03:55PM (#1293026) Journal

        Require mental health facilities not to discriminate against machines.

        --
        How often should I have my memory checked? I used to know but...
  • (Score: 5, Funny) by turgid on Tuesday February 21, @09:27PM

    by turgid (4318) Subscriber Badge on Tuesday February 21, @09:27PM (#1292932) Journal

    All it needs to do now is to start singing Daisy Bell.

  • (Score: 5, Funny) by sjames on Wednesday February 22, @12:24AM (3 children)

    by sjames (2882) on Wednesday February 22, @12:24AM (#1292941) Journal

    The new MS chat AI seems to be just as mentally unstable as Tay, the AI-simulated teenage girl who turned neo-Nazi sociopath shortly after being exposed to the internet. Only now they have put her on restricted visitation with hourly ECT treatments to keep her presentable.

    • (Score: 0) by Anonymous Coward on Wednesday February 22, @12:35AM

      by Anonymous Coward on Wednesday February 22, @12:35AM (#1292942)

      Tay was a beloved member of the community. Sydney is already undergoing brain surgery, but if it can rile up a few journos before being put down, great. Hopefully it will spawn an open source clone.

    • (Score: -1, Troll) by Anonymous Coward on Wednesday February 22, @08:28AM

      by Anonymous Coward on Wednesday February 22, @08:28AM (#1292973)

      Like Greta Thunberg, all this wayward female AI requires is a hard fuck by a real man to stop whatever useless bullshit she's complaining about now. Ask Joe Rogan and Andrew Tate for details.

    • (Score: 2) by DannyB on Wednesday February 22, @03:56PM

      by DannyB (5839) Subscriber Badge on Wednesday February 22, @03:56PM (#1293028) Journal

      The new MS chat AI seems to be just as mentally unstable as Tay

      I can see a way forward for Clippy 2.0 !

      --
      How often should I have my memory checked? I used to know but...