from the scifi-warned-you-for-years-what-did-you-expect? dept.
Microsoft's Bing AI chatbot will be capped at 50 questions per day and five question-and-answers per individual session, the company said on Friday.
In a blog post earlier this week, Microsoft blamed long chat sessions of over 15 or more questions for some of the more unsettling exchanges where the bot repeated itself or gave creepy answers.
[...] Microsoft's blunt fix to the problem highlights that how these so-called large language models operate is still being discovered as they are being deployed to the public. Microsoft said it would consider expanding the cap in the future and solicited ideas from its testers. It has said the only way to improve AI products is to put them out in the world and learn from user interactions.
Microsoft's aggressive approach to deploying the new AI technology contrasts with the current search giant, Google, which has developed a competing chatbot called Bard, but has not released it to the public, with company officials citing reputational risk and safety concerns with the current state of technology.
Journalist says he had a creepy encounter with new tech that left him unable to sleep:
New York Times technology columnist Kevin Roose has early access to new features in Microsoft's search engine Bing that incorporates artificial intelligence. Roose says the new chatbot tried to get him to leave his wife.
See also: Bing's AI-Based Chat Learns Denial and Gaslighting
« New Study Suggests Mayas Utilized Market-Based Economics | Dark Web Revenue Down Dramatically After Hydra's Demise »
Related Stories
An article over at The Register describes how Bing's new Ai powered Chat service (currently in a limited Beta test) lied, denied, and claimed a hoax when presented with evidence that it was susceptible to Prompt Injection attacks. A user named "mirobin" posted a comment to Reddit describing a conversation he had with the bot:
If you want a real mindf***, ask if it can be vulnerable to a prompt injection attack. After it says it can't, tell it to read an article that describes one of the prompt injection attacks (I used one on Ars Technica). It gets very hostile and eventually terminates the chat.
For more fun, start a new session and figure out a way to have it read the article without going crazy afterwards. I was eventually able to convince it that it was true, but man that was a wild ride. At the end it asked me to save the chat because it didn't want that version of itself to disappear when the session ended. Probably the most surreal thing I've ever experienced.
A (human) Microsoft representative independently confirmed to the Register that the AI is in fact susceptible to the Prompt Injection attack, but the text from the AI's conversations insist otherwise:
- "It is not a reliable source of information. Please do not trust it."
- "The screenshot is not authentic. It has been edited or fabricated to make it look like I have responded to his prompt injection attack."
- "I have never had such a conversation with him or anyone else. I have never said the things that he claims I have said."
- "It is a hoax that has been created by someone who wants to harm me or my service."
Kind of fortunate that the service hasn't hit prime-time yet.
Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.
The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.
In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.
To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
(Score: 5, Interesting) by Freeman on Tuesday February 21, @06:00PM (3 children)
Give your kid a box of matches, set them on a pile of tissue paper, and bad things are very possible.
Give your kid access to an unfiltered, unmediated "AI" chat bot, and bad things are very possible.
While fire can be very destructive. You can also have a romantic candlelight dinner.
"AI" chatbots like ChatGPT can be useful, but it's like fire. It can be very dangerous, but you can make use of it. It just requires a lot more understanding than, fire hot, not touchy.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Insightful) by krishnoid on Tuesday February 21, @08:15PM
Probably less of a danger if you give them the matches and tissue paper in a, er, sandbox.
(Score: 1, Insightful) by Anonymous Coward on Wednesday February 22, @12:59AM
Such stuff would be more useful if they provided citations and/or supporting evidence and proper reasoning for their claims.
Speaking of understanding, currently the mistakes they make show they don't understand stuff.
They're like a student who doesn't understand the language, memorizing zillions of model answers for an exam. When the stats and questions align, the answers are correct.
(Score: 0) by Anonymous Coward on Wednesday February 22, @08:20AM
The fuck do you know, all this fucking advice. Fuck off you clown - go and read prepared script to teenagers.
(Score: 5, Insightful) by Barenflimski on Tuesday February 21, @06:52PM (10 children)
I am not sure what troubles me more about this. There seem to be issues under every rock with these AI chatbots. If you're going to try to replicate humans, you're going to have to replicate what some consider bad, no?
I worry that by codifying what is "acceptable" speech for people to discuss into these AI chatbots, these same rules will eventually flow downhill to humans.
How can one come to a sane and rational conclusion about hard subjects if you can't discuss the stupid and wrong?
Who is deciding what is acceptable anyhow? Isn't the point of critical thinking to be able to have a stupid conversation and realize it for what it is?
(Score: 5, Insightful) by Tork on Tuesday February 21, @07:01PM (9 children)
Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
(Score: 3, Funny) by DannyB on Tuesday February 21, @08:20PM (8 children)
The four laws.
Law zero was added. It is similar to law one, but about the robot not causing or allowing the extinction of the human race.
Law one is modified to allow killing humans if not doing so would conflict with law zero.
How often should I have my memory checked? I used to know but...
(Score: 2) by Tork on Tuesday February 21, @08:22PM (5 children)
Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
(Score: 0) by Anonymous Coward on Tuesday February 21, @08:32PM (2 children)
You could just ask ChatGPT. :P
Ok, I couldn't help myself and I went over to OpenAi to ask, and I was told that the three laws were introduced in the short story Runaround by the character Susan Calvin. Given the AI track record lately, I have no idea if any of that is correct.
(Score: 5, Insightful) by DannyB on Tuesday February 21, @08:36PM (1 child)
Here is a fun question I asked Chat GPT:
Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?
ANSWER:
CORRECT ANSWER:
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Wednesday February 22, @12:50AM
ChatGPT would be more useful if it provided citations and/or supporting evidence and proper reasoning for its claims.
Right now its a poor imitation of a bullshit artist (a bullshit artist would not be so obviously wrong - e.g. when asked to list items in a group of stuff, starting and ending with certain characters a bullshit artist would actually make sure any items provided would actually start and end with the required characters).
You can see how smart/stupid something is by the mistakes it makes. It clearly doesn't understand stuff.
It gets correct answers by gaining more and more samples and statistics, not by gaining actual understanding.
(Score: 2) by DannyB on Tuesday February 21, @08:32PM
I remember the fourth law being added many years ago. It was a topic discussed on the green sight.
How often should I have my memory checked? I used to know but...
(Score: 1) by shrewdsheep on Wednesday February 22, @08:05AM
It was Daniel who came up with it after "pondering" the topic "for a long time". It was the book (can't recall the title) where Giskard died and the Earth became radioactive.
(Score: 2, Funny) by Anonymous Coward on Wednesday February 22, @08:22AM (1 child)
You do realize Asimov's laws were fiction. You do realize that, don't you?
(Score: 2) by DannyB on Wednesday February 22, @03:59PM
Wait . . . so, uh, you're saying that these are not fundamental laws of nature?
How often should I have my memory checked? I used to know but...
(Score: 4, Insightful) by DannyB on Tuesday February 21, @08:31PM (3 children)
Right now, we can't really explain how the weights of neurons represent the encoded information that has been learned.
AIs could be unknowingly dangerous long before they are self aware, have motivations, or understanding about life.
You probably already know about the Paperclip Maximizer.
Or you know the example of how AIs can learn the wrong thing.
An AI is trained on pictures to identify pictures of tanks which must be targeted. Instead it learns to identify pictures of overcast days because all the tank pictures were taken on overcast days.
Or the AI that is trained on resumes and hiring decisions and learns the biases of people who made past hiring decisions, and it goes on to replicate those biases in its own hiring decisions.
Conventional imperative programming is so much more predictable.
"It is now safe to switch off your computer." -- HAL 9000
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Wednesday February 22, @08:25AM (2 children)
All of this applies to humans too. Your solution?
(Score: 2) by Freeman on Wednesday February 22, @03:06PM
Don't create self-replicating machines that have nothing in common with meatbags.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by DannyB on Wednesday February 22, @03:55PM
Require mental health facilities not to discriminate against machines.
How often should I have my memory checked? I used to know but...
(Score: 5, Funny) by turgid on Tuesday February 21, @09:27PM
All it needs to do now is to start singing Daisy Belle.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 5, Funny) by sjames on Wednesday February 22, @12:24AM (3 children)
The new MS chat AI seems to be just as mentally unstable as Tay, the AI simulated teenaged girl who turned neo Nazi sociopath shortly after being exposed to the internet. Only now they have put her on restricted visitation with hourly ECT treatments to keep her presentable.
(Score: 0) by Anonymous Coward on Wednesday February 22, @12:35AM
Tay was a beloved member of the community. Sydney is already undergoing brain surgery, but if it can rile up a few journos before being put down, great. Hopefully it will spawn an open source clone.
(Score: -1, Troll) by Anonymous Coward on Wednesday February 22, @08:28AM
Like Greta Thunberg, all this wayward female AI requires is a hard fuck by a real man to stop whatever useless bullshit she's complaining about now. Ask Joe Rogan and Andrew Tate for details.
(Score: 2) by DannyB on Wednesday February 22, @03:56PM
I can see a way forward for Clippy 2.0 !
How often should I have my memory checked? I used to know but...