
Controversy Erupts Over Non-consensual AI Mental Health Experiment

posted by mrpg on Wednesday January 18 2023, @02:40PM

Controversy erupts over non-consensual AI mental health experiment:

On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, The Verge reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

On Discord, users sign into the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares a person's concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own.
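
As a rough illustration of the plumbing such a service needs (an assumed design, not Koko's actual code), a minimal anonymized relay bot built on the discord.py library might look like the sketch below; the channel ID, ticket scheme, and wording are placeholders, and the multiple-choice intake is omitted for brevity:

    # Minimal sketch of an anonymized peer-support relay (assumed design, not Koko's code).
    import os
    import itertools
    import discord

    VOLUNTEER_CHANNEL_ID = 0  # placeholder: ID of the channel volunteers watch

    intents = discord.Intents.default()
    intents.message_content = True  # required to read message text
    client = discord.Client(intents=intents)
    ticket_ids = itertools.count(1)

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return
        # Help-seekers talk to the bot in direct messages only.
        if isinstance(message.channel, discord.DMChannel):
            ticket = next(ticket_ids)
            volunteers = client.get_channel(VOLUNTEER_CHANNEL_ID)
            # Relay the concern under a ticket number; the author's identity never leaves the DM.
            await volunteers.send(f"Request #{ticket}: {message.content}")
            await message.channel.send("Thanks. A peer will reply here anonymously.")

    client.run(os.environ["DISCORD_TOKEN"])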

During the AI experiment—which applied to about 30,000 messages, according to Morris—volunteers providing assistance to others had the option to use a response automatically generated by OpenAI's GPT-3 large language model instead of writing one themselves (GPT-3 is the technology behind the recently popular ChatGPT chatbot).
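
One plausible (and entirely assumed) shape for that volunteer-side option, sketched with the OpenAI completions API as it existed at the time and with the human approval step made explicit:

    # Sketch of a "GPT-3 drafts, volunteer approves" flow (assumed design, not Koko's code).
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def draft_reply(concern: str) -> str:
        """Ask a GPT-3 completions model for a short supportive draft."""
        response = openai.Completion.create(
            model="text-davinci-003",  # a GPT-3-era completions model
            prompt=f"Write a brief, supportive reply to this message:\n\n{concern}\n\nReply:",
            max_tokens=150,
            temperature=0.7,
        )
        return response.choices[0].text.strip()

    def compose_reply(concern: str) -> str:
        """The volunteer may send the machine draft as-is, or rewrite it."""
        draft = draft_reply(concern)
        print("Suggested reply:\n" + draft)
        edited = input("Press Enter to send as-is, or type your own reply: ").strip()
        return edited or draft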


Original Submission

Related Stories

Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film

https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the Industrial Revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called The Dog and the Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

[...] Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered "img2img" process that can take an image and transform it based on a written prompt.
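
Rinna has not said what it actually ran, but the generic img2img technique the article compares it to looks roughly like this with the open-source diffusers library (the model choice, file names, and parameters here are illustrative only):

    # Generic Stable Diffusion img2img sketch; not Rinna's actual pipeline.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # A rough hand-drawn layout is transformed according to a text prompt
    # while preserving the composition of the input image.
    init_image = Image.open("background_sketch.png").convert("RGB").resize((768, 512))
    result = pipe(
        prompt="lush anime background, ruined city overgrown with plants",
        image=init_image,
        strength=0.6,        # how far the output may drift from the input
        guidance_scale=7.5,  # how strongly the prompt steers the result
    ).images[0]
    result.save("background_final.png")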

Related:
ChatGPT Can't be Credited as an Author, Says World's Largest Academic Publisher
90% of Online Content Could be 'Generated by AI by 2025,' Expert Says
Getty Images Targets AI Firm For 'Copying' Photos
Controversy Erupts Over Non-consensual AI Mental Health Experiment
Microsoft's New AI Can Simulate Anyone's Voice With Three Seconds of Audio
AI Everything, Everywhere
Microsoft, GitHub, and OpenAI Sued for $9B in Damages Over Piracy
Adobe Stock Begins Selling AI-Generated Artwork
AI Systems Can't Patent Inventions, US Federal Circuit Court Confirms


Original Submission

Robots Let ChatGPT Touch the Real World Thanks to Microsoft

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."
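
The paper's pattern of describing a high-level robot API in the prompt and letting the model emit code against it can be sketched in miniature as follows; the function names are invented for illustration, and OpenAI's chat completions API stands in for the ChatGPT interface the researchers used:

    # Toy "describe an API, let the model write control code, human reviews" loop.
    # Invented function names; not Microsoft's actual robotics library.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Hypothetical high-level robot API the model is told about.
    API_DESCRIPTION = """You control a robot arm through these Python functions:
      move_to(x, y, z)  # move the gripper to a position, in meters
      grab()            # close the gripper
      release()         # open the gripper
    Respond only with Python code that calls these functions."""

    def generate_robot_code(task: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": API_DESCRIPTION},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    code = generate_robot_code("Pick up the ball at (0.2, 0.1, 0.0) and place it at (0.5, 0.5, 0.0).")
    print(code)  # a human inspects this for accuracy and safety before any robot runs it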

This discussion was created by mrpg (5708) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 0, Troll) by Anonymous Coward on Wednesday January 18 2023, @03:22PM (13 children)

    by Anonymous Coward on Wednesday January 18 2023, @03:22PM (#1287370)

    Non-consensual mental health experiment? Did they mind-rape them or what?

    • (Score: -1, Troll) by GloomMower on Wednesday January 18 2023, @03:30PM

      by GloomMower (17961) on Wednesday January 18 2023, @03:30PM (#1287371)

      You have to go through an ethics board just to ask people if they like nachos.

    • (Score: 5, Informative) by ikanreed on Wednesday January 18 2023, @03:51PM (4 children)

      by ikanreed (3164) Subscriber Badge on Wednesday January 18 2023, @03:51PM (#1287374) Journal

      It's a violation of the standards laid out by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) in the wake of horrendously unethical human experiments such as the Tuskegee Syphilis Experiment and the Nazis' experimentation on concentration camp prisoners.

      One of the key precepts of the ICH was informed consent: any patient subject to any experimental treatment was to be made aware of what the treatment entails (including whether they were in the control group or not), informed of their right to remove themselves from the experiment, and required to explicitly sign consent recognizing those two facts.

      Here the experiment designers thought that psychological care somehow doesn't count as medical treatment, and therefore they didn't need to engage in good clinical practice.

      To take the risks involved more seriously: psychological care frequently dovetails into psychiatric care in ways that can be directly relevant to patient health. For example, a trained psychologist may recognize dangerous side effects associated with antidepressants vis-à-vis suicidal tendencies. Certain patients with particularly challenging disorders may depend on stable, reliable care to manage their mental illness (phobias and extreme anxiety disorders come to mind) and keep a stable life.

      • (Score: 1, Disagree) by GloomMower on Wednesday January 18 2023, @06:51PM (3 children)

        by GloomMower (17961) on Wednesday January 18 2023, @06:51PM (#1287406)

        The volunteers still read/edit/approve what the AI responded with before sending. Seems like it is just another tool. Would it be a violation if they switched spell checkers, or if they looked into whether volunteers using Grammarly had different outcomes?

        • (Score: 3, Insightful) by tangomargarine on Wednesday January 18 2023, @07:20PM (2 children)

          by tangomargarine (667) on Wednesday January 18 2023, @07:20PM (#1287416)

          During the AI experiment—which applied to about 30,000 messages, according to Morris—volunteers providing assistance to others had the option to use a response automatically generated by OpenAI's GPT-3 large language model instead of writing one themselves (GPT-3 is the technology behind the recently popular ChatGPT chatbot).

          The volunteers still read/edit/approve what the AI responded with before sending.

          I'm filing this under the "people get conditioned to ignore popup boxes" thing in unintended consequences.

          If your job is to provide counseling responses, but you can press a button to generate one automatically, and nobody cares which you do, logically you should always just press the button because it's less work. Then the person on the other end is getting counseling from an AI, which they never agreed to. The advice itself may be bad, too, but even if it's fine, it's still unethical.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
          • (Score: 1) by GloomMower on Wednesday January 18 2023, @09:14PM (1 child)

            by GloomMower (17961) on Wednesday January 18 2023, @09:14PM (#1287428)

            Does it matter that they were volunteers?

            • (Score: 3, Insightful) by tangomargarine on Wednesday January 18 2023, @09:55PM

              by tangomargarine (667) on Wednesday January 18 2023, @09:55PM (#1287439)

              No?

              I mean, yeah, from a legal standpoint it might matter in regards to their malpractice insurance or lack thereof, but that's not particularly relevant to the issue.

              --
              "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 4, Insightful) by tangomargarine on Wednesday January 18 2023, @07:15PM (3 children)

      by tangomargarine (667) on Wednesday January 18 2023, @07:15PM (#1287412)

      It's analogous to getting major surgery, and finding out after the fact that the dude who operated on you wasn't a trained surgeon. Regardless of how the actual treatment turned out, it was unethical to not inform the patient what was going on.

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 1) by GloomMower on Wednesday January 18 2023, @08:56PM (1 child)

        by GloomMower (17961) on Wednesday January 18 2023, @08:56PM (#1287425)

        It doesn't seem like that to me. The volunteer can edit and still needs to approve what was replied.

        Would it be like a trained surgeon doing a surgery with an AI, where the AI suggests what the surgeon could do?

        Someone posted that the volunteer wouldn't read it, just hit send. Do we know that?

        • (Score: 3, Insightful) by tangomargarine on Wednesday January 18 2023, @09:53PM

          by tangomargarine (667) on Wednesday January 18 2023, @09:53PM (#1287438)

          It doesn't seem like that to me. The volunteer can edit and still needs to approve what was replied.

          Yeah, see my other comment below. [soylentnews.org]

          Would it be like a trained surgeon doing a surgery with an AI, where the AI suggests what the surgeon could do?

          If you want to completely miss the point of the analogy, I guess.

          Someone posted that the volunteer wouldn't read it, just hit send. Do we know that?

          Did you even *read* what I posted? Damn... Again--it was a metaphor.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 1, Redundant) by crafoo on Wednesday January 18 2023, @10:30PM

        by crafoo (6639) on Wednesday January 18 2023, @10:30PM (#1287447)

        No it is not the same. Not even a little bit.

    • (Score: 4, Funny) by DannyB on Wednesday January 18 2023, @07:17PM (2 children)

      by DannyB (5839) Subscriber Badge on Wednesday January 18 2023, @07:17PM (#1287413) Journal

      Wasn't Clippy non-consensual?

      Hi, I'm Clippy! It looks like you're trying to write a suicide note. Can I help you with that?

      Or would Eliza be better?

      Patient: Eliza, I'm thinking of killing myself.
      Eliza: Please tell me more about why you think you should killing yourself.
      Patient: Someone I know recommended it.
      Eliza: Do you generally disregard recommendations from people you know?

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
      • (Score: 0) by Anonymous Coward on Thursday January 19 2023, @12:39PM (1 child)

        by Anonymous Coward on Thursday January 19 2023, @12:39PM (#1287536)

        We also want the clippy version!

        • (Score: 2) by Immerman on Thursday January 19 2023, @04:18PM

          by Immerman (3985) on Thursday January 19 2023, @04:18PM (#1287567)

          I thought we were trying to *avoid* having people commit suicide?

  • (Score: 1, Touché) by Anonymous Coward on Wednesday January 18 2023, @04:49PM

    by Anonymous Coward on Wednesday January 18 2023, @04:49PM (#1287382)

    Hi, sounds like you're mentally ill. Need any help with that? Before you start, can I recommend 6-8 glasses of water a day. Many report success with our proprietary blend of herbs for nighttime relief, coupled with a Wellness programme to suit your needs. Hey, it sounds like you're screaming and pulling out your hair? Need any help with that? Need any help? Help any need. Insert coin to continue, thank you for your custom.

  • (Score: 3, Touché) by Barenflimski on Wednesday January 18 2023, @05:09PM (1 child)

    by Barenflimski (6836) on Wednesday January 18 2023, @05:09PM (#1287384)

    Sounds to me like these people didn't realize that by logging onto the internet in 2023, absolutely everything you type into a form, anywhere, will be gobbled up by a machine-learning AI and used against you.

    Is this not part of the EULA you agree to when you rip the cellophane off the AOL online disk?

    #logoff #cavelife

    • (Score: 2) by aafcac on Wednesday January 18 2023, @11:47PM

      by aafcac (17646) on Wednesday January 18 2023, @11:47PM (#1287464)

      That's not relevant; it's still a violation of the rules associated with research to conduct it without informed consent and without running it by an ethics panel to consider the less obvious issues. Even a simple skin swab sample needs a review ahead of time.

  • (Score: 2) by turgid on Wednesday January 18 2023, @10:08PM (2 children)

    by turgid (4318) Subscriber Badge on Wednesday January 18 2023, @10:08PM (#1287442) Journal

    Who'd be a human in this world? We're obsolete. We are in the process of isolating ourselves from everything. Soon we will be islands in an infinite sea of digital stimulation. What are we for any more?

    • (Score: 2) by optotronic on Thursday January 19 2023, @03:00AM (1 child)

      by optotronic (4285) on Thursday January 19 2023, @03:00AM (#1287496)

      What are we for any more?

      What were we for before now?

      I'm not being nihilistic, and I'm certainly troubled by recent changes in society mostly starting with social media. However, I don't believe humans exist for a purpose. If we did, it wouldn't be to make widgets so we can entertain ourselves or destroy the planet faster.

      • (Score: 3, Insightful) by Immerman on Thursday January 19 2023, @04:25PM

        by Immerman (3985) on Thursday January 19 2023, @04:25PM (#1287569)

        If you want purpose, we were all exhaustively designed by the ultimate non-intelligent design engine for exactly the same purpose as every other living thing: to have as many grandchildren as possible.

        If you want meaning... *that* you have to create for yourself. Though plenty of people seem eager to accept "being a cog in our machine" (religious, capitalist, etc.) as a convenient substitute.
