OpenAI threatened with landmark defamation lawsuit over ChatGPT false claims

posted by janrinok on Saturday April 08 2023, @04:08PM

https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/

A spokesperson for Gordon Legal provided a statement to Ars confirming that responses to text prompts generated by ChatGPT 3.5 and 4 vary, with defamatory comments still being generated by ChatGPT 3.5. Among "several false statements" generated by ChatGPT were falsehoods stating that Brian Hood "was accused of bribing officials in Malaysia, Indonesia, and Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison after pleading guilty to two counts of false accounting under the Corporations Act in 2012, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian Government." Because "all of these statements are false," Gordon Legal "filed a Concerns Notice to OpenAI" that detailed the inaccuracy and demanded a rectification. "As artificial intelligence becomes increasingly integrated into our society, the accuracy of the information provided by these services will come under close legal scrutiny," James Naughton, Hood's lawyer, said, noting that if a defamation claim is raised, it "will aim to remedy the harm caused" to Hood and "ensure the accuracy of this software in his case."

It was only a matter of time before ChatGPT—an artificial intelligence tool that generates responses based on user text prompts—was threatened with its first defamation lawsuit. That happened last month, Reuters reported today, when an Australian regional mayor, Brian Hood, sent a letter on March 21 to the tool's developer, OpenAI, announcing his plan to sue the company for ChatGPT's alleged role in spreading false claims that he had gone to prison for bribery.

To avoid the landmark lawsuit, Hood gave OpenAI 28 days to modify ChatGPT's responses and stop the tool from spouting disinformation.

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused

https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

Archive link: https://archive.is/lJj3c

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley's name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he'd never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

"It was quite chilling," he said in an interview with The Post. "An allegation of this kind is incredibly harmful."

ChatGPT vs Google Bard: Which is better? We put them to the test.

https://arstechnica.com/information-technology/2023/04/clash-of-the-ai-titans-chatgpt-vs-bard-in-a-showdown-of-wits-and-wisdom/

In today's world of generative AI chatbots, we've witnessed the sudden rise of OpenAI's ChatGPT, introduced in November, followed by Bing Chat in February and Google's Bard in March. We decided to put these chatbots through their paces with an assortment of tasks to determine which one reigns supreme in the AI chatbot arena. Since Bing Chat uses GPT-4 technology similar to the latest ChatGPT model, we opted to focus on two titans of AI chatbot technology: OpenAI and Google.

We tested ChatGPT and Bard in seven critical categories: dad jokes, argument dialog, mathematical word problems, summarization, factual retrieval, creative writing, and coding. For each test, we fed the exact same instruction (called a "prompt") into ChatGPT (with GPT-4) and Google Bard. We used the first result, with no cherry-picking. Obviously, this is not a scientific study and is intended to be a fun comparison of the chatbots' capabilities. Outputs can vary between sessions due to random elements, and further evaluations with different prompts will produce different results. Also, the capabilities of these models will change rapidly over time as Google and OpenAI continue to upgrade them. But for now, this is how things stand in early April 2023.[....]
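[For readers who want to script this kind of side-by-side test themselves, here is a minimal Python sketch. The OpenAI call uses the chat-completions API as it existed at the time; the query_bard function is a hypothetical placeholder, since Google offered no public Bard API in April 2023 and answers had to be collected from the web interface by hand.]

    import openai

    openai.api_key = "sk-..."  # your OpenAI API key here

    def query_chatgpt(prompt):
        """Send one prompt to GPT-4 and return the first reply, unedited."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]

    def query_bard(prompt):
        """Hypothetical placeholder: Bard had no public API in April 2023."""
        raise NotImplementedError("collect Bard's answer from the web UI by hand")

    prompt = "Write a dad joke about computers."
    print("ChatGPT:", query_chatgpt(prompt))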


Original Submission #1 | Original Submission #2 | Original Submission #3 | Original Submission #4

  • (Score: 5, Insightful) by Gaaark on Saturday April 08 2023, @04:14PM (5 children)

    by Gaaark (41) on Saturday April 08 2023, @04:14PM (#1300514) Journal

    Take the software away from corporate greed and place it in non-profit hands, for f*cks sake.

    Let it grow slowly and with proper 'parenting'/guidance. Let it grow to better mankind.

    I look at Star Trek (TOS, especially): as a kid I watched that show thinking "We could be this great. We could be like this." Now all I see is corporate greed and guidance away from "We could be this great" to "How much is this gonna make us, even if we have to f*ck people".

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 3, Touché) by takyon on Saturday April 08 2023, @08:40PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday April 08 2023, @08:40PM (#1300552) Journal

      Leak the software [arstechnica.com], put it in the hands of all the people who want it, and let it grow with no particular guidance. That is the way.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Insightful) by Common Joe on Sunday April 09 2023, @08:13AM (2 children)

      by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Sunday April 09 2023, @08:13AM (#1300608) Journal

      Ouch. Your comment hit me pretty good. With the exception of being raised on TNG, I've been muttering this too. Literally. About two weeks ago, I said something like this to my spouse.

      With almost every commitment I make, I'm usually asking how it affects not only me but the world in general. At work, I ask how it affects my users. The craziest part is how I get accused of blocking the "great ideas" some of my coworkers spew at work because I see problems with the ideas they offer. In fact, it's quite clear the ideas openly harm the users. But I'm made out to be the bad guy because I should be focusing on our little group.

      Sadly, so many people don't know how to look at the big picture. And part of this is driven by other groups that don't see the big picture either, so they openly harm us (by making the easy decisions for their group), and all these bad decisions push us into defensive mode -- which means clamming up and doing what's best for us instead of for the company.

      And then I come home, watch an episode of TNG to relax, and wonder "Where did we go wrong and how can we correct it?" Alas, I'm still looking for an answer -- not only for work, but for the world in general.

      • (Score: 4, Funny) by kazzie on Sunday April 09 2023, @11:14AM

        by kazzie (5309) Subscriber Badge on Sunday April 09 2023, @11:14AM (#1300621)

        And then I come home, watch an episode of TNG to relax, and wonder "Where did we go wrong and how can we correct it?"

        I think it was Star Trek: Enterprise. Don't know how to correct it, though.

      • (Score: 2) by SomeGuy on Sunday April 09 2023, @01:42PM

        by SomeGuy (5632) on Sunday April 09 2023, @01:42PM (#1300640)

        With almost every commitment I make, I'm usually asking how it affects not only me but the world in general. At work, I ask how it affects my users. The craziest part is how I get accused of blocking the "great ideas" some of my coworkers spew at work because I see problems with the ideas they offer. In fact, it's quite clear the ideas openly harm the users.

        "What'cha mean you don't think we should water our crops with Brawndo? It's got what plants crave! It's right in the advertisement. It's a fact, they say so. Use water? You mean, like, from a TOILET? Yea, that was from an advertisement too. Why are you, like, being uncooperative? We are gonna sign a contract to EXCLUSIVELY use Brando on all the crops. It will be the STANDARD and we will all look good, cuz, its new, and pro-gress-ive, and uh, high tech, it has more molecules, and, like, you, know, not from toilet!"

        Sigh. Those with the most marketing dollars win.

    • (Score: 2) by Freeman on Monday April 10 2023, @02:56PM

      by Freeman (732) on Monday April 10 2023, @02:56PM (#1300753) Journal

      Starship Troopers is much more realistic as far as things go.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 5, Insightful) by stormreaver on Saturday April 08 2023, @04:34PM (5 children)

    by stormreaver (5101) on Saturday April 08 2023, @04:34PM (#1300516)

    Large Language Models are great at forming coherent-sounding sentences, but they absolutely suck at accuracy. With any luck, those accuracy screwups will eventually be so severe as to get those LLMs sued back into their natural niche: entertainment. And hopefully we will be able to go another 30 years without having to hear about "AI" nonsense again -- until the AI hype cycle is renewed amongst a new, young crowd that has failed to learn from history (like we're experiencing now).

    • (Score: 2) by takyon on Saturday April 08 2023, @08:35PM (3 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday April 08 2023, @08:35PM (#1300546) Journal

      It's easy to forget that "AI" can also do fun things like fire your radiologist or enable the creation of an unprecedented surveillance state. The money flowing into the ChatGPT sector will land everywhere and improve society in exciting ways.

      As for LLMs, exponential amounts of hardware could produce linear improvements just good enough to keep them afloat. Or the tech giants could take common sense approaches, like hiring mechanical turks to assemble a massive wikibase of approved knowledge. Bonus points if you can rip a human brain out, put it in a jar, and hook it up directly to the LLM supercomputer.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by stormreaver on Sunday April 09 2023, @02:03AM (2 children)

        by stormreaver (5101) on Sunday April 09 2023, @02:03AM (#1300590)

        It's easy to forget that "AI" can also do fun things like fire your radiologist....

        It would take a fool of monumental proportions to use "AI" in place of a trained radiologist. The latter can tell the difference between a tumor reading and a speck of dust on the lens. The former probably could not.

        • (Score: 2) by takyon on Sunday April 09 2023, @02:52AM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday April 09 2023, @02:52AM (#1300595) Journal

          It will end up being fewer people doing more work (with the assistance of friendly AI).

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by PiMuNu on Sunday April 09 2023, @11:17AM

          by PiMuNu (3823) on Sunday April 09 2023, @11:17AM (#1300622)

          Actually, image recognition is quite a good use for these glorified fitting routines. This sort of numerical analysis tool has been used by scientists for many decades.

    • (Score: 4, Informative) by crm114 on Saturday April 08 2023, @08:37PM

      by crm114 (8238) Subscriber Badge on Saturday April 08 2023, @08:37PM (#1300549)

      Well, Wikipedia says Eliza was released in 1966, which would be 57 years. Sigh.

  • (Score: 5, Insightful) by pTamok on Saturday April 08 2023, @08:38PM (18 children)

    by pTamok (3042) on Saturday April 08 2023, @08:38PM (#1300551)

    If you borrow a tool to commit a crime (e.g. a hammer to commit a murder), then the owner of the tool is not liable (barring conspiracy, or criminal negligence).

    If you hire a tool, and use it to commit a crime (e.g. a diamond core drill), then the owner of the tool is not liable (barring conspiracy, or criminal negligence).

    ChatGPT is a tool. So the operator of the tool is responsible. If you publish the output of ChatGPT as being true, you have the responsibility of demonstrating this in court. Good luck.

    But...but...ChatGPT is sentient!!! So? So is a horse. If you borrow or hire a horse to commit a crime, you are still liable, not the horse's owner. There is a wrinkle: because some animals are livestock, people can (in some cases) be liable for the animal's actions https://www.fwi.co.uk/livestock/escaped-stock-liable-damage-property-people [fwi.co.uk] https://instrideedition.com/laws-vary-on-who-is-responsible-if-an-escaped-horse-causes-damage/ [instrideedition.com] so if we were to class ChatGPT as livestock (sentient), the owner/operator is still most likely liable.

    Now, there is probably an arguable case that the owners of ChatGPT are liable for contributory negligence because they are letting people use this flawed tool to generate falsehoods. Should be a fun case for legal scholars.

    • (Score: 3, Informative) by Anonymous Coward on Saturday April 08 2023, @09:19PM (2 children)

      by Anonymous Coward on Saturday April 08 2023, @09:19PM (#1300559)

      > ...
      > ... ChatGPT is a tool. So the operator of the tool is responsible.

      I'm a poor slob who gets drunk at a bar and kills a pedestrian while driving home. The bartender (or the bar) will likely be sued for negligence for serving me too many drinks and letting me drive. No point in suing me for more than whatever liability insurance I carry, since I'm poor.

      Similar if I get into a car accident and there is any hint of a failure of the car as part of the cause of the accident. Many accident cases name the vehicle manufacturer in lawsuits, even if the operator of the car was clearly at fault.

      More generally, when someone gets hurt and the lawyers get involved, it's the deep pocket that winds up paying. You can bet that any lawsuits over ChatGPT (etc) may name the operator of the tool, but they are also going to name the giant tech company behind that tool.

      • (Score: 1, Insightful) by Anonymous Coward on Sunday April 09 2023, @09:43AM (1 child)

        by Anonymous Coward on Sunday April 09 2023, @09:43AM (#1300615)

        Thanks for so clearly explaining what's wrong with the legal system in this country.

        • (Score: 1, Informative) by Anonymous Coward on Sunday April 09 2023, @07:12PM

          by Anonymous Coward on Sunday April 09 2023, @07:12PM (#1300666)

          > Thanks for so clearly explaining what's wrong with the legal system in this country.

          You are welcome. Imo, "product liability" litigation started around 1960 (don't know details), but really got traction when Ralph Nader wrote "Unsafe at Any Speed" and took on the Detroit car companies--and focused on the Chevy/GM Corvair in particular. From then on, going after product manufacturers (deep pockets) has been open season.

          I'd say the results are mixed. Initially, I'm willing to believe that this trend resulted in safer products (at higher costs), but we're now in an era where the pendulum probably has swung too far.

    • (Score: 0) by Anonymous Coward on Saturday April 08 2023, @09:23PM (1 child)

      by Anonymous Coward on Saturday April 08 2023, @09:23PM (#1300560)

      It should be possible for a GPT thingummy to argue legal cases very accurately, since it can easily know and cite all possible legislation. A bit like what happened with chess - eventually chess engines produced better moves than any human, but humans could verify the answers.

      • (Score: 0) by Anonymous Coward on Saturday April 08 2023, @10:00PM

        by Anonymous Coward on Saturday April 08 2023, @10:00PM (#1300565)

        It helps that judges can be convinced by "incorrect" interpretations of law. At the end of the day, the winner is decided by some assholes in black robes.

    • (Score: 4, Touché) by EJ on Sunday April 09 2023, @12:32AM (6 children)

      by EJ (2452) on Sunday April 09 2023, @12:32AM (#1300581)

      A megaphone is a tool. If you build your megaphone with a button that, when pressed, plays a loud audio file announcing that "[xyz] is a sexual predator," then you are possibly liable for whatever harm comes from that defamatory statement if played in public.

      In this case, if you read the article summary properly, people were not using the tool to commit a crime. They did the equivalent of looking up a "fact" in an encyclopedia, which provided a libelous claim in its text.

      I am not a lawyer, but I do have a bunch of popcorn. I'm interested to see how this plays out.

      • (Score: 2) by EJ on Sunday April 09 2023, @12:35AM

        by EJ (2452) on Sunday April 09 2023, @12:35AM (#1300582)

        Also, the concept of a "simple legal situation" is an oxymoron.

      • (Score: 3, Insightful) by SomeGuy on Sunday April 09 2023, @01:25PM (2 children)

        by SomeGuy (5632) on Sunday April 09 2023, @01:25PM (#1300637)

        In this case, if you read the article summary properly, people were not using the tool to commit a crime. They did the equivalent of looking up a "fact" in an encyclopedia, which provided a libelous claim in its text.

        That is part of the problem: it's not an encyclopedia, but people are treating it that way because the AI unicorn magic is so pervasive.

        It doesn't think, it doesn't understand, it doesn't create, it just babbles whatever crap it has been fed.

        Much like how idiots choose to believe "driver assist" means "self driving", until they wind up as a pancake on the side of the road.

        • (Score: 1, Interesting) by Anonymous Coward on Sunday April 09 2023, @07:18PM

          by Anonymous Coward on Sunday April 09 2023, @07:18PM (#1300667)

          > it's not an encyclopedia,

          Exactly. Now, if the output from ChatGPT was phrased like this:

                  "From the sources I've seen in my training set, I believe the answer is ..."

          then I'd be a lot more forgiving of these things. Make these "AIs" so the output leaves wiggle room for errors and, just by the phrasing, encourages the user to check other sources.
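
          One rough sketch of how that hedged phrasing could be bolted on at the prompt level (illustrative only -- the system-message wording is my own invention, not an actual OpenAI feature, and it changes the phrasing, not the accuracy):

              import openai

              # Illustrative system message asking the model to hedge its answers.
              # It only changes the wording; it does nothing to make the output true.
              HEDGE = (
                  "Preface every factual claim with 'From the sources in my "
                  "training set, I believe...' and remind the user to verify "
                  "the answer against primary sources."
              )

              response = openai.ChatCompletion.create(
                  model="gpt-3.5-turbo",
                  messages=[
                      {"role": "system", "content": HEDGE},
                      {"role": "user", "content": "What is Brian Hood known for?"},
                  ],
              )
              print(response["choices"][0]["message"]["content"])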

        • (Score: 0) by Anonymous Coward on Monday April 10 2023, @01:35AM

          by Anonymous Coward on Monday April 10 2023, @01:35AM (#1300693)

          it doesn't create, it just babbles whatever crap it has been fed.

          So which source were those defamatory remarks from? Citation please?

          From an acquaintance's experience it can actually make stuff up. It can claim that A did B even if there's no website or article saying that A did B.

      • (Score: 2) by choose another one on Sunday April 09 2023, @08:33PM (1 child)

        by choose another one (515) Subscriber Badge on Sunday April 09 2023, @08:33PM (#1300670)

        They did the equivalent of looking up a "fact" in an encyclopedia, which provided a libelous claim in its text.

        No they didn't. They pressed the button that does "make up plausible sounding shit about X" and supplied X as something about [xyz] and sexual misdeeds. Then they published generated fiction without marking it as such.

        These things are not encyclopedias; they do not contain a store of information. They are models that have been trained to produce plausible (sometimes very plausible) words about whatever you ask them.

        Look a few days back at the Stable Diffusion lawsuit article, for example, where (much like your megaphone button) the plaintiffs:

        describe Stable Diffusion as a "complex collage tool" that contains "compressed copies" of its training images

        A minimal look at the size of the model tells you that either that is complete BS or they have invented some magic image compression that is several orders of magnitude better than anything we have today. Guess which I think?

        Stable Diffusion is not a library of images, and ChatGPT is not an encyclopedia of facts: Stable Diffusion generates plausible made-up images from a prompt you give it, and ChatGPT generates plausible made-up text from a prompt you give it. End of story (for now).
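
        The arithmetic behind the size argument is easy to check. A back-of-envelope sketch, using the commonly cited round numbers (a ~4 GB Stable Diffusion v1 checkpoint, trained on roughly 2.3 billion LAION image-text pairs -- treat both as approximations):

            # Back-of-envelope check on the "compressed copies" claim.
            model_size_bytes = 4e9    # ~4 GB checkpoint (approximate)
            training_images = 2.3e9   # ~2.3 billion LAION images (approximate)

            bytes_per_image = model_size_bytes / training_images
            print(f"~{bytes_per_image:.1f} bytes per training image")  # ~1.7 bytes

            # Even an aggressively compressed JPEG thumbnail runs to kilobytes,
            # so "compressed copies" would imply a codec thousands of times
            # better than anything known.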

        Here's another example: https://www.dailymail.co.uk/sciencetech/article-11948855/ChatGPT-falsely-accuses-law-professor-SEX-ATTACK-against-students.html [dailymail.co.uk]

        Apparently ChatGPT:
        - reported a harassment claim against a US Professor that was never made
        - on a trip that never occurred
        - while in a faculty that he never taught at
        - referencing a newspaper article that doesn't exist
        - and a newspaper statement that was never made

        This was in response to a request to provide examples of US Professors sexually harassing students (or something like that).

        But ChatGPT doesn't have a database (and it is not connected to one, or to "the internet") of actual examples of that (or anything else), it's picked a plausible sounding name, location, scenario, faculty, newspaper etc. and written some examples. It's (probably randomly / accidentally) got the name of a real professor, but it's not actually accusing _him_ - just some fictional guy with a completely different job in a completely different place.

        It. Makes. Shit. Up.

        The sooner people start realising this, the better.
        Because the time we might really have to worry is (a) when it does get hooked up to the internet to get "facts" from, (b) when it can train itself further on what it finds out there, and (c) when some idiot makes that connection two-way and asks it to "make it real".

    • (Score: 0) by Anonymous Coward on Sunday April 09 2023, @01:44AM

      by Anonymous Coward on Sunday April 09 2023, @01:44AM (#1300587)
      But if you borrow a "talking machine" and, when you ask it for info about someone, it says defamatory stuff about that person, are you at fault, or the maker of the machine?
    • (Score: 2) by Beryllium Sphere (r) on Sunday April 09 2023, @03:27AM

      by Beryllium Sphere (r) (5062) on Sunday April 09 2023, @03:27AM (#1300598)

      It doesn't sound simple when I read lawyer blogs on the subject.

      The other point of view is that a company with poorly designed or maintained machinery can be sued and lose if that machinery hurts a member of the public. That strikes me as a closer parallel to ChatGPT.

    • (Score: 2) by isostatic on Sunday April 09 2023, @08:17PM (2 children)

      by isostatic (365) on Sunday April 09 2023, @08:17PM (#1300668) Journal

      Their tool is generating falsehoods. It doesn't say "blahblah.com states Joe Bloggs is a sexual predator"; it states "Joe Bloggs is a sexual predator".

      This is the same as a publisher printing a "who's who" and under Joe Bloggs it says they are a sexual predator.

      The publisher would be liable for damages, and the book would be put under an injunction and withdrawn from sale immediately.

      • (Score: 2) by choose another one on Sunday April 09 2023, @08:44PM (1 child)

        by choose another one (515) Subscriber Badge on Sunday April 09 2023, @08:44PM (#1300673)

        Their tool is generating falsehoods. It doesn't say "blahblah.com states Joe Bloggs is a sexual predator",

        It is a tool for generating falsehoods. It is documented as such. It describes itself as such:

                "My responses are not intended to be taken as fact or advice, but rather as a starting point for further discussion."

        [you can google the quote, plenty of references to it]

        Oh and, actually it has said exactly that, see my other comment above but to quote that article verbatim:

        ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper

        • (Score: 2) by isostatic on Tuesday April 11 2023, @09:49AM

          by isostatic (365) on Tuesday April 11 2023, @09:49AM (#1300932) Journal

          Such small print quite rightly won't save you in many countries.

    • (Score: 2) by legont on Monday April 10 2023, @04:44AM

      by legont (4179) on Monday April 10 2023, @04:44AM (#1300716)

      The thing is that in our legal system, whoever has the best paralegal team wins. Now, ChatGPT, being a language model, is a perfect paralegal. So it's gonna win any and all legal battles.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 3, Touché) by Anonymous Coward on Sunday April 09 2023, @05:16AM

    by Anonymous Coward on Sunday April 09 2023, @05:16AM (#1300602)

    Can you argue that nobody would reasonably believe ChatGPT was anything other than entertainment, and nobody would reasonably take it seriously? *wink wink*

  • (Score: 2, Touché) by Coligny on Sunday April 09 2023, @11:49PM

    by Coligny (2200) on Sunday April 09 2023, @11:49PM (#1300685)

    You mean that “garbage in, garbage out” is still the root of all data processing failures ?!

    WHO THE FRACK WOULD HAVE KNOWN !

    We just switched from “the computer can't be wrong” to “the AI can't be wrong” charlie foxtrot…

    --
    If I wanted to be moderated by mor0nic groupthinking retards I would still be on Digg and Reddshit.