
posted by janrinok on Saturday April 08, @04:08PM   Printer-friendly

OpenAI threatened with landmark defamation lawsuit over ChatGPT false claims

https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/

A spokesperson for Gordon Legal provided a statement to Ars confirming that responses to text prompts generated by ChatGPT 3.5 and 4 vary, with defamatory comments still currently being generated in ChatGPT 3.5. Among "several false statements" generated by ChatGPT were falsehoods stating that Brian Hood "was accused of bribing officials in Malaysia, Indonesia, and Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison after pleading guilty to two counts of false accounting under the Corporations Act in 2012, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian Government." Because "all of these statements are false," Gordon Legal "filed a Concerns Notice to OpenAI" that detailed the inaccuracy and demanded a rectification. "As artificial intelligence becomes increasingly integrated into our society, the accuracy of the information provided by these services will come under close legal scrutiny," James Naughton, Hood's lawyer, said, noting that if a defamation claim is raised, it "will aim to remedy the harm caused" to Hood and "ensure the accuracy of this software in his case."

It was only a matter of time before ChatGPT—an artificial intelligence tool that generates responses based on user text prompts—was threatened with its first defamation lawsuit. That happened last month, Reuters reported today, when an Australian regional mayor, Brian Hood, sent a letter on March 21 to the tool's developer, OpenAI, announcing his plan to sue the company for ChatGPT's alleged role in spreading false claims that he had gone to prison for bribery.

To avoid the landmark lawsuit, Hood gave OpenAI 28 days to modify ChatGPT's responses and stop the tool from spouting disinformation.

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused

https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

Archive link: https://archive.is/lJj3c

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley's name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he'd never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

"It was quite chilling," he said in an interview with The Post. "An allegation of this kind is incredibly harmful."

ChatGPT vs Google Bard: Which is better? We put them to the test.

https://arstechnica.com/information-technology/2023/04/clash-of-the-ai-titans-chatgpt-vs-bard-in-a-showdown-of-wits-and-wisdom/

In today's world of generative AI chatbots, we've witnessed the sudden rise of OpenAI's ChatGPT, introduced in November, followed by Bing Chat in February and Google's Bard in March. We decided to put these chatbots through their paces with an assortment of tasks to determine which one reigns supreme in the AI chatbot arena. Since Bing Chat uses similar GPT-4 technology as the latest ChatGPT model, we opted to focus on two titans of AI chatbot technology: OpenAI and Google.

We tested ChatGPT and Bard in seven critical categories: dad jokes, argument dialog, mathematical word problems, summarization, factual retrieval, creative writing, and coding. For each test, we fed the exact same instruction (called a "prompt") into ChatGPT (with GPT-4) and Google Bard. We used the first result, with no cherry-picking. Obviously, this is not a scientific study and is intended to be a fun comparison of the chatbots' capabilities. Outputs can vary between sessions due to random elements, and further evaluations with different prompts will produce different results. Also, the capabilities of these models will change rapidly over time as Google and OpenAI continue to upgrade them. But for now, this is how things stand in early April 2023.[....]


Original Submission #1 | Original Submission #2 | Original Submission #3 | Original Submission #4

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by isostatic on Sunday April 09, @08:17PM (2 children)

    by isostatic (365) on Sunday April 09, @08:17PM (#1300668) Journal

    Their tool is generating falsehoods. It doesn't say "blahblah.com states Joe Bloggs is a sexual predator"; it states "Joe Bloggs is a sexual predator."

    This is the same as a publisher printing a "who's who" and under Joe Bloggs it says they are a sexual predator.

    The publisher would be liable for damages, and the book would be placed under an injunction and withdrawn from sale immediately.

  • (Score: 2) by choose another one on Sunday April 09, @08:44PM (1 child)

    by choose another one (515) on Sunday April 09, @08:44PM (#1300673)

    Their tool is generating falsehoods. It doesn't say "blahblah.com states Joe Bloggs is a sexual predator",

    It is a tool for generating falsehoods. It is documented as such. It describes itself as such:
    My responses are not intended to be taken as fact or advice, but rather as a starting point for further discussion.
    [You can Google the quote; there are plenty of references to it.]

    Oh, and actually it has said exactly that; see my other comment above, but to quote that article verbatim:

    ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper