
SoylentNews is people

posted by hubie on Tuesday February 14, @11:15PM

The tool could let teachers spot plagiarism or help social media platforms fight disinformation bots:

Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we're reading are written by a human or not.

These "watermarks" are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.

For example, since OpenAI's chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they're released could help address such problems.

In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta's open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that's yet to be peer-reviewed, and the code will be available for free around February 15.
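The detection side of such a scheme can be sketched in a few lines. This is a simplified illustration of the "green list" idea described in the Maryland work, not the researchers' actual code: a hash seeded by the previous token pseudo-randomly marks a fraction of the vocabulary as "green" at each step, generation favors green tokens, and detection simply counts how many tokens land on their predecessor's green list. The hash choice, the `GAMMA` value, and the token IDs below are all illustrative assumptions.

```python
import hashlib

GAMMA = 0.5  # assumed fraction of the vocabulary that is "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly assign `token` to a green list seeded by `prev_token`."""
    seed = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    # Map the first 8 bytes of the hash to [0, 1) and threshold at GAMMA.
    return int.from_bytes(seed[:8], "big") / 2**64 < GAMMA

def green_fraction(tokens: list[int]) -> float:
    """Fraction of tokens that land on the green list of their predecessor."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Unwatermarked human text should score near `GAMMA` by chance, while text from a generator that was biased toward green tokens scores well above it, which is what makes near-certain statistical detection possible on passages of any reasonable length.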

[...] There are limitations to this new method, however. Watermarking only works if it is embedded in the large language model by its creators right from the beginning. Although OpenAI is reputedly working on methods to detect AI-generated text, including watermarks, the research remains highly secretive. The company doesn't tend to give external parties much information about how ChatGPT works or was trained, much less access to tinker with it. OpenAI didn't immediately respond to our request for comment.


Related Stories

Welcome Testers ... 25 comments
By the time this post goes live, we'll be proudly serving our first few users. Right now, we are still working on nailing some outstanding issues with slash (most notably, moderation), but we're at the point that I'd like to get some early feedback, and allow others to run through the system and test stuff. Please make sure to read the preceding posts, feel free to comment, and enjoy!

EDIT: Struck a naught word -- NCommander
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of 22 comments

As OpenAI's newly unveiled ChatGPT machinery turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing the 10 coolest things you can do with ChatGPT. And that doesn't even begin to cover all the use cases, like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, some of the cracks in the AI's thinking are coming to the surface as its creators rush to mend them in real time.

Included in the list are:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong

ChatGPT Arrives in the Academic World 4 comments

ChatGPT arrives in the academic world:

AI art and text generators are all the rage right now. As an academic, I've seen an uptick in colleagues issuing warnings about students using tools like ChatGPT to create assignments, but I haven't yet really done too much investigation—I've been too busy grading final papers! But I recently came across two posts by academics that somewhat relieve the immediate worry about students successfully using ChatGPT to write their papers, and also raise challenges for educators about what we are actually doing in our classrooms.

First, here's Dan Vollaro's 'conversation' with ChatGPT about Moby Dick. Vollaro, an Associate Professor of English with a PhD in 19th Century American literature, tries to discuss Melville's famous novel with the bot. While ChatGPT does pretty well at first—providing facts about when the novel was written, how it was received, and even some of the major themes that scholars have identified in the text—it fails spectacularly when asked which scholars were responsible for discussing and circulating those themes, or when asked to provide its own analysis or critique of the novel. In a series of frustrating back-and-forths, the bot repeats almost the same answer to various questions from Vollaro. When Vollaro asks, "Do you have a favorite character from the novel?" the bot answers:

As a language model, I don't have personal experiences or opinions, so I don't have a favorite character from Moby Dick. I'm here to provide accurate and helpful information based on the text data I've been trained on, but I don't have personal preferences or biases. I'm a neutral and objective source of information, and I'm always happy to help with any questions you have about the novel.

Seattle Public Schools Bans ChatGPT; District ‘Requires Original Thought and Work From Students’ 18 comments

https://www.geekwire.com/2023/seattle-public-schools-bans-chatgpt-district-requires-original-thought-and-work-from-students/

Seattle Public Schools is joining a growing number of school districts banning ChatGPT, the natural language chatbot from OpenAI that has sparked widespread attention in recent weeks.

ChatGPT has garnered praise for its ability to quickly answer complex queries and instantly produce content.

But it's also generating concern among educators worried that students will use the technology to do their homework.

SPS blocked ChatGPT on all school devices in December, said Tim Robinson, a spokesman for Seattle Public Schools, in an email to GeekWire.

"Like all school districts, Seattle Public Schools does not allow cheating and requires original thought and work from students," he said.

The district also blocks other "cheating tools," Robinson said.



BuzzFeed Preps AI-Written Content While CNET Fumbles 7 comments

A 200 percent BuzzFeed stock rise might signal the start of a "pivot to AI" media trend:

On Thursday, an internal memo obtained by The Wall Street Journal revealed that BuzzFeed is planning to use ChatGPT-style text synthesis technology from OpenAI to create individualized quizzes and potentially other content in the future. After the news hit, BuzzFeed's stock rose 200 percent. On Friday, BuzzFeed formally announced the move in a post on its site.

[...] "The creative process will increasingly become AI-assisted and technology-enabled. If the past 15 years of the internet have been defined by algorithmic feeds that curate and recommend content, the next 15 years will be defined by AI and data helping create, personalize, and animate the content itself. Our industry will expand beyond AI-powered curation (feeds), to AI-powered creation (content). AI opens up a new era of creativity, where creative humans like us play a key role providing the ideas, cultural currency, inspired prompts, IP, and formats that come to life using the newest technologies."



Robots Let ChatGPT Touch the Real World Thanks to Microsoft 15 comments

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
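The workflow described above can be sketched roughly as follows. This is a hypothetical illustration of the human-in-the-loop pattern, not Microsoft's actual robotics API: the class names, method names, and coordinates are all made up, and the generated-code function stands in for what ChatGPT might emit against such an API before a human reviews and runs it.

```python
# A tiny stand-in for the kind of high-level robot API a human might
# define and teach to ChatGPT. Calls are logged rather than sent to
# real hardware, so the "generated" code can be reviewed and replayed.
class RobotArm:
    def __init__(self):
        self.log = []

    def move_to(self, x, y, z):
        self.log.append(("move_to", x, y, z))

    def grasp(self):
        self.log.append(("grasp",))

    def release(self):
        self.log.append(("release",))

def pick_up_ball(arm, ball_pos, drop_pos):
    """Code of the kind ChatGPT might generate for 'pick up the ball'."""
    arm.move_to(*ball_pos)   # approach the ball
    arm.grasp()              # close the gripper
    arm.move_to(*drop_pos)   # carry it to the target
    arm.release()            # let go
```

The human operator inspects `pick_up_ball` before executing it, then evaluates the logged actions against the task, matching the inspect-edit-execute loop the paper describes.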

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by looorg on Tuesday February 14, @11:54PM (1 child)

    by looorg (578) on Tuesday February 14, @11:54PM (#1291798)

    If there is a way to implement a watermark there will be another tool to obfuscate, scramble and remove it. AI creation war to follow.

    • (Score: 2) by fliptop on Tuesday February 14, @11:58PM

      by fliptop (1666) on Tuesday February 14, @11:58PM (#1291799) Journal

      there will be another tool to obfuscate, scramble and remove it

      Unfortunately, it's another moving target to be dealt w/.

      --
      To be oneself, and unafraid whether right or wrong, is more admirable than the easy cowardice of surrender to conformity
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday February 15, @01:53PM

    by Anonymous Coward on Wednesday February 15, @01:53PM (#1291874)

    It's like the difference between content and adverts. From one perspective, they are almost identical - words, sounds, pictures - but from another they are completely different. Someone wrote somewhere that AI text reads like the kind of essay you write in high school - almost devoid of content and a lot of verbiage to make it sound grown up, but basically a disguised rehash of whatever source it was copied from. *yawn* wake me up when you grow a brainstem.

  • (Score: 3, Interesting) by DannyB on Wednesday February 15, @05:15PM (1 child)

    by DannyB (5839) Subscriber Badge on Wednesday February 15, @05:15PM (#1291905) Journal

    Imagine an academic who is accused of cheating because what s/he wrote gets a false positive detection of having been written by an AI.

    What would be the defense?

Suppose the AIs learn to write their output so that it evades the watermarking efforts of the humans who think they are in control of the AI.

    --
    How often should I have my memory checked? I used to know but...
    • (Score: 4, Interesting) by looorg on Wednesday February 15, @06:02PM

      by looorg (578) on Wednesday February 15, @06:02PM (#1291912)

That already happens with the various anti-plagiarism detectors: your text matches some other text by a high percentage, so it's assumed you looked at or copied that text and failed to cite properly. It is very hard to defend against that accusation. AI-generated text will just build on that problem.
