
OpenAI Afraid to Release ChatGPT Detection Tool That Might Piss Off Cheaters

Accepted submission by upstart at 2024-08-05 19:12:39
News

████ # This file was generated bot-o-matically! Edit at your own risk. ████

OpenAI Afraid to Release ChatGPT Detection Tool That Might Piss Off Cheaters [gizmodo.com]:

Update 08/05/24 at 11:10 a.m. ET: This post was updated to include a statement from an OpenAI spokesperson and more information from a Sunday blog post.

ChatGPT maker OpenAI has new search [gizmodo.com] and voice [gizmodo.com] features on the way, but it also has a tool at its disposal that’s reportedly pretty good at catching all those AI-generated fake articles you see on the internet nowadays. The company has been sitting on it for nearly two years, and all it would have to do is turn it on. All the same, the Sam Altman-led company is still contemplating whether to release it as doing so might anger OpenAI’s biggest fans.

This isn’t that defunct AI detection algorithm [gizmodo.com] the company released in 2023, but something much more accurate. OpenAI is hesitant to release this AI-detection tool, according to a report from the Wall Street Journal [wsj.com] on Sunday based on anonymous sources inside the company. The program is effectively an AI watermarking system that imprints AI-generated text with certain patterns its tool can detect. Like other AI detectors, OpenAI’s system would score a document with a percentage indicating how likely it is to have been created with ChatGPT.
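OpenAI has not published how its watermark works, but published text-watermarking research (e.g. the "green list" scheme) gives a sense of the idea: the generator is nudged toward a pseudo-random subset of tokens, and the detector later checks whether a suspicious document contains that subset more often than chance. The sketch below is a hypothetical illustration of the detection side only, not OpenAI's actual method; the function names and the 50/50 green split are assumptions.

```python
import hashlib

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to the 'green' list, seeded by the
    preceding token, so the same split is reproducible at detection time."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < green_fraction

def green_score(tokens: list[str], green_fraction: float = 0.5) -> float:
    """Fraction of tokens on the green list. Unwatermarked text should land
    near `green_fraction`; text from a generator that favored green tokens
    scores well above it, which is the statistical signal a detector reports
    as a likelihood percentage."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok, green_fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

Because the signal is statistical, rewording or translating the text changes the token sequence and washes the score back toward chance, which is consistent with the circumvention routes OpenAI describes.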

OpenAI confirmed [openai.com] this tool exists in an update to a May blog post posted Sunday. The program is reportedly 99.9% effective based on internal documents, according to the WSJ. This would be far better than the stated effectiveness of other AI detection software [gizmodo.com] developed over the past two years. The company claimed that while the watermark holds up against local tampering, it can be circumvented by translating the text and retranslating it with something like Google Translate, or by rewording it with another AI generator. OpenAI also said those wishing to circumvent the tool could “insert a special character in between every word and then deleting that character.”

Internal proponents of the program say it will do a lot to help teachers figure out when their students have handed in AI-generated homework. The company reportedly sat on this program for years over concerns that close to a third of its user base wouldn’t like it. In an email statement, an OpenAI spokesperson said:

“The text watermarking method we’re developing is technically promising, but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers. We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”

The other problem for OpenAI is the concern that if it releases its tool broadly enough, somebody could decipher OpenAI’s watermarking technique. There is also an issue that it might be biased against non-native English speakers [gizmodo.com], as we’ve seen with other AI detectors.

Google also developed similar watermarking techniques for AI-generated images [gizmodo.com] and text called SynthID [deepmind.google]. That program isn’t available to most consumers, but at the very least the company is open about its existence.

As fast as big tech is developing new ways to spit out AI-generated text and images onto the internet, the tools to detect fakes aren’t nearly as capable. Teachers and professors are especially hard-pressed to discover whether their students are handing in ChatGPT-written assignments. Current AI detection tools like Turnitin [turnitin.com] have a failure rate as high as 15%; the company said it tolerates that miss rate to avoid false positives.



Original Submission