posted by janrinok on Wednesday March 29 2023, @11:39AM

Europol Warns ChatGPT is Already Helping Criminals

There is no honor among chatbots:

Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism.

[...] Now, the European Union's law enforcement agency, Europol, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Although ChatGPT is better at refusing to comply with input requests that are potentially harmful, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT how to commit crimes and request step-by-step guidance.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

The agency admitted that all of this information is already publicly available on the internet, but the model makes it easier to find and understand how to carry out specific crimes. Europol also highlighted that the model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

[...] ChatGPT's ability to generate code - even malicious code - increases the risk of cybercrime by lowering the technical skills required to create malware.

Google and Microsoft's Chatbots Are Already Citing One Another

It's not a good sign for the future of online misinformation:

If you don't believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web's information ecosystem, consider the following:

Right now,* if you ask Microsoft's Bing chatbot if Google's Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

(*I say "right now" because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it's impossible to even consistently report their mistakes.)

What we have here is an early sign we're stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.

It's a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.


Original Submission #1 | Original Submission #2
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Funny) by janrinok on Wednesday March 29 2023, @03:12PM

    by janrinok (52) Subscriber Badge on Wednesday March 29 2023, @03:12PM (#1298654) Journal

    lol - have an up mod!

    --
    I am not interested in knowing who people are or where they live. My interest starts and stops at our servers.
    Starting Score:    1  point
    Moderation   +1  
       Funny=1, Total=1
    Extra 'Funny' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   3