posted by janrinok on Wednesday March 29 2023, @11:39AM

Europol Warns ChatGPT is Already Helping Criminals

There is no honor among chatbots:

Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism.

[...] Now, the European Union's law enforcement agency, Europol, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Although ChatGPT is better at refusing to comply with potentially harmful input requests, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT how a particular crime is committed and then press it for step-by-step guidance.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

The agency admitted that all of this information is already publicly available on the internet, but the model makes it easier to find and understand how to carry out specific crimes. Europol also highlighted that the model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

[...] ChatGPT's ability to generate code - even malicious code - increases the risk of cybercrime by lowering the technical skills required to create malware.

Google and Microsoft's Chatbots Are Already Citing One Another

It's not a good sign for the future of online information:

If you don't believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web's information ecosystem, consider the following:

Right now,* if you ask Microsoft's Bing chatbot if Google's Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

(*I say "right now" because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it's impossible to even consistently report their mistakes.)

What we have here is an early sign we're stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.
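
The feedback loop described above is simple enough to sketch in code. Purely as an illustration (this toy model is ours, not the article's, and it calls no real chatbot APIs), here is how a chain of systems that each treat the most recent prior answer as a trustworthy source will launder a single joke into an ever-thicker stack of citations:

    # Toy model of the "AI misinformation telephone." Each bot answers by
    # citing whatever was most recently "published," with no reliability check.
    def answer(citations):
        if citations:
            source, claim = citations[-1]   # trust the latest source blindly
            return f"{claim} (per {source})"
        return "no information available"

    # A single joke comment seeds the chain.
    record = [("a joke Hacker News comment", "Bard has been shut down")]

    for bot in ["ChatGPT-written fake article", "Bard", "Bing"]:
        reply = answer(record)
        print(f"{bot}: {reply}")
        record.append((bot, reply))         # the reply becomes the next "source"

Each pass wraps the original joke in another layer of apparent attribution, which is why such a trail is so hard to map completely or debunk authoritatively.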

It's a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.


Original Submission #1 | Original Submission #2

 
  • (Score: 3, Interesting) by inertnet (4071) on Wednesday March 29 2023, @01:58PM (#1298630) Journal (4 children)

    Sure, the man had serious issues, but the chatbot went along with his delusions and amplified them. That shows that artificial intelligence can already be dangerous. Some industry leaders [futureoflife.org] even argue that AI development should be temporarily halted until some rules have been established. Steve Wozniak and Elon Musk are among the signatories.

  • (Score: 1) by khallow (3766) Subscriber Badge on Wednesday March 29 2023, @02:23PM (#1298637) Journal

    Sure, the man had serious issues, but the chatbot went along with his delusions and amplified them.

    Even if that is true, so what? Are we to strip out a bunch of our society because a hypothetical someone has delusions which should not be amplified?

    Consider movies and TV shows. Villains, and sometimes even protagonists, frequently solve problems by shooting a bunch of people. The perpetrators of the Columbine shooting [wikipedia.org], for example, aped The Matrix [wikipedia.org] (which had been released almost three weeks earlier) when they killed 13 people and wounded 21 more before taking their own lives.

    Should we temporarily halt such movies until "rules" have been established?

  • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @02:52PM (#1298647) (2 children)

    I, along with members of Congress, offer my sincerest thoughts and prayers. Now is not the time to politicize tragedy while transsexuals rampage on the internet and in our schools killing babies.

    • (Score: 2) by cmdrklarg (5048) Subscriber Badge on Wednesday March 29 2023, @05:18PM (#1298662) (1 child)

      Just because this particular shooter was trans doesn't make this a trans problem.

      --
      The world is full of kings and queens who blind your eyes and steal your dreams.
      • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @07:17PM (#1298695)

        This isn't the time to politicize, friend. First we need to address the trans emergency.