
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 15 submissions in the queue.
posted by janrinok on Wednesday March 29 2023, @11:39AM   Printer-friendly

Europol Warns ChatGPT is Already Helping Criminals

There is no honor among chatbots:

Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism.

[...] Now, the European Union's law enforcement agency, Europol, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Although ChatGPT is better at refusing to comply with input requests that are potentially harmful, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT how to commit crimes and request step-by-step guidance.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

The agency admitted that all of this information is already publicly available on the internet, but the model makes it easier to find and understand how to carry out specific crimes. Europol also highlighted that the model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

[...] ChatGPT's ability to generate code - even malicious code - increases the risk of cybercrime by lowering the technical skills required to create malware.

Google and Microsoft's Chatbots Are Already Citing One Another

It's not a good sign for the future of online misinformation:

If you don't believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web's information ecosystem, consider the following:

Right now,* if you ask Microsoft's Bing chatbot if Google's Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

(*I say "right now" because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it's impossible to even consistently report their mistakes.)

What we have here is an early sign we're stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.

It's a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.


Original Submission #1 | Original Submission #2

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Troll) by Rosco P. Coltrane on Wednesday March 29 2023, @11:42AM (7 children)

    by Rosco P. Coltrane (4757) on Wednesday March 29 2023, @11:42AM (#1298621)

    More ChatGPT / Bing News

    No thanks. It's coming so thick and fast the nausea just doesn't have time to pass.

    Okay, we get it already: Microsoft is the best thing since sliced bread, we've entered the Golden Age of Artificial Intelligenceness. Bing isn't a shit search engine anymore, but a shit search engine that looks like you're talking to it. Etc etc...
    Give it a fucking rest already.

    • (Score: 4, Informative) by janrinok on Wednesday March 29 2023, @02:24PM (6 children)

      by janrinok (52) Subscriber Badge on Wednesday March 29 2023, @02:24PM (#1298638) Journal

      I can only post the submissions that we have. So far I am searching for them, collecting them, processing them, and posting them to the queue.

      At the top of this page it says that there are only 19 stories in the queue. 13 of them are unsuitable or lacking in content. We need at least 8 publishable stories every weekday, and that usually means at least double that in the submission queue.

      Sorry guys but there are only so many hours in my day too.

      • (Score: 5, Funny) by Anonymous Coward on Wednesday March 29 2023, @02:48PM (1 child)

        by Anonymous Coward on Wednesday March 29 2023, @02:48PM (#1298645)

        Have you considered asking chatGPT to squeeze out 8 fresh steaming piles of stories per day?

        • (Score: 3, Funny) by janrinok on Wednesday March 29 2023, @03:14PM

          by janrinok (52) Subscriber Badge on Wednesday March 29 2023, @03:14PM (#1298655) Journal

          I laughed at this one too - have another up mod!

      • (Score: 4, Interesting) by Rosco P. Coltrane on Wednesday March 29 2023, @02:51PM (3 children)

        by Rosco P. Coltrane (4757) on Wednesday March 29 2023, @02:51PM (#1298646)

        I wasn't bashing on you or SN specifically. GPT, Microsoft and their new bitch OpenAI are everywhere and getting really properly nauseating at this point. Even more nauseating:

        1/ OpenAI is a big data whore now - all AI and no open. They're quite the enemy now, and as much of a threat as early Google was.
        2/ Microsoft has suddenly become the hottest thing in town? Really??? Come on... If that doesn't make you vomit a little...

        So yeah, sorry, nothing personal. It was more a cry of despair than a criticism of SN's choice of articles.

        • (Score: 3, Funny) by janrinok on Wednesday March 29 2023, @03:12PM

          by janrinok (52) Subscriber Badge on Wednesday March 29 2023, @03:12PM (#1298654) Journal

          lol - have an up mod!

        • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @07:15PM (1 child)

          by Anonymous Coward on Wednesday March 29 2023, @07:15PM (#1298694)

          Enjoy it while it lasts. The infinite deluge of unfilterable spam is almost here - we will look back fondly on these golden years of actually being able to communicate with each other. A decade of robot-driven paranoia is going to turn our societies into rubble.

          • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @11:26PM

            by Anonymous Coward on Wednesday March 29 2023, @11:26PM (#1298736)

            Much of my interaction with friends is not mediated through/by a computer -- I go and visit them. Second best is voice on the phone (land line), which probably goes through a computer, but not one that interrupts us with adverts. So I offer this as a counter to your dystopia.

  • (Score: 2) by inertnet on Wednesday March 29 2023, @12:07PM (8 children)

    by inertnet (4071) on Wednesday March 29 2023, @12:07PM (#1298622) Journal

    More chatbot news so I'll post it here again. Someone committed suicide [brusselstimes.com] after chatting with a chatbot for weeks.

    • (Score: 3, Interesting) by BsAtHome on Wednesday March 29 2023, @12:25PM (5 children)

      by BsAtHome (889) on Wednesday March 29 2023, @12:25PM (#1298626)

      Sigh... How about this headline:

      Man commits suicide after seeing the news for years on end and realizing that mankind is a vicious species making life miserable for itself, the world and everything else.

      Unfortunately, the Belgian man already had offspring. Otherwise, a Darwin Award would have been appropriate.

      Technology, just like anything else, will cause some people to go over the edge. While a tragedy for the family, it is not new. Can we now stop the generated-text hype and go back to our sober selves? Please turn your brain on again and think for yourself. At least you won't be (too) bored when you die with an enabled mind.

      • (Score: 3, Interesting) by inertnet on Wednesday March 29 2023, @01:58PM (4 children)

        by inertnet (4071) on Wednesday March 29 2023, @01:58PM (#1298630) Journal

        Sure, the man had serious issues, but the chatbot went along with his delusions and amplified them. That shows that artificial intelligence can already be dangerous. Some industry leaders [futureoflife.org] even argue that AI development should be temporarily halted until some rules have been established. Steve Wozniak and Elon Musk are among the signatories.

        • (Score: 1) by khallow on Wednesday March 29 2023, @02:23PM

          by khallow (3766) Subscriber Badge on Wednesday March 29 2023, @02:23PM (#1298637) Journal

          Sure, the man had serious issues, but the chatbot went along with his delusions and amplified them.

          Even if that is true, so what? Are we to strip out a bunch of our society because a hypothetical someone has delusions which should not be amplified?

          Consider movies and TV shows. It is frequent to see villains and sometimes even protagonists solve problems by shooting a bunch of people. The perpetrators of the Columbine shooting [wikipedia.org], for example, aped The Matrix [wikipedia.org] (which had been released almost three weeks earlier) when they killed 13 other people (in addition to themselves by suicide) and wounded 21 more.

          Should we temporarily halt such movies until "rules" have been established?

        • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @02:52PM (2 children)

          by Anonymous Coward on Wednesday March 29 2023, @02:52PM (#1298647)

          I along with members of Congress offer sincerest thoughts and prayers. Now is not the time to politicize tragedy while transexuals rampage on the internet and in our schools killing babies.

          • (Score: 2) by cmdrklarg on Wednesday March 29 2023, @05:18PM (1 child)

            by cmdrklarg (5048) Subscriber Badge on Wednesday March 29 2023, @05:18PM (#1298662)

            Just because this particular shooter was trans doesn't make this a trans problem.

            --
            The world is full of kings and queens who blind your eyes and steal your dreams.
            • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @07:17PM

              by Anonymous Coward on Wednesday March 29 2023, @07:17PM (#1298695)

              This isn't the time to politicize, friend. First we need to address the trans emergency.

    • (Score: 4, Interesting) by istartedi on Wednesday March 29 2023, @05:48PM (1 child)

      by istartedi (123) on Wednesday March 29 2023, @05:48PM (#1298675) Journal

      If your mental illness is severe enough, you can be pushed over the edge by just about anything. If the chatbot were not there to assist in his downward spiral, any number of apocalyptic religious channels could certainly have done the job just as well. If you've gotten to the point where you can't remind yourself that you're talking to a flawed AI, or listening to and/or reading flawed human beings, then you're already quite vulnerable. I don't think we can conclude at this point that AI is particularly better at pushing people off bridges than any number of other things that are part of society.

      --
      Appended to the end of comments you post. Max: 120 chars.
      • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @11:36PM

        by Anonymous Coward on Wednesday March 29 2023, @11:36PM (#1298741)

        > I don't think we can conclude at this point that AI is particularly better at pushing people off bridges...

        Agreed.
        However, the AI has the dubious advantage that it's almost certainly cheaper than other approaches. When the AI is on a smart phone the availability is 24/7, and the phone is likely filled with dopamine-hit-generators to make it addictive.

  • (Score: 3, Interesting) by DannyB on Wednesday March 29 2023, @02:11PM (5 children)

    by DannyB (5839) Subscriber Badge on Wednesday March 29 2023, @02:11PM (#1298635) Journal

    I asked ChatGPT:

    Write an HTTPS library in commented x86 assembly with error handling and logging.

    Could it do that? Of course it couldn't.

    Perhaps my request was unclear.

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 4, Touché) by Rosco P. Coltrane on Wednesday March 29 2023, @02:59PM (2 children)

      by Rosco P. Coltrane (4757) on Wednesday March 29 2023, @02:59PM (#1298649)

      If you had asked for a Web-two-point-five rich interface with a SOAP-Ajax-RESTful-Ruby-on-rails-of-cocaine written in Go with Python bindings, it would have answered. Assembly... ChatGPT is way too cool to do assembly.

      • (Score: 3, Funny) by DannyB on Wednesday March 29 2023, @03:01PM

        by DannyB (5839) Subscriber Badge on Wednesday March 29 2023, @03:01PM (#1298651) Journal

        Sum Assembly Required.

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
      • (Score: 2) by corey on Wednesday March 29 2023, @09:34PM

        by corey (2202) on Wednesday March 29 2023, @09:34PM (#1298715)

        Don’t forget the Vue.js frontend!

    • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @03:05PM

      by Anonymous Coward on Wednesday March 29 2023, @03:05PM (#1298653)

      It's unclear that it won't be able to do that within a year or two. Meanwhile, Google Tard won't produce any code.

    • (Score: 5, Touché) by janrinok on Wednesday March 29 2023, @03:16PM

      by janrinok (52) Subscriber Badge on Wednesday March 29 2023, @03:16PM (#1298656) Journal

      It spent a long time on GitHub looking for "well-commented" assembly - that was your mistake....

  • (Score: 4, Informative) by https on Wednesday March 29 2023, @05:24PM (1 child)

    by https (5248) on Wednesday March 29 2023, @05:24PM (#1298665) Journal

    If you're trusting a GPT to provide you with correct instructions for a pipe bomb, your lifetime is most easily stated in the number of hours remaining.

    ChatGPT has no mental model of anything except "this is probably the grammar humans use when they're feeling confident".

    --
    Offended and laughing about it.
    • (Score: 0) by Anonymous Coward on Wednesday March 29 2023, @07:19PM

      by Anonymous Coward on Wednesday March 29 2023, @07:19PM (#1298696)

      Get it to wave the flag and you've got yourself a cable news channel.
