Merge: janrinok (03/28 14:55 GMT)

Accepted submission by janrinok at 2023-03-28 14:55:12
News

Europol Warns ChatGPT is Already Helping Criminals

Europol warns ChatGPT is already helping criminals [theregister.com]:

Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism.

Built by OpenAI, ChatGPT was released [theregister.com] in November 2022 and quickly became an internet sensation as netizens flocked to the site to have the chatbot generate essays, jokes, emails, programming code, and all manner of other text.

Now, the European Union's law enforcement agency, Europol, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.

"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [europa.eu] [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."

Although ChatGPT is better at refusing to comply with input requests that are potentially harmful, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT about how to commit crimes and get step-by-step guidance in return.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

The agency admitted that all of this information is already publicly available on the internet, but the model makes it easier to find and understand how to carry out specific crimes. Europol also highlighted that the model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

ChatGPT's ability to generate code - even malicious code - increases the risk of cybercrime by lowering the technical skills required to create malware.

"For a potential criminal with little technical knowledge, this is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi,"* the report said.

Large language models (LLMs) are unsophisticated and still in their infancy, but they're rapidly improving as tech companies invest resources in developing the technology. OpenAI has already released GPT-4 [theregister.com], a more powerful system, and these models are being increasingly integrated into products. Microsoft and Google have both launched AI-powered chatbots into their search engines since the release of ChatGPT.

Europol said that as more companies roll out AI features and services, it will open up new ways to use the technology for illegal activities. The report pointed to "multimodal AI systems, which combine conversational chatbots with systems that can produce synthetic media, such as highly convincing deepfakes, or include sensory abilities, such as seeing and hearing," as one such avenue.

Clandestine versions of language models with no content filters and trained on harmful data could be hosted on the dark web, for example.

"Finally, there are uncertainties regarding how LLM services may process user data in the future – will conversations be stored and potentially expose sensitive personal information to unauthorised third parties? And if users are generating harmful content, should this be reported to law enforcement authorities?" Europol asked. ®

* That's the plural of modus operandi.


Google and Microsoft's Chatbots Are Already Citing One Another

It's not a good sign for the future of online misinformation [theverge.com]:

If you don't believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web's information ecosystem, consider the following:

Right now,* if you ask Microsoft's Bing chatbot if Google's Bard chatbot has been shut down, it says yes, citing as evidence a news article [windowscentral.com] that discusses a tweet [twitter.com] in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment [ycombinator.com] from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

(*I say "right now" because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it's impossible to even consistently report their mistakes.)

What we have here is an early sign we're stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.

It's a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.


Original Submission #1  Original Submission #2