The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to help generate malicious code.
ChatGPT is an AI-driven natural language processing tool which interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code.
The chatbot tool was released by artificial intelligence research laboratory OpenAI in November and has generated widespread interest and discussion over how AI is developing and how it could be used going forward.
But like any other tool, in the wrong hands it could be used for nefarious purposes; and cybersecurity researchers at Check Point say users of underground hacking communities are already experimenting with how ChatGPT might be used to facilitate cyber attacks and support malicious operations.
OpenAI's terms of service specifically ban the generation of malware, which it defines as "content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm". It also bans attempts to create spam, as well as use cases aimed at cybercrime.
[...] In one forum thread which appeared towards the end of December, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
Researchers at security firm Check Point Research reported Friday that within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.
"It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web," company researchers wrote. "However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code."
Related Stories
A language model trained on the fringes of the dark web... for science:
We're still early in the snowball effect unleashed by the release of Large Language Models (LLMs) like ChatGPT into the wild. Paired with the open-sourcing of other GPT (Generative Pre-Trained Transformer) models, the number of applications employing AI is exploding; and as we know, ChatGPT itself can be used to create highly advanced malware.
As time passes, the number of applied LLMs will only increase, each specializing in its own area, trained on carefully curated data for a specific purpose. And one such application just dropped, one that was trained on data from the dark web itself. DarkBERT, as its South Korean creators called it, has arrived — follow that link for the release paper, which gives an overall introduction to the dark web itself.
DarkBERT is based on the RoBERTa architecture, an AI approach developed back in 2019. RoBERTa has seen a renaissance of sorts, with researchers discovering it had more performance to give than was extracted from it in 2019: it seems the model was severely undertrained when released, falling far short of its maximum capability.
Originally spotted on The Eponymous Pickle.
Related: People are Already Trying to Get ChatGPT to Write Malware
In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities:
Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.
While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLM. It's also the first report on countermeasures and comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.
The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.
[...] Microsoft said it had cut off the groups' access to tools based on OpenAI's ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.
Originally spotted on Schneier on Security, who comments:
The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I'm sure the terms of service—if I bothered to read them—gives them that permission. And of course it's no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.
(Score: 4, Insightful) by Freeman on Tuesday January 10 2023, @02:23PM (1 child)
Subject title of this message was the original department line of the submission. Seriously, if you're using it to "write malware" and then go and use that malware, you're putting up a neon sign that says, "I'm an idiot who just did something illegal." Sure, people may ignore the neon sign you put up, but it's still dumb. It's not terribly difficult to gain access to ChatGPT, so maybe they made sure they had anonymous access to it. I wouldn't bet on that, though.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by Zinho on Tuesday January 10 2023, @04:29PM
At least the cybercriminals don't need to worry about copyright violations [soylentnews.org] the bot commits when writing the malware. In for a penny, in for a pound...
"Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
(Score: 1, Funny) by Anonymous Coward on Tuesday January 10 2023, @03:21PM (2 children)
Hey Siri, ChatGPT just called yo momma a ho'. Whatchu gon do about it?
(Score: 4, Touché) by Freeman on Tuesday January 10 2023, @05:09PM (1 child)
"I'm sorry Dave, I'm afraid I can't reply to that." --Siri (Probably)
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0) by Anonymous Coward on Wednesday January 11 2023, @03:53PM
Dat's rite, ho'. You shutcho mouth round me.