In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities:
Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.
While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLMs. It is also the first report on countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.
The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.
[...] Microsoft said it had cut off the groups' access to tools based on OpenAI's ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.
Originally spotted on Schneier on Security, where Bruce Schneier comments:
The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I'm sure the terms of service—if I bothered to read them—gives them that permission. And of course it's no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.
Related: People are Already Trying to Get ChatGPT to Write Malware
Related Stories
On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available and now it seems it has been enlisted by some in attempts to help generate malicious code.
ChatGPT is an AI-driven natural language processing tool which interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code.
The chatbot tool was released by artificial intelligence research laboratory OpenAI in November and has generated widespread interest and discussion over how AI is developing and how it could be used going forward.
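For a sense of how those benign uses look in practice, here is a minimal sketch of asking the same model to draft an email programmatically rather than through the chat interface. It assumes the official `openai` Python package (v1 or later) with an `OPENAI_API_KEY` set in the environment; the model name is illustrative only.

```python
# Minimal sketch: asking an OpenAI chat model to draft an email via the API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Draft a polite two-sentence email "
                                    "asking a colleague to review a patch."},
    ],
)

print(response.choices[0].message.content)
```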
But like any other tool, in the wrong hands it could be used for nefarious purposes; and cybersecurity researchers at Check Point say users of underground hacking communities are already experimenting with how ChatGPT might be used to facilitate cyber attacks and support malicious operations.
OpenAI's terms of service specifically ban the generation of malware, which it defines as "content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm". It also bans attempts to create spam, as well as use cases aimed at cybercrime.
[...] In one forum thread which appeared towards the end of December, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
(Score: 3, Interesting) by bzipitidoo on Friday February 23 2024, @02:51AM (3 children)
"Russia, China and other U.S. adversaries ... "
Hold it! Why does our media keep calling China an adversary? Even Russia isn't really an adversary, it's the current autocratic Russian government that's the problem. Navalny was friendly.
(Score: 1, Insightful) by Anonymous Coward on Saturday February 24 2024, @02:10AM
If you asked them the same question... do they view us as an adversary... I bet you won't like the answer. Assuming they were forced to tell the truth.
(Score: 3, Insightful) by Nobuddy on Saturday February 24 2024, @08:46PM
What office did Navalny hold in russia, again?
(Score: 2) by cereal_burpist on Sunday February 25 2024, @03:28AM
That is a true statement.
(Score: 4, Touché) by driverless on Friday February 23 2024, @05:56AM
If AIs can generate (say) DPRK hackers with seven fingers on one hand and six on the other, they'll be able to type a lot more efficiently than their US counterparts.
(Score: 4, Touché) by driverless on Friday February 23 2024, @05:59AM
... TFA translates to "Companies selling 'AI' try to scare US government into buying their products".
And regardless of how much of it is just scaremongering or not, you can bet that groups like the NSA's TAO have been doing the same thing since day one.
(Score: 5, Funny) by Rosco P. Coltrane on Friday February 23 2024, @06:00AM (1 child)
I will pay whoever develops an effective Firefox add-on that filters out AI-generated news.
(Score: 4, Funny) by Unixnut on Friday February 23 2024, @09:51AM
I can already see the arms race between AI news generator bots and AI news filtering bots.
And if you think that is bad, wait for AI advertisements vs AI advertisement blockers to become a thing.
(Score: 3, Touché) by corey on Friday February 23 2024, @07:45AM (3 children)
Probably good then to use AI to develop cyber defences or to test current defences. That’s if it is superior to using human beings.
(Score: 3, Touché) by PiMuNu on Friday February 23 2024, @09:26AM (1 child)
> if it is superior to using human beings
Yeah, that's just how it works: "Computer, make me some malware and hack the pentagon"
(sarcasm)
(Score: 4, Informative) by Freeman on Friday February 23 2024, @02:46PM
More like: Computer, abort launch, abort launch!
https://gizmodo.com/ai-deployed-nukes-have-peace-world-tense-war-simulation-1851234455 [gizmodo.com]
A link to the study: https://arxiv.org/pdf/2401.03408.pdf [arxiv.org]
(Score: 2) by Nobuddy on Saturday February 24 2024, @08:49PM
It is not superior; it is faster at many functions. Used as a tool, it makes individual hackers faster and more effective.