

posted by hubie on Friday February 23, @01:43AM   Printer-friendly
from the they-put-the-eye-in-OpenAI dept.

In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities:

Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.

While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLMs. It's also the first report on countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.

The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.

[...] Microsoft said it had cut off the groups' access to tools based on OpenAI's ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.

Originally spotted on Schneier on Security, where Bruce Schneier comments:

The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I'm sure the terms of service—if I bothered to read them—give them that permission. And of course it's no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.

Related: People are Already Trying to Get ChatGPT to Write Malware


Original Submission

  • (Score: 3, Touché) by PiMuNu on Friday February 23, @09:26AM (1 child)

    by PiMuNu (3823) on Friday February 23, @09:26AM (#1345822)

    > if it is superior to using human beings

    Yeah, that's just how it works: "Computer, make me some malware and hack the pentagon"

    (sarcasm)

  • (Score: 4, Informative) by Freeman on Friday February 23, @02:46PM

    by Freeman (732) on Friday February 23, @02:46PM (#1345851) Journal

    More like: Computer, abort launch, abort launch!
    https://gizmodo.com/ai-deployed-nukes-have-peace-world-tense-war-simulation-1851234455 [gizmodo.com]

    The United States military is one of many organizations embracing AI in our modern age, but it may want to pump the brakes a bit. A new study using AI in foreign policy decision-making found how quickly the tech would call for war instead of finding peaceful resolutions. Some AI in the study even launched nuclear warfare with little to no warning, giving strange explanations for doing so.

    “All models show signs of sudden and hard-to-predict escalations,” said researchers in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”

    A link to the study: https://arxiv.org/pdf/2401.03408.pdf [arxiv.org]

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"