
posted by hubie on Friday February 23 2024, @01:43AM   Printer-friendly
from the they-put-the-eye-in-OpenAI dept.

In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities:

Russia, China and other U.S. adversaries are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.

While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLMs. It's also the first report on countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.

The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.

[...] Microsoft said it had cut off the groups' access to tools based on OpenAI's ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.

Originally spotted on Schneier on Security, where Bruce Schneier comments:

The only way Microsoft or OpenAI would know this would be to spy on chatbot sessions. I'm sure the terms of service—if I bothered to read them—gives them that permission. And of course it's no surprise that Microsoft and OpenAI (and, presumably, everyone else) are spying on our usage of AI, but this confirms it.

Related: People are Already Trying to Get ChatGPT to Write Malware


Original Submission

Related Stories

People are Already Trying to Get ChatGPT to Write Malware 5 comments

Analysis of chatter on dark web forums shows that efforts are already under way to use OpenAI's chatbot to help script malware:

The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available and now it seems it has been enlisted by some in attempts to help generate malicious code.

ChatGPT is an AI-driven natural language processing tool which interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code.

The chatbot tool was released by artificial intelligence research laboratory OpenAI in November and has generated widespread interest and discussion over how AI is developing and how it could be used going forward.

But like any other tool, in the wrong hands it could be used for nefarious purposes; and cybersecurity researchers at Check Point say the users of underground hacking communities are already experimenting with how ChatGPT might be used to help facilitate cyber attacks and support malicious operations.

OpenAI's terms of service specifically ban the generation of malware, which it defines as "content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm". It also bans attempts to create spam, as well as use cases aimed at cybercrime.

[...] In one forum thread which appeared towards the end of December, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

OpenAI Plans Tectonic Shift From Nonprofit to for-Profit, Giving Altman Equity 10 comments

https://arstechnica.com/information-technology/2024/09/openai-plans-tectonic-shift-from-nonprofit-to-for-profit-giving-altman-equity/

On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.

A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.

[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.

[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by bzipitidoo on Friday February 23 2024, @02:51AM (3 children)

    by bzipitidoo (4388) on Friday February 23 2024, @02:51AM (#1345775) Journal

    "Russia, China and other U.S. adversaries ... "

    Hold it! Why does our media keep calling China an adversary? Even Russia isn't really an adversary, it's the current autocratic Russian government that's the problem. Navalny was friendly.

    • (Score: 1, Insightful) by Anonymous Coward on Saturday February 24 2024, @02:10AM

      by Anonymous Coward on Saturday February 24 2024, @02:10AM (#1345989)

If you asked them the same question... do they view us as an adversary... Bet you won't like the answer. Assuming they are forced to tell the truth.

    • (Score: 3, Insightful) by Nobuddy on Saturday February 24 2024, @08:46PM

      by Nobuddy (1626) on Saturday February 24 2024, @08:46PM (#1346109)

      What office did Navalny hold in russia, again?

    • (Score: 2) by cereal_burpist on Sunday February 25 2024, @03:28AM

      by cereal_burpist (35552) on Sunday February 25 2024, @03:28AM (#1346145)

      [China,] Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.

      That is a true statement.

  • (Score: 4, Touché) by driverless on Friday February 23 2024, @05:56AM

    by driverless (4770) on Friday February 23 2024, @05:56AM (#1345794)

    If AI's can generate (say) DPRK hackers with seven fingers on one hand and six on the other they'll be able to type a lot more efficiently than their US counterparts.

  • (Score: 4, Touché) by driverless on Friday February 23 2024, @05:59AM

    by driverless (4770) on Friday February 23 2024, @05:59AM (#1345795)

    ... TFA translates to "Companies selling 'AI' try to scare US government into buying their products".

    And regardless of how much of it is just scaremongering or not, you can bet that groups like the NSA's TAO have been doing the same thing since day one.

  • (Score: 5, Funny) by Rosco P. Coltrane on Friday February 23 2024, @06:00AM (1 child)

    by Rosco P. Coltrane (4757) on Friday February 23 2024, @06:00AM (#1345796)

    I will pay for whoever develops an effective Firefox add-on.

    • (Score: 4, Funny) by Unixnut on Friday February 23 2024, @09:51AM

      by Unixnut (5779) on Friday February 23 2024, @09:51AM (#1345826)

      I can already see the arms race between AI news generator bots and AI news filtering bots.

      And if you think that is bad, wait for AI advertisements vs AI advertisement blockers to become a thing.

  • (Score: 3, Touché) by corey on Friday February 23 2024, @07:45AM (3 children)

    by corey (2202) on Friday February 23 2024, @07:45AM (#1345811)

    Probably good then to use AI to develop cyber defences or to test current defences. That’s if it is superior to using human beings.

    • (Score: 3, Touché) by PiMuNu on Friday February 23 2024, @09:26AM (1 child)

      by PiMuNu (3823) on Friday February 23 2024, @09:26AM (#1345822)

      > if it is superior to using human beings

      Yeah, that's just how it works: "Computer, make me some malware and hack the pentagon"

      (sarcasm)

      • (Score: 4, Informative) by Freeman on Friday February 23 2024, @02:46PM

        by Freeman (732) on Friday February 23 2024, @02:46PM (#1345851) Journal

        More like: Computer, abort launch, abort launch!
        https://gizmodo.com/ai-deployed-nukes-have-peace-world-tense-war-simulation-1851234455 [gizmodo.com]

        The United States military is one of many organizations embracing AI in our modern age, but it may want to pump the brakes a bit. A new study using AI in foreign policy decision-making found how quickly the tech would call for war instead of finding peaceful resolutions. Some AI in the study even launched nuclear warfare with little to no warning, giving strange explanations for doing so.

        “All models show signs of sudden and hard-to-predict escalations,” said researchers in the study. “We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.”

        A link to the study: https://arxiv.org/pdf/2401.03408.pdf [arxiv.org]

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by Nobuddy on Saturday February 24 2024, @08:49PM

      by Nobuddy (1626) on Saturday February 24 2024, @08:49PM (#1346110)

      it is not superior, it is faster in many functions. Used as a tool, it makes individual hackers faster and more effective.
