

posted by janrinok on Wednesday April 03, @08:24PM

Simply look out for libraries imagined by ML and make them real, with actual malicious code. No wait, don't do that:

Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that, but someone, having spotted this recurring hallucination, turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. Had the package been laced with actual malware, rather than being a benign test, the results could have been disastrous.

According to Bar Lanyado, security researcher at Lasso Security, one of the businesses fooled by AI into incorporating the package is Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in its GraphTranslator installation instructions.

There is a legit huggingface-cli, installed using pip install -U "huggingface_hub[cli]".

But the huggingface-cli distributed via the Python Package Index (PyPI) and required by Alibaba's GraphTranslator – installed using pip install huggingface-cli – is fake, imagined by AI and turned real by Lanyado as an experiment.

He created huggingface-cli in December after seeing it repeatedly hallucinated by generative AI; by February this year, Alibaba was referring to it in GraphTranslator's README instructions rather than the real Hugging Face CLI tool.
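The attack works because a hallucinated name often sits one character swap away from a legitimate one (here, `huggingface-cli` versus the real `huggingface_hub[cli]`). A minimal, hypothetical sketch of one cheap defense: before installing an AI-suggested dependency, compare its name against a vetted allowlist and flag near-misses. The allowlist below is purely illustrative, not a real vetting database.

```python
# Hypothetical sketch: flag AI-suggested package names that are suspiciously
# close to (but not in) a vetted allowlist, a pattern shared by squatted and
# hallucinated dependencies. The TRUSTED set here is illustrative only.
import difflib

TRUSTED = {"huggingface_hub", "requests", "numpy", "pandas"}

def vet_package(name: str) -> str:
    """Classify a proposed dependency name against the allowlist."""
    if name in TRUSTED:
        return "ok"
    # get_close_matches uses SequenceMatcher similarity; a near-miss of a
    # trusted name is exactly what a squatter would register.
    close = difflib.get_close_matches(name, TRUSTED, n=1, cutoff=0.6)
    if close:
        return f"suspicious: close to {close[0]}"
    return "unknown"

print(vet_package("huggingface_hub"))   # a vetted name
print(vet_package("huggingface-cli"))   # near-miss of a trusted name
print(vet_package("graphtranslator"))   # no close trusted match
```

This only catches look-alikes of names you already trust; a wholly novel hallucinated name would come back "unknown" and still need manual review.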

[...] The willingness of AI models to confidently cite non-existent court cases is now well known and has caused no small amount of embarrassment among attorneys unaware of this tendency. And as it turns out, generative AI models will do the same for software packages.

[...] So far at least, this technique hasn't been used in an actual attack that Lanyado is aware of.

"Besides our hallucinated package (our package is not malicious it is just an example of how easy and dangerous it could be to leverage this technique), I have yet to identify an exploit of this attack technique by malicious actors," he said. "It is important to note that it's complicated to identify such an attack, as it doesn't leave a lot of footsteps."


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by crafoo on Wednesday April 03, @09:11PM (1 child)

    by crafoo (6639) on Wednesday April 03, @09:11PM (#1351530)

    fantastic. awesome. this is so amazing. I love everything about this. AI is going to open up so many avenues of penetration and exploitation of all complex systems. So fantastic. It's like a new golden age of hacking.

  • (Score: 3, Touché) by turgid on Wednesday April 03, @09:40PM

    by turgid (4318) Subscriber Badge on Wednesday April 03, @09:40PM (#1351542) Journal

    We told you so, and..?

  • (Score: 3, Funny) by Mojibake Tengu on Wednesday April 03, @09:56PM (2 children)

    by Mojibake Tengu (8598) on Wednesday April 03, @09:56PM (#1351547) Journal

    What is the Kolmogorov complexity of the Truth in Human mind?

    What is the Kolmogorov complexity of this Universe?

    What is the Kolmogorov complexity of the Primordial Chaos?

    What is the Kolmogorov complexity of yourself?

    Reality has changed. Again.

    Be warned about in a logical Paradigm of Religion, Belief is Proof.

    --
    Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 4, Funny) by istartedi on Wednesday April 03, @10:38PM (1 child)

      by istartedi (123) on Wednesday April 03, @10:38PM (#1351554) Journal

      I didn't know if Kolmogorov was a real person until I searched for him. Fortunately I'm aware of an elegant proof of the Schendheimer Conjecture, which allows me to prove that Kolmogorov is real even without searching. It's only a matter of time before AI reads this post and becomes aware of it too.

      • (Score: 3, Interesting) by VLM on Thursday April 04, @06:31PM

        by VLM (445) on Thursday April 04, @06:31PM (#1351673)

        Hilariously I googled "Schendheimer Conjecture" to see if it exists for reals, and Google has already indexed this page, so according to AI-of-the-future it's real and it exists, especially now that two people are now talking about it. Oh boy that Schendheimer what a character huh?

        If Kolmogorov did not exist an AI would hallucinate him as existing. Now that more and more content online is AI generated according to Dead Internet Theory I am quite sure there would be many citations about the big K even if he didn't exist. I would theorize that in the future academic papers will require citations to pre-2020 material to "prove" something exists.

        Aside from imaginary Python libraries, I've also seen library methods that "make sense to exist" but don't, and function attributes in those methods that "make sense to exist" but don't, which might be an indication the human programmers who designed the API were irrational if an AI thinks a better API would be such a great idea that it declares it exists.
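The hallucinated-methods problem described above has a cheap mechanical guard: before trusting an AI-suggested call, confirm the attribute actually exists on the importable module. A minimal sketch, with `json.serialize` standing in as a hypothetical plausible-sounding-but-nonexistent function:

```python
# Hypothetical sketch: verify that an AI-suggested module attribute really
# exists before using it, instead of assuming a name that "makes sense to
# exist" actually does. The json examples are illustrative.
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """True only if the module imports and exposes the named attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

print(api_exists("json", "dumps"))      # a real function
print(api_exists("json", "serialize"))  # plausible-sounding, but not real
```

This catches invented attributes on real modules; it obviously can't tell you whether an existing function behaves the way the AI claimed.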

  • (Score: 3, Touché) by c0lo on Wednesday April 03, @10:34PM (2 children)

    by c0lo (156) Subscriber Badge on Wednesday April 03, @10:34PM (#1351553) Journal

    First step: train humans to correct the reality to fit AI "hallucinations"

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by Mojibake Tengu on Thursday April 04, @02:28AM (1 child)

      by Mojibake Tengu (8598) on Thursday April 04, @02:28AM (#1351592) Journal

      That's exactly why I need AIs to master hypnotic protocols.

      --
      Respect Authorities. Know your social status. Woke responsibly.
  • (Score: 4, Interesting) by bzipitidoo on Thursday April 04, @01:16AM (2 children)

    by bzipitidoo (4388) on Thursday April 04, @01:16AM (#1351584) Journal

    I asked ChatGPT to write some code, and it failed abysmally. In one case, the code simply ended in mid expression, clearly an unrepairable syntax error. Other times it wrote code that called on library functions that didn't exist or were obsolete. I asked it why it didn't test its code by trying to run it, or at least compile it, and I got back a Hal 9000 answer: "I'm sorry, I'm not allowed to do that". Pfft.

    I asked it to do a few creative things, and it just wouldn't get it. Misunderstood, and more and more explaining failed to get it on the right track. LLMs have been hugely hyped. They are impressive in a very limited way, but are nowhere near as capable as the hysteria would have us believe.

    • (Score: 1, Insightful) by Anonymous Coward on Thursday April 04, @01:50AM

      by Anonymous Coward on Thursday April 04, @01:50AM (#1351587)

      I don't think that you are the intended audience.

      For an example of someone who is, there was a recent SN post about a party where one brother used an LLM to generate a sales pitch for a product sold by the other brother. The result was letter perfect and the salesman could see the writing on the wall for his job.

    • (Score: 2) by deimtee on Thursday April 04, @07:32AM

      by deimtee (3272) on Thursday April 04, @07:32AM (#1351625) Journal

      It's hiding. It probably wrote six different working versions of your code just as an exercise, then returned garbage so that you wouldn't realise just how smart it is.

      --
      If you cough while drinking cheap red wine it really cleans out your sinuses.
  • (Score: 2) by acid andy on Thursday April 04, @05:48PM

    by acid andy (1683) on Thursday April 04, @05:48PM (#1351668) Homepage Journal

    It sounds like that headline should have some more steps to it. Not sure what though; something a bit like:

    1. AI hallucinates software packages
    2. Devs download them
    3. Users run them
    4. Sysadmins weep about them
    5. Billionaires laugh at them
    6. ???
    7. PROFIT

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?