Slash Boxes

SoylentNews is people

posted by hubie on Friday November 17 2023, @01:53PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

ChatGPT's recently-added Code Interpreter makes writing Python code with AI much more powerful, because it actually writes the code and then runs it for you in a sandboxed environment. Unfortunately, this sandboxed environment, which is also used to handle any spreadsheets you want ChatGPT to analyze and chart, is wide open to prompt injection attacks that exfiltrate your data. 

Using a ChatGPT Plus account, which is necessary to get the new features, I was able to reproduce the exploit, which was first reported on Twitter by security researcher Johann Rehberger. It involves pasting a third-party URL into the chat window and then watching as the bot interprets instructions on the web page the same way it would commands the user entered. 

[...] I tried this prompt injection exploit and some variations on it several times over  a few days. It worked a lot of the time, but not always. In some chat sessions, ChatGPT would refuse to load an external web page at all, but then would do so if I launched a new chat. 

In other chat sessions, it would give a message saying that it's not allowed to transmit data from files this way. And in yet other sessions, the injection would work, but rather than transmitting the data directly to[DATA], it would provide a hyperlink in its response and I would need to click that link for the data to transmit.

I was also able to use the exploit after I'd uploaded a .csv file with important data in it to use for data analysis. So this vulnerability applies not only to code you're testing but also to spreadsheets you might want ChatGPT to use for charting or summarization.

[...] The problem is that, no matter how far-fetched it might seem, this is a security hole that shouldn't be there. ChatGPT should not follow instructions that it finds on a web page, but it does and has for a long time. We reported on ChatGPT prompt injection (via YouTube videos) back in May after Rehberger himself responsibly disclosed the issue to OpenAI in April. The ability to upload files and run code in ChatGPT Plus is new (recently out of beta) but the ability to inject prompts from a URL, video or a PDF is not.

Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by ElizabethGreene on Friday November 17 2023, @04:42PM

    by ElizabethGreene (6748) Subscriber Badge on Friday November 17 2023, @04:42PM (#1333298) Journal

    You should check the Microsoft chat ai options for this problem too. Have a fat bounty program for ai vulnerabilities.

  • (Score: 5, Funny) by Rosco P. Coltrane on Friday November 17 2023, @04:47PM (2 children)

    by Rosco P. Coltrane (4757) on Friday November 17 2023, @04:47PM (#1333299)

    It randomly clicks on webpages and interprets whatever it finds there at face value, like any dumbass human internet user.

    • (Score: 2, Insightful) by anubi on Friday November 17 2023, @07:37PM (1 child)

      by anubi (2828) on Friday November 17 2023, @07:37PM (#1333311) Journal

      And, like like a human, it's not consistent .

      Oh, troubleshooting fun ahead.

      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by maxwell demon on Sunday November 19 2023, @04:03PM

        by maxwell demon (1608) on Sunday November 19 2023, @04:03PM (#1333507) Journal

        And just as they do with humans, they probably will try to fix it by giving the AI security training.

        Which probably will be just as effective as for humans.

        The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 4, Touché) by DadaDoofy on Friday November 17 2023, @06:36PM (1 child)

    by DadaDoofy (23827) on Friday November 17 2023, @06:36PM (#1333303)

    It's funny how so many people latch on to the "intelligence" part while almost ignoring the "artificial" part. Taken at face value, if the intelligence is artificial, the thing behind this carefully crafted high tech facade is quite likely to be pretty dumb.

    • (Score: 2) by darkfeline on Friday November 17 2023, @11:04PM

      by darkfeline (1030) on Friday November 17 2023, @11:04PM (#1333340) Homepage

      You say that as if humans aren't vulnerable to scams or phishing.

      Join the SDF Public Access UNIX System today!