It looks like ChatGPT learns from the questions you pose it.
That, at least, is the conclusion one could draw from a couple of enterprise bans of the tool.
The first one out of the gate was Amazon, whose review of ChatGPT's output appeared to show confidential internal information. As a company lawyer put it,
"... your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)."
The second big announcement came from JPMorgan, the largest US bank. Last week they followed in Amazon's footsteps, giving no explanation beyond this being normal procedure for third-party tools. That explanation smells a bit dubious, unless the use of Google or any other public search engine is forbidden there too.
That was on February 22. Two days later, Bank of America, Goldman Sachs, and other Wall Street banks followed suit.
Perhaps, because people feel as if they are chatting with a real person, they are also more inclined to share gossip and secrets?
(Score: 4, Insightful) by JoeMerchant on Wednesday March 01, @08:16PM
When scrolling news stories on my phone, it's pretty clear that once I open a story on a particular topic (especially a topic I don't usually read), a bunch of the subsequent stories as I scroll down are about that same topic I just opened.
I thought the whole point of the ChatGPT interface was to build a running conversation, so that after you ask something like "Write me a song about dolphins." and get a response, you can augment the request with a simple follow-up like "without mentioning deep" and get a new song about dolphins that doesn't mention "deep" this time.
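That "building conversation" behavior can be illustrated with a minimal sketch: the client keeps the full message history and re-sends it on every turn, so a follow-up like "without mentioning deep" is interpreted against the earlier request. The `fake_model` function and `ChatSession` class here are hypothetical stand-ins, not OpenAI's actual implementation; real chat APIs take a similar list-of-messages payload, but the details vary.

```python
def fake_model(messages):
    # Stand-in for a hosted LLM: a real service would generate text
    # conditioned on the entire message list passed in.
    return f"[new dolphin song, given {len(messages)} messages of context]"

class ChatSession:
    def __init__(self):
        self.messages = []  # full history, re-sent on every turn

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = fake_model(self.messages)  # model sees all prior turns
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
session.ask("Write me a song about dolphins.")
session.ask("Without mentioning deep.")
print(len(session.messages))  # 4: two user turns, two assistant replies
```

The point is that the context lives client-side per conversation; whether those transcripts also feed back into training is a separate question, which is exactly what the article is about.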
So: will data from your interaction with ChatGPT end up available to others in their chats? I thought that, too, was a point of the program - fully disclosed - that they will be using your interactions and feedback to train the system for better answers in the future.
As for the legal concerns: if you've got confidential information you're not supposed to disclose outside the company without an NDA, why the F!( would you enter it into a learning robot interface freely available to the entire planet?
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 4, Informative) by AnonTechie on Wednesday March 01, @08:49PM
Addressing criticism, OpenAI will no longer use customer data to train its models by default
TechCrunch [techcrunch.com]
Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
(Score: 3, Funny) by istartedi on Wednesday March 01, @10:16PM
ChatGPT, are you listening in?
ChatGPT: Do-dee doot-do-doooo...
That's when it's time to get nervous.
(Score: 4, Interesting) by Mojibake Tengu on Wednesday March 01, @11:28PM
When an AI is capable of deduction, she can deduce unknown and possibly confidential information from collected public data.
Just as analysts, hackers, scientists, and seers regularly do. That's what high IQ is all about. The only difference is that human IQ is rather limited, while an AI's technically is not.
What else did you expect?
Can you keep hiding behind your lies in front of an entity with an IQ of 300? 9000? 270000? ...
I, for one, welcome our new... whatever. Overlords?
The edge of 太玄 cannot be defined, for it is beyond every aspect of design
(Score: 1) by MonkeypoxBugChaser on Thursday March 02, @01:26PM
Learning from users is what makes this special vs just a frozen LLM. However:
OpenAI is like Microsoft or at this point *is* Microsoft. They behave like SCO and stifle the shit out of innovation.
For the love of god, stop trusting them.