
posted by hubie on Wednesday March 01, @07:58PM   Printer-friendly
from the loose-links-sink-ships-and-all-that dept.

It looks like ChatGPT learns from the questions you pose it.

That, at least, is the conclusion one could draw from a couple of enterprise bans of the tool.

The first one out of the gate was Amazon, whose review of ChatGPT's output appeared to turn up confidential company information. As a company lawyer put it,

"... your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)."

The second big announcement came from JPMorgan, the largest bank in the United States. Last week it followed in Amazon's steps, without offering any explanation apart from this being normal procedure for third-party tools. That explanation smells a bit dubious, unless the use of Google or any other public search engine is forbidden there too.

That was on February 22. Two days later, Bank of America, Goldman Sachs, and other Wall Street banks followed suit.

Maybe, because people have the impression of chatting with a real person, they tend to share more gossip and secrets too?


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by JoeMerchant on Wednesday March 01, @08:16PM

    by JoeMerchant (3937) on Wednesday March 01, @08:16PM (#1293941)

    When scrolling the news stories on my phone, it's pretty clear that when I open a story on a particular topic (particularly a topic I don't usually read) that a bunch of the subsequent stories as I scroll down are then about that same topic I just opened.

    I thought the whole point of the ChatGPT interface was to build a conversation, so when you ask something like "Write me a song about dolphins." and get a response, you can refine the request with a simple follow-up like "without mentioning deep" and get a new song about dolphins that skips that word this time.
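    That "building conversation" behavior comes from the client resending the whole message history on every turn, so a follow-up like "without mentioning deep" can refer back to the earlier request. A minimal sketch of that mechanic (the role/content message shape follows OpenAI's chat API; the helper functions and canned replies here are invented for illustration, and no actual API call is made):

```python
# Sketch: a chat client keeps context by resending the full history each turn.
# The {"role": ..., "content": ...} shape matches OpenAI's chat-completion
# messages format; the helpers and the canned replies are illustrative only.

def make_conversation():
    """Start an empty message history."""
    return []

def add_turn(history, user_text, assistant_text):
    """Record one request/response pair in the running history."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = make_conversation()
add_turn(history, "Write me a song about dolphins.",
         "(a song that mentions the deep)")
# The follow-up is terse, but the model would see the earlier turns too:
add_turn(history, "Without mentioning deep.",
         "(a new dolphin song, no 'deep')")

# On every turn the client sends the *entire* history, which is why a
# bare "without mentioning deep" still makes sense to the model.
print(len(history))  # 4 messages: two user turns, two assistant turns
```

    The privacy angle follows directly: everything in that history leaves your machine on every request, whether or not it ends up in training data.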

    So: will data from your interaction with ChatGPT end up available to others in their chats? I thought that, too, was a point of the program - fully disclosed - that they will be using your interactions and feedback to train the system for better answers in the future.

    As for the legal concerns: if you've got confidential information you're not supposed to disclose outside the company without an NDA, why the F!( would you enter it into a learning robot interface freely available to the entire planet?
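    The mechanical version of that advice is a pre-send filter that refuses prompts which look like they carry secrets. A naive sketch (not any company's actual data-loss-prevention tooling; the marker list and regex are invented for illustration and would miss plenty in practice):

```python
import re

# Naive pre-send filter: refuse prompts with obvious confidential markers.
# The marker list and key/password regex are illustrative only.
CONFIDENTIAL_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY", "NDA")
KEY_PATTERN = re.compile(r"\b(?:api[_-]?key|password)\s*[:=]\s*\S+",
                         re.IGNORECASE)

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt contains obvious confidential markers."""
    upper = prompt.upper()
    if any(marker in upper for marker in CONFIDENTIAL_MARKERS):
        return False
    if KEY_PATTERN.search(prompt):
        return False
    return True

print(safe_to_send("Write me a song about dolphins."))        # True
print(safe_to_send("Summarize this INTERNAL ONLY memo ..."))  # False
print(safe_to_send("debug: api_key = sk-123456"))             # False
```

    Of course, a filter like this only catches labeled secrets; the real fix is policy, which is presumably what the enterprise bans are.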

    --
    Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
  • (Score: 4, Informative) by AnonTechie on Wednesday March 01, @08:49PM

    by AnonTechie (2275) on Wednesday March 01, @08:49PM (#1293950) Journal

    Addressing criticism, OpenAI will no longer use customer data to train its models by default

    As the ChatGPT and Whisper APIs launch this morning, OpenAI is changing the terms of its API developer policy, aiming to address developer — and user — criticism.

    Starting today, OpenAI says that it won’t use any data submitted through its API for “service improvements,” including AI model training, unless a customer or organization opts in. In addition, the company is implementing a 30-day data retention policy for API users with options for stricter retention “depending on user needs,” and simplifying its terms and data ownership to make it clear that users own the input and output of the models.

    Greg Brockman, the president and chairman of OpenAI, asserts that some of these changes aren’t changes necessarily — it’s always been the case that OpenAI API users own input and output data, whether text, images or otherwise. But the emerging legal challenges around generative AI and customer feedback prompted a rewriting of the terms of service, he says.

    TechCrunch [techcrunch.com]

    --
    Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
  • (Score: 3, Funny) by istartedi on Wednesday March 01, @10:16PM

    by istartedi (123) on Wednesday March 01, @10:16PM (#1293964) Journal

    ChatGPT, are you listening in?
    ChatGPT: Do-dee doot-do-doooo...

    That's when it's time to get nervous.

  • (Score: 4, Interesting) by Mojibake Tengu on Wednesday March 01, @11:28PM

    by Mojibake Tengu (8598) on Wednesday March 01, @11:28PM (#1293975) Journal

    When an AI is capable of deduction, she can deduce some unknown and possibly confidential information out of collected known public data.

    Just like analysts, hackers, scientists, or seers collectively and regularly do. That's what high IQ is all about. The only difference is that human IQ is rather limited, while an AI's technically is not.

    What else did you expect?

    Can you keep hiding behind your lies in front of an entity at IQ 300? 9000? 270000? ...

    I, for one, welcome our new... whatever. Overlords?

    --
    The edge of 太玄 cannot be defined, for it is beyond every aspect of design
  • (Score: 1) by MonkeypoxBugChaser on Thursday March 02, @01:26PM

    by MonkeypoxBugChaser (17904) on Thursday March 02, @01:26PM (#1294062) Homepage Journal

    Learning from users is what makes this special vs just a frozen LLM. However:

    OpenAI is like Microsoft or at this point *is* Microsoft. They behave like SCO and stifle the shit out of innovation.

    For the love of god, stop trusting them.
