from the at-least-when-it-blabs-your-secrets-it-will-probably-get-them-wrong dept.
Samsung bans use of generative AI tools like ChatGPT after April internal data leak:
A month after internal, sensitive data from Samsung was accidentally leaked to ChatGPT, Samsung is cracking down on usage of the generative AI service. The electronics giant is planning a temporary block on the use of generative AI tools on company-owned devices, covering computers, tablets, and phones, as well as non-company-owned devices running on its internal networks. The ban would cover not just ChatGPT, but also services built on the technology, such as Microsoft's Bing, and competing generative AI services like Bard from Google.
[...] According to a memo from Monday seen by Bloomberg, the restriction would be temporary, lasting until it builds "security measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency." The South Korea-headquartered tech firm is said to be developing its own in-house AI tools for "software development and translation," according to the report.
[...] The tech giant initially allowed employees at its device solutions (DS) division, which manages its semiconductor and display businesses, to use generative AI from March 11. In the aftermath of the data leak, Samsung also asked staff using generative AI tools elsewhere "not to submit any company-related information or personal data," which could disclose its intellectual property, per the memo reviewed by Bloomberg.
One of the issues Samsung noted is that it is difficult to "retrieve and delete" data stored on external servers, and that data transmitted to such AI tools could be disclosed to other users. In an internal Samsung survey conducted in April, about 65% of respondents said using generative AI tools carries a security risk.
(Score: 2) by DannyB on Wednesday May 10, @02:20PM (2 children)
Hypothetical.
Lettuce suppose you take a file, such as an MP3, and express it as hex digits. Now feed those to ChatGPT to learn. Others on the intarweb tubes retrieve these hex codes by asking for them. Very sophisticated evil hacker programmers as young as ten years old could develop illegal tools that convert these hex digits back into binary files. Now you have a way to mass-distribute files, such as MP3s.
It is not well understood exactly where or how the information is stored in the neural net. Can it even be deleted? (or un-trained?)
Next, some much more sophisticated evil hacker would figure out how to make tiny alterations to an MP3 such that the audio sounds the same to a listener. However, these alterations would change the hex digits enough to defeat an upload filter that is based on the file expressed in hex.
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Wednesday May 10, @08:17PM
It would be very easy to defeat. To the extent that ChatGPT is retrainable without overhauling the underlying LLM, OpenAI or Microsoft could block data that is not written in grammatical natural language from becoming feed for the machine.
Then if you wanted to retrieve this supposedly stored data, it would likely produce pieces out of order or hallucinated, rendering the file unusable.
The Web already has many simple ways to illegally distribute files. A Rube Goldberg machine based on ChatGPT is unneeded and unwanted.
(Score: 3, Interesting) by takyon on Thursday May 11, @11:16AM
https://www.tomshardware.com/news/security-researcher-finds-coldplay-lyrics-in-kingston-ssd-firmware [tomshardware.com]
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]