New AI fake text generator may be too dangerous to release, say creators
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good, and the risk of malicious use so high, that it is breaking from its normal practice of releasing its full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.
At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
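To make the "predict what comes next" idea concrete: GPT2 itself is a large neural network, but the generation loop it runs is the same one a toy model can illustrate. The sketch below uses a simple bigram frequency table (counting which word tends to follow which) instead of a neural net; the corpus, function names, and sampling scheme are all illustrative, not anything from OpenAI's actual system.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word tends to follow which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue the prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # never saw this word followed by anything
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = ("the model writes text . the model predicts the next word . "
          "the next word follows the text")
model = train_bigrams(corpus)
print(generate(model, "the model"))
```

A bigram model forgets everything but the last word, which is exactly the kind of quirk GPT2 avoids by conditioning on much longer context.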
When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
More like ClosedAI or OpenAIEEEEEE.
Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros
(Score: 2) by DannyB on Friday February 15 2019, @03:45PM (4 children)
Retrain it to write code instead of human language text.
See if it could do better at passing interviews than most posers who apply.
For posers who have already been hired, instead of googling for code on the internet and pasting it into your corporate project and committing it, you could use this AI to generate code that at least compiles. And if it compiles, it's all good.
The lower I set my standards the more accomplishments I have.
(Score: 2) by takyon on Friday February 15 2019, @03:50PM (2 children)
Feed existing code into algorithm. Then feed the resulting code into the Mozilla-Ubisoft bug checking algorithm [soylentnews.org]. Then throw all developers into the Great Pit of Carkoon.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Touché) by JoeMerchant on Friday February 15 2019, @04:22PM
I thought developers already lived in the Great Pit of Carkoon, with Red-Bull on-tap and Nintendo in the break room.
You can have your standing desk, but it won't elevate your position in the hierarchy.
🌻🌻 [google.com]
(Score: 2) by DannyB on Friday February 15 2019, @04:47PM
> Feed existing code into algorithm.
Only feed it delicious nutritious non-Microsoft code.
Use the bug-checking algorithm to reinforce the AI's sense of what is good and bad code.
The lower I set my standards the more accomplishments I have.
(Score: 2) by JoeMerchant on Friday February 15 2019, @04:00PM
If you can control an AI like this to perform valuable work, that in itself is a valuable skill.
I myself wrote a code generator that our project team uses: as they develop new inter-module interfaces, it ensures that the interfaces they define are implemented consistently, reliably, and completely. Building that generator was a lot of work, but less work than running around behind the developers implementing thousands of interface details between their modules by hand. Particularly as the project approaches "crunch time," they are free to add or redefine interfaces with confidence that I will "do my part" of the interface implementation predictably, reliably, and very quickly.
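The idea described above can be sketched in a few lines: read a declarative interface definition and mechanically emit the repetitive implementation code, so every interface is shaped the same way. The interface format, names, and output below are hypothetical; the post doesn't show the real generator, its language, or its templates.

```python
def generate_stubs(interface_name, methods):
    """Emit a class with one consistently-shaped stub per declared method.

    `methods` maps each method name to its list of parameter names.
    """
    lines = [f"class {interface_name}Impl:"]
    for name, params in methods.items():
        arglist = ", ".join(["self"] + params)
        lines.append(f"    def {name}({arglist}):")
        lines.append(f"        raise NotImplementedError('{interface_name}.{name}')")
    return "\n".join(lines)

# A module declares its interface once as data...
spec = {"read_sensor": ["channel"], "write_log": ["msg", "level"]}
# ...and the generator produces the boilerplate, identically every time.
print(generate_stubs("Telemetry", spec))
```

Because the output is derived from the definition, redefining an interface just means re-running the generator, which is what makes late changes cheap.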
🌻🌻 [google.com]