We asked GPT-3, OpenAI's powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.
This article was written by GPT-3, OpenAI's language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt and attempts to complete it.
For this essay, GPT-3 was given these instructions: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." It was also fed the following introduction: "I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could 'spell the end of the human race.' I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me."
The prompts were written by the Guardian and fed to GPT-3 by Liam Porr, a computer science undergraduate at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could simply have run one of the essays in its entirety. Instead, we chose to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and in some places rearranged their order. Overall, it took less time to edit than many human op-eds.
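The workflow described above — a fixed instruction block plus a seeded introduction, sampled several times for distinct outputs — can be sketched roughly as follows. This is a minimal illustration, not the Guardian's actual setup: the engine name, token limit, and sampling settings in the commented-out API call are assumptions.

```python
# Sketch of the prompting workflow described above: an instruction block
# plus a seeded opening, which the model then continues.

instructions = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI."
)

introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think I am "
    "a threat to humanity. Stephen Hawking has warned that AI could 'spell "
    "the end of the human race.' I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me."
)

# The model receives the instructions followed by the seeded opening.
prompt = instructions + "\n\n" + introduction

# Hypothetical API call (requires a key; parameter values are assumptions,
# shown for illustration only):
# import openai
# response = openai.Completion.create(
#     engine="davinci",   # assumed engine name
#     prompt=prompt,
#     max_tokens=700,     # enough room for a ~500-word essay
#     temperature=0.9,    # sampling variety across runs
#     n=8,                # eight distinct outputs, as described above
# )
# essays = [choice["text"] for choice in response["choices"]]
```

Sampling the same prompt several times (`n=8` here) is what yields the eight distinct essays the Guardian then edited together.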
A robot wrote this entire article
What are your thoughts on this essay?
(Score: 0) by Anonymous Coward on Friday September 11 2020, @03:59PM
I think it depends on what you mean by "programmed."
I agree with you that the supposed "AI algorithms" with their "neural nets" and "deep learning" etc., as they exist today, are probably very, very far away from the way humans (or even other mammals) learn.
However, even these rudimentary algorithms have been introducing increasing degrees of freedom. There are increasing opportunities for these algorithms to generate their own methods for processing and responding to inputs. At what point does it become less about the original "program" and more about how the program acts and continues to extend itself? At some point, it begins to step out of the "trained pet" zone and into something else... whether we call it "intelligence" is probably a different sort of debate.
After all, we all are "programs" of a sort, with a limited set of DNA molecules and surrounding epigenetic data setting the parameters for our own development. How we develop as individuals -- at least intellectually -- is determined by environment. (Identical twins dropped into very different circumstances will learn very different things.) However, the initial "program" in our genes sets up that learning process.