On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.
OpenAI's current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence at a wide range of tasks without task-specific training. Superintelligence goes a step beyond AGI: a hypothetical level of machine intelligence that could dramatically outperform humans at any intellectual task, perhaps to an unfathomable degree.
[...]
Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities—even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days. "If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
[...]
While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[...]
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
Related Stories on Soylent News:
Plan Would Power New Microsoft AI Data Center From Pa.'s Three Mile Island 'Unit 1' Nuclear Reactor - 20240921
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4 - 20230327
John Carmack's 'Different Path' to Artificial General Intelligence - 20230213
(Score: 0) by Anonymous Coward on Monday October 07 2024, @03:59AM
The thing is, Turing machines seem to be an all but universal means of computation. There is no function computable by a quantum computer that cannot also be computed by a suitable Turing machine: the only advantage quantum computers seem to have is that they can apparently perform certain computations much more efficiently than classical machines, and as with many questions in computational complexity, even that has not been formally proven. We know of more efficient quantum algorithms for factoring, unordered search, and (obviously) the simulation of quantum-mechanical systems, but there is no solid proof that equally good or better classical algorithms don't exist.

Nevertheless, there is no magic inherent to quantum systems, so even if the brain does achieve consciousness through entanglement or other quantum-mechanical phenomena that are difficult for classical computers to simulate, simulating it is only a difficult task, not an impossible one. It is also far from clear that the brain makes use of such quantum phenomena at all: the consensus is that mammalian brains are in general much too hot for phenomena like entanglement to have any discernible influence on their operation. Because of the high temperature, any entangled quantum states will decohere on a timescale much too short to affect the brain's behaviour.

These arguments that sentience is impossible for classical computers smack of the old arguments for vitalism that dominated chemistry before Friedrich Wöhler showed organic chemistry could be done without a living creature.
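The "difficult, not impossible" point can be made concrete with the standard state-vector construction (a textbook illustration, not anything from the comment itself): a classical machine can simulate an n-qubit quantum computer exactly, but it needs a 2**n-entry complex vector to do it, which is where the exponential cost comes from.

```python
import math

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector."""
    h = 1 / math.sqrt(2)
    new_state = state[:]
    for i in range(2 ** n):
        if (i >> target) & 1 == 0:
            j = i | (1 << target)      # partner basis state with target bit set
            a, b = state[i], state[j]
            new_state[i] = h * (a + b)
            new_state[j] = h * (a - b)
    return new_state

n = 3
state = [0.0] * (2 ** n)               # 2**n amplitudes: exponential in qubit count
state[0] = 1.0                         # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)

# Hadamards on every qubit yield a uniform superposition: each of the
# 2**n basis states is measured with probability 1 / 2**n.
probs = [abs(a) ** 2 for a in state]
print(probs)
```

Nothing here is uncomputable classically; the catch is only that the state vector doubles with every added qubit, which is exactly the efficiency gap (not a computability gap) the comment describes.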
Nonetheless, the claim that the current LLM-style approaches, for which today's AI companies are asking for energy budgets comparable to those of small countries, will eventually lead to sentient AI has even less evidence going for it. So yeah, can't really argue with your previous statements that AI is just a lot of legerdemain either.