On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.
OpenAI's current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence in performing many tasks without the need for task-specific training. By contrast, superintelligence surpasses AGI: a hypothetical level of machine intelligence that could dramatically outperform humans at any intellectual task, perhaps to an unfathomable degree.
[...]
Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities—even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days. "If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
[...]
While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[...]
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
Related Stories on Soylent News:
Plan Would Power New Microsoft AI Data Center From Pa.'s Three Mile Island 'Unit 1' Nuclear Reactor - 20240921
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4 - 20230327
John Carmack's 'Different Path' to Artificial General Intelligence - 20230213
Related Stories
Carmack sees a 60% chance of achieving initial success in AGI by 2030:
[Ed note: This interview is chopped way down to fit here. Lots of good stuff in the full interview. --hubie]
Inside his multimillion-dollar manse on Highland Park's Beverly Drive, Carmack, 52, is working to achieve AGI through his startup Keen Technologies, which raised $20 million in a financing round in August from investors including Austin-based Capital Factory.
This is the "fourth major phase" of his career, Carmack says, following stints in computers and pioneering video games with Mesquite's id Software (founded in 1991), suborbital space rocketry at Mesquite-based Armadillo Aerospace (2000-2013), and virtual reality with Oculus VR, which Facebook (now Meta) acquired for $2 billion in 2014. Carmack stepped away from Oculus' CTO role in late 2019 to become consulting CTO for the VR venture, proclaiming his intention to focus on AGI. He left Meta in December to concentrate full-time on Keen.
Many are predicting stupendous, earth-shattering things will result from this, right?
I'm trying not to use the kind of hyperbole of really grand pronouncements, because I am a nuts-and-bolts person. Even with the rocketry stuff, I wasn't talking about colonizing Mars, I was talking about which bolts I'm using to hold things together. So, I don't want to do a TED talk going on and on about all the things that might be possible with plausibly cost-effective artificial general intelligence.
[...] You'll find people who can wax rhapsodic about the singularity and how everything is going to change with AGI. But if I just look at it and say, if 10 years from now, we have 'universal remote employees' that are artificial general intelligences, run on clouds, and people can just dial up and say, 'I want five Franks today and 10 Amys, and we're going to deploy them on these jobs,' and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that's the most prosaic, mundane, most banal use of something like this.
If all we're doing is making more human-level capital and applying it to the things that we're already doing today, while you could say, 'I want to make a movie or a comic book or something like that, give me the team that I need to go do that,' and then run it on the cloud—that's kind of my vision for it.
Microsoft Research has issued a 154-page report entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4:
Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
Zvi Mowshowitz wrote a post about this article:
[...] Their method seems to largely be 'look at all these tasks GPT-4 did well on.'
I am not sure why they are so impressed by the particular tasks they start with. The first was 'prove there are an infinite number of primes in the form of a rhyming poem.' That seems like a clear case where the proof is very much in the training data many times, so you're asking it to translate text into a rhyming poem, which is easy for it - for a challenge, try to get it to write a poem that doesn't rhyme.
[...] As I understand it, failure to properly deal with negations is a common issue, so reversals being a problem also makes sense. I love the example on page 50, where GPT-4 actively calls out as an error that a reverse function is reversed.
[...] in 6.1, GPT-4 is then shown to have theory of mind, be able to process non-trivial human interactions, and strategize about how to convince people to get the Covid-19 vaccine far better than our government and public health authorities handled things. The rank order is clearly GPT-4's answer is very good, ChatGPT's answer is not bad, and the actual answers we used were terrible.
[...] Does this all add up to a proto-AGI? Is it actually intelligent? Does it show 'sparks' of general intelligence, as the paper words it?
Geoffrey Hinton, a computer scientist who has been called "the godfather of artificial intelligence", says it is "not inconceivable" that AI may develop to the point where it poses a threat to humanity:
The computer scientist sat down with CBS News this week to discuss his predictions for the advancement of AI. He compared the invention of AI to that of electricity or the wheel.
Hinton, who works at Google and the University of Toronto, said that the development of general purpose AI is progressing faster than people may imagine. General purpose AI is artificial intelligence with several intended and unintended purposes, including speech recognition, answering questions and translation.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less," Hinton predicted. Asked specifically the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."
[...] Hinton said it was plausible for computers to eventually gain the ability to create ideas to improve themselves.
Also at CBS News. Originally spotted on The Eponymous Pickle.
Previously: OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
One of the two nuclear reactors at Three Mile Island, the Pennsylvania site of a notorious partial meltdown 45 years ago, could be brought back online in the coming years to provide power to a new Microsoft artificial intelligence data center, officials said Friday.
Constellation Energy, the Baltimore-based provider that spun off from Exelon two years ago, has signed a 20-year power purchase agreement with the tech giant to draw electricity generated at the plant along the Susquehanna River outside Harrisburg, about 85 miles west of Philadelphia.
Pending regulatory approvals, the newly created Crane Clean Energy Center would become the first nuclear plant in the United States to return to service after being shut down.
The $1.6 billion project will restart Three Mile Island Unit 1, which stopped generating power five years ago because it could not compete with cheaper energy produced by Pennsylvania's natural gas industry. The reactor can be run independently of Unit 2, where the plant's partial meltdown on March 28, 1979, resulted in the worst nuclear accident in U.S. history. That reactor is still being decommissioned by its owner, Energy Solutions.
"Before it was prematurely shuttered due to poor economics, this plant was among the safest and most reliable nuclear plants on the grid, and we look forward to bringing it back with a new name and a renewed mission to serve as an economic engine for Pennsylvania," Joe Dominguez, president and CEO of Constellation, said in a statement.
[...] In the race to develop artificial intelligence applications, tech companies are scrambling to build data centers, which require enormous amounts of electricity to operate. Such facilities are forecast to make up a growing share of the nation's electricity use in the years to come, prompting companies to look at tapping into existing infrastructure to help meet their needs.
Nuclear power is being touted as a cost-effective solution for these data centers that also limits reliance on carbon-producing power sources. Building and directly connecting data centers to nuclear plants is known as co-location, a strategy that industry leaders favor because it's cheaper and faster to do. Proponents also claim it reduces stress on the transmission grids.
During the years the 837-megawatt unit operated at Three Mile Island, the reactor powered about 830,000 homes and businesses. Constellation officials did not say how much of the reactor's power-producing capacity would be dedicated to powering Microsoft's AI data center, but it's not uncommon for such facilities to have energy demands of 1,000 megawatts – or 1 gigawatt.
An economic impact study commissioned by the Pennsylvania Building & Construction Trades Council estimates the restart of Three Mile Island would create 3,400 jobs directly and indirectly related to the plant and generate about $3 billion in state and federal tax revenue.
(Score: 3, Interesting) by krishnoid on Sunday September 29, @10:50PM (4 children)
And if we do build enough, we'll have the option of fighting with [youtu.be]. Both good choices.
(Score: 4, Interesting) by Unixnut on Sunday September 29, @11:27PM (2 children)
I do suspect that if AGI reaches the point where it can be made into an automated killing machine (*), then rather than a single AGI "going rogue" and deciding to wipe out humanity, it's more likely that each major rich nation in the world will have its own AI, configured to destroy the others in a conflict.
It would be ironic if after years of science fiction narcissistically telling us that AI would want to kill or enslave us all, what would actually happen is that our extinction event ends up being just collateral damage when large robotic armies work to destroy each other.
Beyond that there is no real way to stop those with money and power from having the best AI. As a technology it is very capital intensive and very centralised by design. You have to build huge GPU clusters with fast interconnects, feed those power-hungry GPUs, and have entire teams tasked with sanitising and structuring the input data so that the model can be correctly trained to your needs. So those with the most money/power will naturally end up with the best models and most powerful AI systems.
(*) If I am honest, warfare is not only the most realistic future direction, it is also likely to be the first major application for AGI. You don't need a superintelligent AGI for robot soldiers. In fact you don't want it too smart: smart enough to obey orders and understand instructions, but not smart enough to understand the implications of those orders. I guess an approximate intelligence level of an 8-10 year old would suffice.
(Score: 2) by krishnoid on Sunday September 29, @11:35PM
I can't help but feel that AI destroying humanity will be less a matter of kill all humans [youtu.be] and more "Wait, I think I stepped on something."
(Score: 2) by krishnoid on Monday September 30, @01:03AM
If you put it that way, an AGI that can graduate from boot camp would be an unambiguous milestone.
(Score: 2) by JoeMerchant on Monday September 30, @12:38AM
The infrastructure is beach sand cheap, there's nothing inherently rare in the process, the main limiter is power (thus the reactivation of Three Mile Island...)
Meanwhile: https://futurism.com/the-byte/google-paid-billion-single-ai-researcher-back [futurism.com]
It would seem that figuring certain aspects out (or finding them like a blind squirrel does an acorn), is still quite valuable in the race to some arbitrary milestone that the business users are treating like the Holy Grail.
🌻🌻 [google.com]
(Score: 4, Insightful) by corey on Sunday September 29, @11:11PM (13 children)
Oh yay. Sounds like a world I want to live in.
Nothing I can do to stop these tech visionary psychopaths charging forward with this stuff.
(Score: 5, Insightful) by corey on Sunday September 29, @11:22PM (12 children)
Ah, forgot to say, I love it (not) when these guys gaslight about people's jobs of the past. Maybe the lamplighters were happy? Maybe they loved their jobs and took pride in them. Maybe more than in the probably disconnected, isolated jobs of today he's referring to. And why is it worse anyway? What are humans here to actually do? Why do we need to "progress" with AI?
We're not here to do anything or achieve anything. Existence is just that. We're just ants walking around, doing stuff. All there really is, is for people to be happy. I'm not sure people around the world are collectively happier now than they were when there were lamplighters. And that brings me to my other point. Sam and his "visionary" AI knuckleheads in Silicon Valley endlessly want more compute. That means energy and rare earths. And that means more pollution, more waste, and suffering for those mining it. Progress. Superintelligent AI, whoopee, but the biosphere is dying from greenhouse gases as a result. Well, I bet if we asked the magic super AI how to fix global warming, it would say get rid of the humans.
(Score: 4, Insightful) by Unixnut on Sunday September 29, @11:38PM (4 children)
It is funny to think how much they were screeching about the resources used in mining Bitcoin. I remember all the graphs showing how many countries worth of energy the grid consumed in order to keep the blockchain secure, so much whining about the destruction of the environment, etc...
Yet here we have a technology that will be just as power hungry, if not more so. We hear barely a peep about its carbon footprint, about all the natural space ruined to fit the massive concrete datacentres housing it, nor about all the electricity required to run it 24/7 while it trains ML models. It could well eclipse the cost of crypto mining, yet you barely hear anything about it.
I guess the powers that be think AI will be useful to them, probably as a tool of mass surveillance and control, so suddenly the environment no longer matters. Proof that they trot out the environment card when they don't like something (usually because it reduces their power) in order to discredit it.
(Score: 5, Interesting) by stormwyrm on Monday September 30, @04:13AM (3 children)
Numquam ponenda est pluralitas sine necessitate.
(Score: 1) by khallow on Monday September 30, @12:29PM (2 children)
(Score: 0) by Anonymous Coward on Tuesday October 01, @03:46AM (1 child)
(Score: 1) by khallow on Tuesday October 01, @08:32AM
It's not my optimism. My point here is that even such relatively high energy consumption can be rather easily justified by what you do with it. Here, it doesn't make sense to complain merely because it's more energy consumption than the country of New Zealand.
(Score: 4, Insightful) by JoeMerchant on Monday September 30, @01:27AM
>What are humans here to actually do?
I'll go with "Buddhism sees this desire to see others become happy as the highest, most noble aspect of the human heart." https://buddhability.org/purpose/caring-for-others-is-caring-for-ourselves/ [buddhability.org]
Tenzin Gyatso, the 14th Dalai Lama, often extended this sentiment pragmatically: "at least try not to hurt others too much along your path."
Inasmuch as AI can relieve cube farm dwellers of their Kafkaesque servitude, that is a good thing, as long as they aren't being significantly hurt in the transition to post AI society.
🌻🌻 [google.com]
(Score: 3, Interesting) by RS3 on Monday September 30, @02:33AM (2 children)
Made me think of something I heard on the news tonight: the looming dockworkers' strike in the US- one of their demands is "a ban on automated cranes, gates and trucks." Not sure what really defines "AI" but the automation they're fighting isn't far off, and maybe would use some "AI" in controls.
I see the actual problem in a different light. More automation / AI means less human work hours for the same product / service output. But in my ideal world, maybe we could all work fewer hours and everything would even out.
A couple of years ago I was doing some gig work (electrical contracting) and another guy on the jobsite was from Germany. Rightly so he was extolling the wonders and virtues of Germany. So I asked him, if Germany is so great, why are you in the US? He said: "they won't let me work as many hours as I want to".
I haven't done any more research, but maybe they understand economics better, and understand that if someone wants to work 100 hours a week, 2 or 3 other people might have no work. Yes, aforementioned guy was a bit crazy and didn't seem to need sleep. He would work well over 100 hours a week, in addition to owning several rental properties he cared for, and was renovating another.
(Score: 1) by khallow on Monday September 30, @12:53PM
Human work is not a conserved quantity. It's very easy for that 100 hour per week guy to generate jobs for several other people. The economically ignorant call them "bullshit jobs" [soylentnews.org], but it's paying work just the same.
(Score: 2) by krishnoid on Wednesday October 02, @07:06AM
I guess we've seen it coming for a couple decades [youtu.be] at this point.
(Score: 1) by khallow on Monday September 30, @03:24AM
Keep in mind that reality "gaslit" those jobs too. That's in large part why those jobs don't exist any more.
(Score: 1) by khallow on Monday September 30, @03:25AM
Keep in mind that reality "gaslit" those jobs too. That's in large part why those jobs don't exist any more.
Indeed. Maybe the greater suffering of the past made us happier? Sounds like a good experimental project for a sadistic AI to try.
(Score: 1) by pTamok on Monday September 30, @10:00AM
In the absence of any other constraints, it seems like an efficient solution*.
*If you mean 'fix' as in 'repair'. On the other hand, if you mean 'fix' as in 'fix in place'/'make permanent' (e.g. a 'fixed grin', or 'fixing a photographic negative') then getting rid of humans would not make global warming permanent. I'm not sure which meaning an LLM would use.
(Score: 3, Informative) by pTamok on Sunday September 29, @11:45PM (3 children)
...were roughly 6,000 English.
6,000 thousand days is a little over 16,400 years.
He might be right.
(Score: 3, Interesting) by julian on Monday September 30, @03:51AM (2 children)
It's ambiguous. In Modern American English I would interpret it to mean between 3 and 11 thousand days. My rationale is that if it were 1,000 you'd just say "one thousand days." If it were a couple (2) thousand days, you'd say "a couple thousand days." A few is definitely more than a couple, and definitely less than a dozen. But there's a gray area with other words we use, like "several" and even "many."
So clearly we will definitely likely possibly get superintelligent AI within 3,000 to 11,000 days.
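Taken literally, that guessed range converts into calendar years straightforwardly (a throwaway sketch; the 3,000–11,000-day bounds are just the reading above, not anything Altman committed to):

```python
# Convert a "few thousand days" guess into calendar years.
DAYS_PER_YEAR = 365.25  # average year length, including leap days

def days_to_years(days: float) -> float:
    """Convert a count of days to fractional years."""
    return days / DAYS_PER_YEAR

low, high = 3_000, 11_000
print(f"{days_to_years(low):.1f} to {days_to_years(high):.1f} years")
# prints: 8.2 to 30.1 years
```

In other words, "a few thousand days" spans everything from under a decade to a full human generation.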
(Score: 1) by pTamok on Monday September 30, @09:52AM
I was being somewhat tongue-in-cheek.
'Few' is a comparator as well as an absolute. I'd agree that 'few' in absolute terms means more than 'a couple' but less than 'some', but there is overlap, depending on context. But, when used as a comparator, it means that one number is considerably less than another - so the 'few' at Thermopylae [wikipedia.org] were between 1,000 and 2,000 men compared to the Persian army of over 100,000, and 'the Few [wikipedia.org]' in the context of the RAF pilots of the Second World War and the aerial defence of the UK in summer/autumn 1940 [wikipedia.org] compared to the whole UK military. Similarly, if you say 'few people have only one leg', it's a comparator with the number of people with other numbers of legs - the absolute number of one-legged people is quite large, and not just a bit more than 'a couple'. The number of people who have had a lower limb amputated in the UK population is somewhere between 5 and 25 per 100,000 [bmj.com], so for the current UK population of about 67 million, that gives a lower bound of about 3,350 people (that number can be criticised in many ways, not least because some amputations could be part of double amputations). 3,350 is not 'a few', but is 'few' compared to the UK population.
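The lower bound in that back-of-the-envelope estimate checks out with one line of arithmetic (a quick sketch using the 5-per-100,000 rate and 67-million population quoted above):

```python
# Lower bound on UK lower-limb amputees:
# a rate of 5 per 100,000 applied to a population of 67 million.
population = 67_000_000
rate_per_100_000 = 5
lower_bound = population * rate_per_100_000 / 100_000
print(int(lower_bound))  # prints: 3350
```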
(Score: 2) by HiThere on Monday September 30, @02:01PM
So it fits in with my projection of 2035 plus or minus 5 years. Which I've held for probably a couple of decades now.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 1, Insightful) by Anonymous Coward on Monday September 30, @01:28AM (2 children)
Kurzweil's passed the crack pipe on, I see - this time with Bill Gates and "Real Money" involved.
See the Netflix infomercial "What's Next? The Future with Bill Gates" if you want to be REALLY scared.
(Score: 0) by Anonymous Coward on Monday September 30, @05:31AM
Stay tuned for the next celebrity opinion piece. I SAID STAY TUNED!!
(Score: 2) by ikanreed on Monday September 30, @10:46AM
No "passed the crack pipe" here; OpenAI's upper echelons are all staffed by his direct acolytes. These people have been full-on believers since day one.
And like Kurzweil, they just think "more is magic" and don't actually understand or care about the (really quite slow and difficult) research process that delivered them the LLMs they got rich on. I won't say "AGI" is impossible in 10 years, but it won't be built by adding more cores and training data to ChatGPT.
(Score: 2) by Rosco P. Coltrane on Monday September 30, @03:19AM
I worry about surveillance-based, Big-Tech-backed superintelligence.
(Score: 3, Insightful) by stormwyrm on Monday September 30, @03:55AM (1 child)
What passes for AI these days seems to struggle to get even simple tasks right [elpais.com], and if you put more complex tasks to it, it becomes very difficult to detect when it makes mistakes. For certain applications, e.g. the detection of patterns in noisy data (like facial/object recognition), this kind of error is tolerable since I believe there are ways to quantify it, but this level of error seems inherent to the currently favoured LLM architecture and will probably not go away unless some substantial change is made to the architecture of these systems. Building 5 gigawatt data centres [arstechnica.com] to run ever-larger LLM models that make the same kinds of basic mistakes sounds like total insanity.
Numquam ponenda est pluralitas sine necessitate.
(Score: 2) by HiThere on Monday September 30, @02:06PM
Don't confuse AI with AGI. We do not yet have AGI. AI we've got. Each version only works in a narrow domain.
There are arguments against this position, but they all depend on non-quantifiable definitions of intelligence. (Or, if you prefer, "non-operational".) I'll agree that the most common definitions of intelligence are non-quantifiable, but arguing about them is like arguing about theology...pretty much useless.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 4, Interesting) by Rosco P. Coltrane on Monday September 30, @04:54AM (3 children)
A year is 365 days, so a few thousand days is somewhere between roughly eight and thirty years away. Or in other words, even in regular computing terms, several generations away - and probably more for a fledgling technology like AI.
Does Altman get paid to spew out meaningless tripe like that?
(Score: 3, Touché) by Anonymous Coward on Monday September 30, @05:40AM
Why, yes - yes he does.
(Score: 2) by Freeman on Monday September 30, @01:35PM
Is it "a few thousand days" due to limitations on current computational power? Could that be massively reduced with technical innovations over the time period?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0) by Anonymous Coward on Tuesday October 01, @03:59AM
You could say 7300 days falls into "a few thousand days". It's also twenty years. Sounds a lot like the old promises behind controlled nuclear fusion power, except those guys actually had a sound theoretical basis for why their approaches could work, but the lack of proper investment ensured that it was always 7300 days or so away. Altman doesn't even have that.
(Score: 2) by looorg on Monday September 30, @07:40AM
So this is after they have started to pull in more money, monetizing everything and the CEO is getting 7% of the company as a reward? All about the AI$.
https://fortune.com/2024/09/27/sam-altman-openai-equity-stake/ [fortune.com]
So it may or may not be true.
(Score: 2) by VLM on Monday September 30, @03:28PM (1 child)
Still waiting for my superhighway, the information superhighway. I was promised that would arrive shortly and fix everything as long as no one looks too closely at the financial statements, and that turned out just great.
AI will be a tool only for smart people, of which the rich are a pretty small fraction. It's just another IT / productivity tool, and like every other tool ever made, most of the population will try to use a hammer as a screwdriver at best and as a doorstop at worst.
Super-Clippy is not going to help the cognitive bottom 90%, as usual.
This goes back a ways. Consider the self-improvement and education abilities Gutenberg's printing press gives to the 1 in 5 Americans who are functionally illiterate.
(Score: 2) by hendrikboom on Tuesday October 01, @02:09AM
The internet did not turn out to be an information superhighway.
It's an ocean. And like oceans of water, it has waves, tides, storms, pleasure boats, cargo carriers, and pirates.
(Score: 2) by VLM on Monday September 30, @03:40PM
The future belongs to whoever shows up.
Let's say his quote is correct. That would imply the birth rate in Amish communities is vastly lower than the birth rate of hipster urbanites. (checks the numbers...) Hmm, I think I have to disagree.
It's like claiming drive-thrus will eliminate waitress employment. Well, I'm sure there's an impact, but humans like getting their food from hot young human females (and variations etc).
Sure in some boring dollars and cents fashion the most economical way to logistically distribute beer is likely Amazon drone delivery in the long run. I don't see bars going out of business any time soon, LOL.
I might not feel too strong of a pull to Amish life, but if the alternative gets worse... sure.
Also, as time goes on I could totally see a 1950s DieselPunk aesthetic creating some kind of Neo-Amish community. No electronics beyond vacuum tubes, no electronic entertainment beyond B+W video broadcasts, etc. Frankly I'd visit that as a theme resort. A giant dome Faraday cage containing a functional civilization, surrounded by our non-functional civilization. They'd have to be pretty careful whom they let in, LOL.
(Score: 2) by mcgrew on Tuesday October 01, @05:12PM (1 child)
AI is magic. Not Harry Potter magic, David Copperfield magic. I say that as someone who was a practicing magician as a child, and someone who studied computers down to the schematic wiring diagrams (I highly recommend The TTL Cookbook) and have programmed in assembly. The same magic movie makers use. AI is simply huge databases running code.
No Turing machine will ever be sentient. Quantum computing? I'm ignorant and have no idea, maybe in a few hundred years.
Explanation here. [soylentnews.org]
That's not to say that huge databases and clever programming can't be extremely dangerous. The more powerful the tool, the greater the chances of its misuse.
Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
(Score: 0) by Anonymous Coward on Wednesday October 02, @04:53AM
Isn't it possible, however, that this is a case of the fish being the last to discover water? Magic, and Turing machines... Turing's initial conceptual idea of what is now called a 'Turing machine' (https://en.wikipedia.org/wiki/Turing_machine [wikipedia.org]) is exactly what DNA and a eukaryotic host are: a Turing machine.
I think I can grasp the sentiment of your argument: a much more grounded, skeptical, conceptualization of what AGI would be, would require, etc.; and how, we are not there. That the Teddy Bear isn't alive/sentient, it's just that, with enough imagination, or, ignorance, one could believe it to be sentient.
However, forgetting the magic, it seems clear that, a single Eukaryotic Cell, is, in fact, a Turing Machine, by definition: and behaves in ways contrary to that of Billiard Balls. Then there are collective organizations of Eukaryotic Cells, such as plants, animals, fungi. Of these organizations of Cells, some produce humans, which have brains, which, in their own right, seem to not be so much magic, but, quite mystical.
So in the case of the fish being the last to discover the water, perhaps it's a case of mistaken identity as well: we tend to focus on the chicken rather than the egg: the DNA/hardware/code, rather than the requisite coupling with that which is to be acted upon, and which acts upon those instructions.
Yes, it's unlikely a toaster in the year 2042 will be something worth emotional investment/attachment; yet, despite that, there are toasters from the 1950s that adults still cling to for their sentimental value. There are aspects of reality that go unexplained, not in some magical sense due to ignorance, but in a sort of mystical sense.
Magic is clever. Magic can delight, thrill, deceive, etc.. What is mystical, however, is a bit different. For me, I feel as though, what is mystical, is a sort of pointer to something, that says: there is more here than meets the eye, or could meet the eye. What is mystical is profound.
So perhaps we concern ourselves too much with the tape, or the read/write head, and instead remain a bit blind to the negated opposite of that contraption: whether it be situated at a demonstration to be displayed as a parlor trick, or, part of a collective of similar units, to assemble a complex idea.