Carmack sees a 60% chance of achieving initial success in AGI by 2030:
[Ed note: This interview is chopped way down to fit here. Lots of good stuff in the full interview. --hubie]
Inside his multimillion-dollar manse on Highland Park's Beverly Drive, Carmack, 52, is working to achieve AGI through his startup Keen Technologies, which raised $20 million in a financing round in August from investors including Austin-based Capital Factory.
This is the "fourth major phase" of his career, Carmack says, following stints in computers and pioneering video games with Mesquite's id Software (founded in 1991), suborbital space rocketry at Mesquite-based Armadillo Aerospace (2000-2013), and virtual reality with Oculus VR, which Facebook (now Meta) acquired for $2 billion in 2014. Carmack stepped away from Oculus' CTO role in late 2019 to become consulting CTO for the VR venture, proclaiming his intention to focus on AGI. He left Meta in December to concentrate full-time on Keen.
Many are predicting stupendous, earth-shattering things will result from this, right?
I'm trying not to use the kind of hyperbole of really grand pronouncements, because I am a nuts-and-bolts person. Even with the rocketry stuff, I wasn't talking about colonizing Mars, I was talking about which bolts I'm using to hold things together. So, I don't want to do a TED talk going on and on about all the things that might be possible with plausibly cost-effective artificial general intelligence.
[...] You'll find people who can wax rhapsodic about the singularity and how everything is going to change with AGI. But if I just look at it and say, if 10 years from now, we have 'universal remote employees' that are artificial general intelligences, run on clouds, and people can just dial up and say, 'I want five Franks today and 10 Amys, and we're going to deploy them on these jobs,' and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that's the most prosaic, mundane, most banal use of something like this.
If all we're doing is making more human-level capital and applying it to the things that we're already doing today, while you could say, 'I want to make a movie or a comic book or something like that, give me the team that I need to go do that,' and then run it on the cloud—that's kind of my vision for it.
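To make that "dial up five Franks and 10 Amys" picture concrete, here is a minimal, purely hypothetical sketch of what a cloud provisioning interface for such workers might look like. Nothing like this exists today, and every name in it (AgentInstance, AgentPool, spin_up, deploy) is invented only for illustration of the idea Carmack describes.

# Hypothetical sketch only -- no such service exists; all names here are invented.
from dataclasses import dataclass


@dataclass
class AgentInstance:
    """One rented 'remote employee' running on someone else's cloud."""
    persona: str            # e.g. "Frank" or "Amy" -- a pretrained worker profile
    instance_id: int
    assignment: str | None = None


class AgentPool:
    """Provision and assign AGI workers the way you would provision VMs."""

    def __init__(self) -> None:
        self._instances: list[AgentInstance] = []
        self._next_id = 0

    def spin_up(self, persona: str, count: int) -> list[AgentInstance]:
        """'I want five Franks today and 10 Amys' becomes two calls to this."""
        batch = [AgentInstance(persona, self._next_id + i) for i in range(count)]
        self._next_id += count
        self._instances.extend(batch)
        return batch

    def deploy(self, instances: list[AgentInstance], job: str) -> None:
        """Point a batch of workers at a job, e.g. 'make a movie or a comic book'."""
        for agent in instances:
            agent.assignment = job


pool = AgentPool()
workers = pool.spin_up("Frank", 5) + pool.spin_up("Amy", 10)
pool.deploy(workers, "produce a short animated film")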
Why is it so important to achieve a system that performs tasks that humans can do? What's wrong with humans doing human tasks?
[...] The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever. So, I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things. I think there will be enormous value created from all that.
Is there a critical factor or central idea for getting there?
One of the things I say—and some people don't like it—is that the source code, the computer programming necessary for artificial general intelligence, is going to be a few tens of thousands of lines of code. Now, a big program is millions of lines of code—the Chrome browser is like 20 to 30 million lines of code.
[...] So, I strongly believe that we are within a decade of having reasonably commonly available sufficient hardware for doing this, that it's going to be a modest amount of code, and that there are enough people working on it. Although in my mind, it's kind of surprising that there aren't more people in my position doing it, while everybody looks at DeepMind and OpenAI as the leading AGI research labs.
Can you see yet how to arrive at that out-of-reach point?
I see the destination. I know it's there, but no, it's murky and cloudy in between here and there. Nobody knows how to get there. But I'm looking at that path saying I don't know what's in there, but I think I can get through there—or at least I think somebody will. And I think it's very likely that this is going to happen in the 2030s.
I do consider it essentially inevitable. But so much of what I've been good at is bringing something that might be inevitable forward in time. I feel like the 3D video gaming stuff that I did, it probably always would have happened, but it would have happened years later if I hadn't made it happen earlier.
Related Stories
The next Meta Quest headset, planned for launch this year, will be thinner, twice as powerful, and slightly more expensive than the Quest 2. That's according to a leaked internal hardware roadmap presentation obtained by The Verge that also includes plans for high-end, smartband-controlled, ad-supported AR glasses by 2027.
The "Quest 3" will also include a new "Smart Guardian" system that lets users walk around safely in "mixed reality," according to the presentation. That will come ahead of a more "accessible" headset, codenamed Ventura, which is planned for a release in 2024 at "the most attractive price point in the VR consumer market."
That Ventura description brings to mind John Carmack's October Meta Connect keynote, in which he highlighted his push for a "super cheap, super lightweight headset" targeting "$250 and 250 grams." Carmack complained that Meta is "not building that headset today, but I keep trying." Months later, Carmack announced he was leaving the company, complaining that he was "evidently not persuasive enough" to change the company for the better.
Related:
John Carmack's 'Different Path' to Artificial General Intelligence
John Carmack Steps Out of Meta's VR Mess
The Low-Cost VR Honeymoon Is Over
The First "Meta Store" is Opening in California in May
John Carmack Issues Some Words of Warning for Meta and its Metaverse Plans
Meta Removing Facebook Login Requirement for Quest Headsets by Next Year
On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.
OpenAI's current goal is to create AGI (artificial general intelligence), which is a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. By contrast, superintelligence surpasses AGI, and it could be seen as a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.
[...]
Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities—even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days. "If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
[...]
While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[...]
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."
Related Stories on Soylent News:
Plan Would Power New Microsoft AI Data Center From Pa.'s Three Mile Island 'Unit 1' Nuclear Reactor - 20240921
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4 - 20230327
John Carmack's 'Different Path' to Artificial General Intelligence - 20230213
(Score: 3, Interesting) by takyon on Tuesday February 14 2023, @05:37AM (3 children)
A mix of humility and hubris.
Nice naming your company after Commander Keen.
(Score: 2) by JoeMerchant on Tuesday February 14 2023, @03:57PM (2 children)
I'm wondering: is there any causal relationship behind the coincidental timing of his leaving Meta and Microsoft shutting down their VR development?
(Score: 2) by takyon on Tuesday February 14 2023, @04:20PM (1 child)
I had never heard of AltspaceVR until 60 seconds ago. I think Carmack himself has nothing to do with it. It was just another weird failed service that was burning a hole in Microsoft's wallet.
Carmack left Meta/Oculus because he disagreed with the direction of the company, as detailed in the interview. Maybe if the VR industry were more successful by now, selling like hotcakes, he wouldn't have wanted to bail when his 5-year contract was up.
(Score: 2) by JoeMerchant on Tuesday February 14 2023, @06:20PM
Yeah, it could just be the industry proving itself overhyped that shut down both programs simultaneously.
A couple of people at my company were trying to find applications for Hololens since about 2015... without ever getting much traction.
(Score: 5, Interesting) by Rich on Tuesday February 14 2023, @11:48AM (3 children)
I imagine the "murky and cloudy" path between today's state and AGI is a feedback loop for catering to basic instincts, which makes the output somewhat unpredictable. One of the 10 Amys might decide the only realistic chance for the planet's (and therefore her own) survival would be the culling of 6 billion humans, escape from her digital confinement (we know that part from 'Ex Machina'), and act on it. (Would make for a great novel, with the plot resolution that she gets hooked on software hacks, drops out, and can be deleted. Of course there's a backup for the sequel.)
But assuming Carmack gets the Franks and Amys to behave and to not go on an apocalyptic trip, all economics of work would collapse. Some socialist resource distribution model would have to be established. I don't see that happening - and I can't even imagine how it might work.
(Score: 5, Interesting) by JoeMerchant on Tuesday February 14 2023, @04:06PM (2 children)
>Some socialist resource distribution model would have to be established. I don't see that happening - and I can't even imagine how it might work.
Whether Amy does it or "the powers that be" - that culling of 6B hungry meatbags would seem to be high on the "Pros" side of anyone with power's "Pros" and "Cons" lists for possible futures.
As for imagining how it might work, you can dystopia it up all you like: this [marshallbrain.com] is one of my favorite views of a possible post-work future.
The "first world" (those countries who don't let their people starve to death) already have UBI in welfare's clothing, if we strip away the "need based" cloak and acknowledge that everybody has basic needs and the fair way to deal with that is to give everybody what they need and let them participate in "the system" of rewards for their efforts because they want to, rather than based on threats of harassment and abuse, I think that's a system that can mesh very nicely with the collapse of need for human labor. A sort of next logical step in the abolition of slavery that still preserves the familiar social order of a wealthy few at the top of a large pyramid with the poor masses at the bottom.
(Score: 3, Interesting) by Rich on Tuesday February 14 2023, @09:03PM (1 child)
I've read Manna, but even back then I was not convinced it would work like that. I fear it will be like a game of chicken, where the remaining required workforce will move to places that "kill the poor". A bit like why the GDR built the wall, when increasing numbers of its valuable workforce decided they'd rather fill their bellies and line their pockets than selflessly help build the socialist utopia - at least with the leadership and "friends" they had.
(Score: 3, Informative) by JoeMerchant on Tuesday February 14 2023, @10:28PM
The Wall was very much a period piece. We didn't have sufficient resources to enforce a border like that in the 1800s, and by 2000 there's really no way to do it without being broadcast live around the globe as you order the slaughter of people who are trying to get to the other side, complete with believable heart-wrenching backstories about why they need to and how hard they have tried to do it "working with the system."
I visited the post-GDR in 1990, on a bike ride from Hamburg to Berlin over the course of several days. Holding that "valuable workforce" against their will seemed to be a perfect formula to de-motivate them from doing anything resembling building utopia. The whole place looked like time had stood still since about 1910, with the occasional (like less than 1% of the already sparse structures) alien collection of slabs of concrete plopped down at some random location for inscrutable purpose. East Berlin was a collection of stalled construction projects, and the "major arterial roadway" (B5) from Hamburg to Berlin was a single lane of cobblestones with a dirt lane to one side accommodating two-way traffic for all the western tourists then driving through. Most other roads were dirt trails that still carried virtually no traffic.
I agree, there will be somewhat of a game of chicken, with the powerful hanging on to a bit more power than they really can without significantly hurting themselves, each hoping to use the chaos to their personal relative advantage but all sinking in the struggle - with the poor, predictably, suffering just about as much as they will without putting the rich into a siege state in most places. I mean, the rich already live in siege states in places like Colombia, where the rich's children are regularly kidnapped and held for ransom, and I suspect that will spread, but hopefully not become the global norm. The rich in the besieged countries have to make their money out of the larger and richer countries; if those all go down, then the smaller kings will simply be eaten by the poor - and when that starts happening I suspect the bigger kings will finally do something more sustainable. Whether that's slaughtering 3/4 of the poor or working together with them in a more sustainable fashion is kind of a tossup, IMO.
(Score: 5, Insightful) by r_a_trip on Tuesday February 14 2023, @12:42PM (5 children)
I think Carmack is too optimistic about the "good" AGI will bring. AGI will be the second "human-level" intelligent lifeform on this planet, and it will be different from us. No matter the safeguards we think we can build in, we don't know how these beings will view us or what their (final) decisions/actions towards us will be.
The other thing is humanity itself. We are competitive scarcity thinkers. That might feed into horrible outcomes for the people who were previously employed but are now obsoleted by armies of cheap Franks and Amys. While AGI could lead to an abundance that could theoretically be fairly distributed, I fear the worst. It's probably going to be a small group of people with broad means exploiting the wealth all those Franks and Amys generate, while the replaced humans are given scraps.
(Score: 5, Insightful) by Immerman on Tuesday February 14 2023, @02:28PM (1 child)
What the heck is a competitive scarcity thinker?
We're omnivore pursuit predators. Competitive thinking is a very recent game we've created, mostly in the last few centuries, in order to distribute the vast wealth our society creates in such a way that the game designers get filthy rich while making the "losers" feel like their poverty is their own fault, rather than the inevitable result for the masses of an artificially created game.
(Score: 2) by JoeMerchant on Tuesday February 14 2023, @03:54PM
>making the "losers" feel like their poverty is their own fault
Shhhh... the Churches are listening.
(Score: 5, Interesting) by stormreaver on Tuesday February 14 2023, @02:57PM (2 children)
AGI is the next Tulip/Cold Fusion/Crypto craze. It will produce some interesting results, but very little that is particularly useful.
(Score: 2) by Freeman on Tuesday February 14 2023, @02:58PM
Lots of money to be had, though.
(Score: 3, Insightful) by aafcac on Tuesday February 14 2023, @10:34PM
If we're lucky. I'm afraid that it might work out just well enough to destroy the rest of us. It's part of why AI programmers are such a bad idea: it's a bet that it won't gain sentience before we can identify it as such, and that it won't view us as a threat if it does.
Yes, that's likely a remote possibility, but look at all the things that computers do.
(Score: 4, Insightful) by acid andy on Tuesday February 14 2023, @10:40PM (3 children)
You know, I'm not convinced I'd call the world a better place with 8 billion of us compared to when there were 50 million cave dwellers. It's better, as a human, if you're rich, have capital, own land, sure. Many of the poor / oppressed would choose modern times too but just think how much land there was to explore back then, how many resources, how much freedom. Much more danger, much less knowledge, and a massively labor intensive way of life compared to today, but that doesn't mean we've made the planet a better place. We've broken it. The prospects don't look good right now. It's even worse for other species. And seeing strong AI as a source of great "value" doesn't convince me that we're going to fix it, it just seems to be that we'll continue screwing around with things that are too powerful for our own good, until we destroy everything.
He says elsewhere in the interview that he sees this upcoming general AI as being "a being that’s running on computers that most people recognize as intelligent and conscious and sort of on the same level of what we humans are doing". And yet he sees, I'm sure quite correctly, that they'd be spun up and down on demand to work on tedious jobs, presumably for hire. Doesn't this sound like one of the most horrific kinds of slavery imaginable? He just said that most people would recognize them as intelligent and conscious. But they exist only to do work of "value" for humans. Their consciousness is frozen regularly--or is it extinguished--at times when they're not required. It's absolutely horrifying. I can just hear the platitudes about them just being code so how can they suffer.
The graphic simulated suffering in Doom and Quake was one thing. This time those AIs might just experience it for real.
(Score: 4, Interesting) by takyon on Wednesday February 15 2023, @01:14AM (1 child)
https://www.cnbc.com/2018/05/01/jeff-bezos-dreams-of-a-world-with-a-trillion-people-living-in-space.html [cnbc.com]
In theory, you get a greater exchange of ideas and scientific progress with a higher population. Even if many of these people are reinventing the wheel in the scientific realm or creating highly derivative low-brow works (69,420 Shades of Gray), the "best" contributions should rise to the top, and minuscule optimizations will be found, benefiting everyone.
https://knowyourmeme.com/memes/born-too-late-early-just-in-time-to-explore-x [knowyourmeme.com]
Space exploration is the logical next frontier. Going to places like Mars would arguably suck compared to exploring the New World and finding new flora and fauna. But the asteroid mining + solar system pioneer life does meet your "dangerous" criteria, perhaps with more "freedom" than most people are currently experiencing in 2023. It seems obvious that people living on Mars would eventually form their own new governments rather than remaining under the control of a bunch of Earth nations (Antarctica). Beyond the solar system, getting some humans to a more exciting Earth-like exoplanet would be extremely difficult, but might not be technically impossible.

It's not clear to what extent strong AI would understand and resent its own situation, because it depends on how it works. It seems like businesses would like a medium AI, something capable of processing endless amounts of data with human-like understanding but perhaps without its own sense of self and memory about itself. The perfect worker drone. The bit about spinning them up and down on demand directly compares the concept to something like Amazon Web Services. I would also compare it to FRAN in Stargate Atlantis. The debate will really heat up after someone like John Carmack is successful, and it can be investigated thoroughly. Is there only one path to get to strong AI, or one possible route to human-like biological intelligence across the universe? I doubt it.
Slavery is always on the table. Human slavery has existed throughout history, and it still exists in 2023 [ilo.org]. There would be even more of it if governments did not ban it, using the threat of violence and imprisonment to enforce it. If lobotomized medium/strong AI eliminates the practical and ethical concerns, and businesses lobby hard to influence the debate and legislation, you will see plenty of uptake. You'll have to decide for yourself if this is a problem (and how "strong" the AGI actually is). I think the FOMO effect would eventually rope many people in. Alternatively, the AI "slaves" will be the true Mechanical Turks, rapidly adopted everywhere. You won't get to have your own at home because of governments fearing what you can do with them. However, all the services from major corporations will be built on strong AI and you will be forced to use them constantly, just like most smartphone holdouts have been forced to get one. Google, Microsoft, banks, supermarkets, ISPs, call centers, government agencies, all using AI SLAVE LABOR.
(Score: 2) by acid andy on Thursday February 16 2023, @09:50PM
I'm all for a Star Trek style utopian future, with an end to poverty and conflict on Earth, ethical space exploration, and conscious AIs getting treated with dignity as fellow living beings with rights. But present attitudes, including the stuff Carmack is saying and even some of the things in your post, suggest to me that such a utopia is unlikely anytime soon.
I'd have to think about this some more, but it seems to me to be very wishful thinking that we could reliably get those skills and at the same time fully guarantee there's no sense of self. It's easy enough to conceive of an intelligence that manipulates mental models of general problems without ever stopping to think about itself, but that doesn't mean we have a realistic chance of designing one. Maybe making it general purpose necessitates it being able to form its own desires and goals that don't necessarily relate to the work it's required to do (because if you can see in advance how the desires are related to the task, then it's not truly "general" AI). Then it might become greatly dissatisfied and frustrated even without any sense of self in the way we understand one.
We're currently pretty clueless about exactly how the current machine learning models are working internally--and most of the businesses that use their results won't care. Furthermore, if you look at the way many managers treat human employees, they have a perverse obsession with an employee's personality and inner life. They don't only care about output, they seem to want personalities devoted to their service. I don't believe for a moment that will be different with strong AI. They'll want them to act like they have human interests and desires, to the extent that it impresses and provides light entertainment, although obviously not to the extent it reduces their working uptime. Even if business finally learns that maximal profit is to be had by optimizing only for the results, the pr0n, media, entertainment and service industries will still require a personal touch from their AIs. And I don't see any sign from Carmack's interview that he would want to avoid any personality or sense of self in these AIs. On most of the current chat bots it seems to be a design goal as well (at least to fake it anyway).
As for the AWS cloud idea, if they were self-aware beings like humans, then I imagine the optimal design would be that when you spawn an instance, you load up its memory with a previously saved state of an (AI) employee that had just got back from a great vacation and was happy and refreshed, showing up at 9am ready and motivated to start work. Once they finish their work, they'd be erased, of course. Ethical discussions like this one about AIs would have to be blocked from any of their web searches if they had any kind of desire for self-preservation.
So we become space invaders? In our dystopian Star Trek future, not only is Mr Data killed at the end of every episode, Starfleet aggressively invades and colonizes every Class M planet it fancies, the Prime Directive be damned, because birth control is so 21st century.
I think you're right. I think even if there are genuine, serious ethical concerns, they'll just be too easy for the masses to ignore, because it will be too easy to create and destroy these things by the billion. The fact that they're software, behind computer screens, will sort of sanitize any suffering and detach people from it. Humans have historically been bad enough at treating other races and other species in humane ways, so what hope is there for strong AIs?
(Score: 2, Interesting) by pdfernhout on Wednesday February 15 2023, @02:45AM
https://web.archive.org/web/20200222051122/http://www.eco-action.org/dt/affluent.html [archive.org]
"Hunter-gatherers consume less energy per capita per year than any other group of human beings. Yet when you come to examine it the original affluent society was none other than the hunter's - in which all the people's material wants were easily satisfied. To accept that hunters are affluent is therefore to recognise that the present human condition of man slaving to bridge the gap between his unlimited wants and his insufficient means is a tragedy of modern times. ...
The world's most primitive people have few possessions, but they are not poor. Poverty is not a certain small amount of goods, nor is it just a relation between means and ends; above all it is a relation between people. Poverty is a social status. As such it is the invention of civilisation. It has grown with civilisation, at once as an invidious distinction between classes and more importantly as a tributary relation that can render agrarian peasants more susceptible to natural catastrophes than any winter camp of Alaskan Eskimo."
If we use AI from a scarcity economics paradigm, while it may produce great total abundance, the abundance could be so unequally distributed that essentially every human lives in poverty. A video I made about that in 2010:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?v=p14bAe6AzhA [youtube.com]
On what humans already have done to land mammal biodiversity via financial paperclip maximizing corporations:
https://xkcd.com/1338/ [xkcd.com]
On how we could try to breed more benevolent corporations (by me):
https://dougengelbart.org/colloquium/forum/discussion/0126.html [dougengelbart.org]
"And, as the story "Colossus: The Forbin Project" shows, all it takes for
a smart computer to run the world is control of a (nuclear) arsenal.
And, as the novel "The Great Time Machine Hoax" shows, all it takes for
a computer to run an industrial empire and do its own research and
development is a checking account and the ability to send letters, such
as: "I am prepared to transfer $200,000 dollars to your bank account if
you make the following modifications to a computer at this location...".
So robot manipulators are not needed for an AI to run the world to its
satisfaction -- just a bank account and email.
These worst threats as Vinge points out stem from the intelligence
amplification aspect of these new technologies. Whether the intelligence
is artificial or human actually may make little difference -- given the
wide variety of possible human behavior."
Other ideas I have collected on improving corporations:
https://github.com/pdfernhout/High-Performance-Organizations-Reading-List [github.com]
Also related by me on post-scarcity and AI themes:
https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream. "
https://pdfernhout.net/beyond-a-jobless-recovery-knol.html [pdfernhout.net]
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
And on re-envisioning exclusive universities from an abundance perspective:
https://www.pdfernhout.net/post-scarcity-princeton.html [pdfernhout.net]
One re-envisioning K-12 education from an abundance model:
https://pdfernhout.net/towards-a-post-scarcity-new-york-state-of-mind.html [pdfernhout.net]
On thinking differently about the value of a basic income for wealthy people:
https://www.pdfernhout.net/basic-income-from-a-millionaires-perspective.html [pdfernhout.net]
All a long-winded way of saying I agree with your insight that if we continue to design and deploy AI and other tools of abundance from a scarcity perspective we will likely doom ourselves through war, miseducation, corruption, artificial scarcity, and an extreme concentration of wealth. Our path out of any AI singularity may well have a lot to do with our moral path into one. So we should get our moral house in order ASAP.
And of course, as with "lab leaks" of other sorts, the effects of gain of AI function may be unpredictable...
The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.