posted by janrinok on Monday October 20, @04:41AM   Printer-friendly

The BBC published a rambling report on AI and tech billionaires building large, fully autonomous "basements" in different locations. I love the quote: "I once met a former bodyguard of one billionaire with his own 'bunker', who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking."

Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014

It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine.

Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat "no". The underground space spanning some 5,000 square feet is, he explained, "just like a little shelter, it's like a basement".

Then there is the speculation around other tech leaders, some of whom appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers.

Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance". This is something about half of the super-wealthy have, he has previously claimed, with New Zealand a popular destination for homes.

So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about?

In the last few years, the advancement of artificial intelligence (AI) has only added to that list of potential existential woes. Many are deeply worried at the sheer speed of the progression.

Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be one of them.

In a meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world, [...] according to a book by journalist Karen Hao.

"We're definitely going to build a bunker before we release AGI," he's widely reported to have said, though it's unclear who he meant by "we".

What's more, it's unlikely to arrive as a single moment. Rather, AI is a rapidly advancing technology on a journey, and many companies around the world are racing to develop their own versions of it.

But one reason the idea excites some in Silicon Valley is that it's thought to be a precursor to something even more advanced: ASI, or artificial superintelligence - tech that surpasses human intelligence.

It was back in 1958 that the concept of "the singularity" was attributed posthumously to Hungarian-born mathematician John von Neumann. It refers to the moment when computer intelligence advances beyond human understanding.

Those in favour of AGI and ASI are almost evangelical about its benefits. It will find new cures for deadly diseases, solve climate change and invent an inexhaustible supply of clean energy, they argue.

Elon Musk has even claimed that super-intelligent AI could usher in an era of "universal high income".

"If it's smarter than you, then we have to keep it contained," warned Tim Berners-Lee, creator of the World Wide Web, talking to the BBC earlier this month.

Governments are taking some protective steps. In the US, where many leading AI companies are based, President Biden passed an executive order in 2023 that required some firms to share safety test results with the federal government - though President Trump has since revoked some of the order, calling it a "barrier" to innovation.

Meanwhile in the UK, the AI Safety Institute - a government-funded research body - was set up two years ago to better understand the risks posed by advanced AI.

And then there are those super-rich with their own apocalypse insurance plans.

"Saying you're 'buying a house in New Zealand' is kind of a wink, wink, say no more," Reid Hoffman previously said. The same presumably goes for bunkers.

But there's a distinctly human flaw.

I once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he didn't seem to be joking.

Neil Lawrence is a professor of machine learning at Cambridge University. To him, the whole debate is nonsense.

"The notion of Artificial General Intelligence is as absurd as the notion of an 'Artificial General Vehicle'," he argues.

"The right vehicle is dependent on the context. I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria... There's no vehicle that could ever do all of this."

"The technology we have [already] built allows, for the first time, normal people to directly talk to a machine and potentially have it do what they intend. That is absolutely extraordinary... and utterly transformational."

Current AI tools are trained on mountains of data and are good at spotting patterns: whether tumour signs in scans or the word most likely to come after another in a particular sequence. But they do not "feel", however convincing their responses may appear.
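The "word most likely to come after another" idea can be made concrete with a toy sketch: a bigram model that just counts word pairs in a corpus and picks the most frequent follower. This is an illustrative assumption-laden simplification, not how production LLMs work - real models learn far richer statistics with neural networks - but the underlying task is the same pattern-spotting the article describes.

```python
# Toy bigram "next word" predictor - an illustrative sketch only,
# not how real LLMs are implemented.
from collections import defaultdict

def train_bigrams(text):
    """Count how often each word follows another in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most frequent follower of `word`, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # "cat" - it follows "the" twice, "mat" once
```

An LLM does something analogous at vastly greater scale, with learned weights instead of raw counts, which is why its output can look fluent without the model "feeling" anything.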

Ultimately, though, no matter how intelligent machines become, biologically the human brain still wins. It has about 86 billion neurons and 600 trillion synapses, many more than the artificial equivalents.

"If you tell a human that life has been found on an exoplanet, they will immediately learn that, and it will affect their world view going forward. For an LLM [Large Language Model], they will only know that as long as you keep repeating this to them as a fact," says Mr Hodjat.

"LLMs also do not have meta-cognition, which means they don't quite know what they know. Humans seem to have an introspective capacity, sometimes referred to as consciousness, that allows them to know what they know."

It is a fundamental part of human intelligence - and one that is yet to be replicated in a lab.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2, Insightful) by khallow on Tuesday October 21, @02:19PM (4 children)

    by khallow (3766) Subscriber Badge on Tuesday October 21, @02:19PM (#1421604) Journal

    Apologies if this is outside of scope for Soylent ... it's early here.

I think this is on topic. Not much point to talking about doom prepping without talking about what dooms they're prepping for. For me most disasters and issues just aren't bad enough or large scale enough to count - on their own. Sure, you can have things like a large asteroid strike, supervolcano eruption, or megatsunami that could have global destructiveness. But we're more likely to see "irreversibly screwed" problems from things like nuclear wars or out of control biological weapons (both of which, in addition to their initial destructiveness, would impair human activity for decades or longer). Those would be high on my list.

    But I think the real problems will be human societal structural issues. For example, stagnation of science, culture, and progress. We already saw an example of this in the Communist world - imagine if there wasn't a democratic developed world to compete with the Communist world. There wouldn't be a reason (or rather the knowledge of the reason) for such reforms as Glasnost and things could degrade to the point of complete feudalism or worse.

Another is societal fragility. A habit of government-oriented approaches is that when there is a problem, instead of fixing it, they compensate for it. For example, the current problem with AI. Instead of fixing AI business activity by addressing things like the accounting book cooking that supports it, or even just allowing AI to succeed and fail on its own, the US federal government has mandated some degree of consumption of AI products - which provides an uncritical revenue stream for poor AI business models to survive for a time. One such problem compensation isn't much of a danger, but then add in more and more. For example, modern US government also supports a global military system, the banking system, real estate price supports via policy, Social Security overspending, and overpriced educational and health care systems. Juggle too many of these balls at one time and a shock might knock most of them out of the air at once. More and more of the energy of society goes into supporting a growing list of problems rather than into maintaining a cushion against coming failure and breakdown. So when breakdown happens, it's more complete. Government has exhausted its ability to keep things going, and it has often exhausted a lot of other peoples' resources in the process too, so they can't help keep things going either.

    Moving on, the final category is dysfunctional governance structures. My view is that if you have this great idea for running a society, but it requires a complete breakdown of your society in order to implement, then your ideas fall on the "irreversibly screwed" list somewhere (depending on likelihood of implementation in event of breakdown). Why? Because if it is truly a better system, then you can just implement it now on small scale and show the benefit. Systems that are genuinely worse can't do that.

  • (Score: 2) by mcgrew on Tuesday October 21, @04:22PM (3 children)

    by mcgrew (701) <publish@mcgrewbooks.com> on Tuesday October 21, @04:22PM (#1421631) Homepage Journal

    For example, stagnation of science, culture, and progress. We already saw an example of this in the Communist world - imagine if there wasn't a democratic developed world to compete with the Communist world.

    The trouble with communism is it only works at tiny scales; a small village or tribe. For communism on a national scale, it requires autocracy. The autocracy is the problem. Note that socialism is NOT communism; Socialists believe that government should work to promote society, capitalists believe that government should work to promote wealth. THEIR wealth, fuck yours.

    --
    Why do the mainstream media act as if Donald Trump isn't a pathological liar with dozens of felony fraud convictions?
    • (Score: 1) by khallow on Wednesday October 22, @03:23AM

      by khallow (3766) Subscriber Badge on Wednesday October 22, @03:23AM (#1421717) Journal

      Note that socialism is NOT communism; Socialists believe that government should work to promote society, capitalists believe that government should work to promote wealth. THEIR wealth, fuck yours.

Depends on the flavor of socialism. We covered communism already. In addition, there are some socialist ideas that require global participation: such as any schemes that are harmed by "harmful competition" on taxes, pollution, and labor wages. As to "capitalists"? It's a typical problem of democracy that there is a tendency for everyone to see government as there to promote their personal interests, be it personal wealth or some other thing.

    • (Score: 0) by Anonymous Coward on Friday October 24, @01:07AM (1 child)

      by Anonymous Coward on Friday October 24, @01:07AM (#1421974)

      For communism on a national scale, it requires autocracy. The autocracy is the problem.

      China does have elections. Unlike the USA it has One Party instead of Two[1].
      https://www.bbc.com/news/magazine-19876372 [bbc.com]

      Nowadays, officials need to show their superiors they are able to govern well. They are subjected to annual reviews where factors like GDP growth, tax revenues and social stability in their areas are key. At grassroots levels the Party has allowed some elections, though officially approved candidates almost always win. Some higher officials' promotions are also now approved by limited public consultation.

      The last I checked the approved candidates in the USA almost always win too; only a few "outsiders" once in a while. But none of the candidates are subject to annual reviews of how well they can govern...

      It's like picking the pilot of your plane through elections. Both sides have approved candidates. One side has some candidates at least pretend to have some competence, the other side doesn't even bother with that... Heck you get a candidate with a proven record of crashing planes...

      [1] Both China and the USA have more than two parties but they practically don't count...

      • (Score: 0) by Anonymous Coward on Friday October 24, @01:13AM

        by Anonymous Coward on Friday October 24, @01:13AM (#1421977)

        Heck you get a candidate with a proven record of crashing planes...

        Which might explain the doom prepping billionaires...