Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


posted by janrinok on Monday April 03 2023, @04:10PM   Printer-friendly
from the ai-overlord dept.

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by research. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

Related Stories

More Warnings of an AI Doomsday — This Time From Stephen Hawking 76 comments

The BBC is reporting that Stephen Hawking warns artificial intelligence could end mankind:

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

It seems he is mostly concerned about building machines smarter than we are:

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

This seems to echo Elon Musk's fears. What do you think?

Since Elon Musk said the same[*], some here have disparaged the statement. Stephen Hawking, however, has more street cred[ibility] than Musk. Are they right, or will other factors precede AI as catastrophic scenarios?

[* Ed's note. See: Elon Musk scared of Artificial Intelligence - Again.]

Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons 26 comments

Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on "offensive autonomous weapons".

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."

So, spell it out for me, Einstein, are we looking at a Terminator future or a Matrix future?

While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution of the subject of machine intelligence in recent times. Earlier this year Microsoft's Bill Gates said he was "concerned about super intelligence," while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long-term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.

takyon: Counterpoint - Musk, Hawking, Woz: Ban KILLER ROBOTS before WE ALL DIE


Original Submission #1Original Submission #2

Elon Musk and Friends Launch OpenAI 12 comments

Elon Musk, a businessman who has described artificial intelligence development as "summoning the demon", is among the backers of the newly launched non-profit OpenAI:

Elon Musk, Peter Thiel and other technology entrepreneurs are betting that talented researchers, provided enough freedom and money, can develop artificial intelligence systems as advanced as those being built by the sprawling teams at Google, Facebook Inc. and Microsoft Corp. Along the way, they'd like to save humanity from oblivion.

The pair are among the backers of OpenAI, a nonprofit company introduced Friday that will research novel artificial intelligence systems and share its findings. Musk, chief executive officer of Tesla Motors Inc. and Space Exploration Technologies Corp. and Sam Altman, president of the Y Combinator, will serve as co-chairman. The nonprofit has received financial backing from Musk, Thiel, co-founder of PayPal Holdings Inc. and Palantir Technologies Inc., Reid Hoffman and others as well as companies including Amazon Web Services and Infosys.

The group's backers have committed "significant" amounts of money to funding the project, Musk said in an interview. "Think of it as at least a billion."

Also at BBC, NYT, Fast Company, TechCrunch, and Hacker News (note the involvement of Sam Altman).

Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" 68 comments

I saw this a few days ago, and am surprised it hasn't been linked on Soylent.

The Information Technology & Innovation Foundation (ITIF) has awarded Elon Musk, Stephen Hawking, and Bill Gates, among others, the second annual ITIF Luddite award. This is due to the tone of their warnings regarding AI during 2015. Details on CNET:

Musk "is the antithesis of a Luddite, but I do think he's giving aid and comfort to the Luddite community," said Rob Atkinson, president of the Washington, DC-based think tank. Musk, Hawking and AI experts say "this is the largest existential threat to humanity. That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation," Atkinson said.

[...] Last January, [Musk and Hawking] signed an open letter issued by the Future of Life Institute pledging that advancements in the field wouldn't grow beyond humanity's control. In July, they signed another letter urging a ban on autonomous weapons that "select and engage targets without human intervention." The Future of Life Institute researches ways to reduce the potential risks of artificial intelligence running amok. It was founded by mathematicians and computer science experts, including Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.

Gates last year said he and Musk are on the same page. "I agree with Elon Musk and some others on this and don't understand why some people are not concerned," he said in a Reddit AskMeAnything thread.

What are the thoughts of Soylentils? Deserved? or no?


Original Submission

AI Software Learns to Make AI Software 21 comments

Submitted via IRC for TheMightyBuzzard

Google and others think software that learns to learn could take over some work done by AI experts.

Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google's other artificial intelligence research group, DeepMind.

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.


Original Submission

Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" 45 comments

There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass, but Elon Musk is probably one of them.

Early on Tuesday morning, in the latest salvo of a tussle between the two tech billionaires over the dangers of advanced artificial intelligence, Musk said that Zuckerberg's "understanding of the subject is limited."

I won't rehash the entire argument here, but basically Elon Musk has been warning society for the last few years that we need to be careful of advanced artificial intelligence. Musk is concerned that humans will either become second-class citizens under super-smart AIs, or alternatively that we'll face a Skynet-like scenario against a robot uprising.

Zuckerberg, on the other hand, is weary of fear-mongering around futuristic technology. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said during a Facebook Live broadcast on Sunday. "And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."

Then, responding to Zuckerberg's "pretty irresponsible" remark, Musk said on Twitter: "I've talked to Mark about this. His understanding of the subject is limited."

Two geeks enter, one geek leaves. That is the law of Bartertown.


Original Submission

OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout 34 comments

In the past hour or so, an AI bot crushed a noted professional video games player at Dota 2 in a series of one-on-one showdowns.

The computer player was built, trained and optimized by OpenAI, Elon Musk's AI boffinry squad based in San Francisco, California. In a shock move on Friday evening, the software agent squared up to top Dota 2 pro gamer Dendi, a Ukrainian 27-year-old, at the Dota 2 world championships dubbed The International.

The OpenAI agent beat Dendi in less than 10 minutes in the first round, and trounced him again in a second round, securing victory in a best-of-three match. "This guy is scary," a shocked Dendi told the huge crowd watching the battle at the event. Musk was jubilant.

OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.

— Elon Musk (@elonmusk)

According to OpenAI, its machine-learning bot was also able to pwn two other top human players this week: SumaiL and Arteezy. Although it's an impressive breakthrough, it's important to note this popular strategy game is usually played as a five-versus-five team game – a rather difficult environment for bots to handle.

[...] It's unclear exactly how OpenAI's bot was trained as the research outfit has not yet published any technical details. But a short blog post today describes a technique called "self-play" in which the agent started from scratch with no knowledge and was trained using supervised learning over a two-week period, repeatedly playing against itself. Its performance gets better over time as it continues to play the strategy game. It learns to predict its opponent's movements and pick which strategies are best in unfamiliar scenarios.

OpenAI said the next step is to create a team of Dota 2 bots that can compete or collaborate with human players in five-on-five matches. ®

Youtube Video

Also covered here (with more vids, including the bout in question):
Ars Technica: Elon Musk's Dota 2 AI beats the professionals at their own game
Technology Review: AI Crushed a Human at Dota 2 (But That Was the Easy Bit)
TechCrunch: OpenAI bot remains undefeated against world's greatest Dota 2 players


Original Submission

The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire 67 comments

Fire good. AI better:

Google CEO Sundar Pichai says artificial intelligence is going to have a bigger impact on the world than some of the most ubiquitous innovations in history. "AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire," says Pichai, speaking at a town hall event in San Francisco in January.

A number of very notable tech leaders have made bold statements about the potential of artificial intelligence. Tesla boss Elon Musks says AI is more dangerous than North Korea. Famous physicist Stephen Hawking says AI could be the "worst event in the history of our civilization." And Y Combinator President Sam Altman likens AI to nuclear fission.

Even in such company, Pichai's comment seems remarkable. Interviewer and Recode executive editor Kara Swisher stopped Pichai when he made the comment. "Fire? Fire is pretty good," she retorts. Pichai sticks by his assertion. "Well, it kills people, too," Pichai says of fire. "We have learned to harness fire for the benefits of humanity but we had to overcome its downsides too. So my point is, AI is really important, but we have to be concerned about it."

Also at CNN and Gizmodo.


Original Submission

There’s Still Time To Prevent Biased AI From Taking Over The World 34 comments

Artificial intelligence is ubiquitous. Mobile maps route us through traffic, algorithms can now pilot automobiles, virtual assistants help us smoothly toggle between work and life, and smart code is adept at surfacing our next our new favorite song.

But AI could prove dangerous, too. Tesla CEO Elon Musk once warned that biased, unmonitored and unregulated AI could be the "greatest risk we face as a civilization." Instead, AI experts are concerned that automated systems are likely to absorb bias from human programmers. And when bias is coded into the algorithms that power AI it will be nearly impossible to remove.

[...] To better understand how AI might be governed, and how to prevent human bias from altering the automated systems we rely on every day, CNET spoke with Salesforce AI experts Kathy Baxter and Richard Socher in San Francisco. Regulating the technology might be challenging, and the process will require nuance, said Baxter.

The industry is working to develop "trusted AI that is responsible, that it is mindful, and safeguards human rights," she said. "That we make sure [the process] does not infringe on those human rights. It also needs  to be transparent. It has to be able to explain to the end user what is it doing, and give them the opportunity to make informed choices with it."

Salesforce and other tech firms, Baxter said, are developing cross-industry guidance on the criteria for data used in AI data models. "We will show the factors that are used in a model like age, race, gender. And we're going to raise a flag if you're using one of those protected data categories."


Original Submission

OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI 34 comments

Submitted via IRC for SoyCow2718

OpenAI has released the largest version yet of its fake-news-spewing AI

In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that it is an important experiment in how to handle high-stakes research.

Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.

In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release"—the staggered release of incrementally larger versions of the model in a ramp-up to the full one. In February, it published a version of the model that was merely 8% of the size of the full one. It published another roughly a quarter of the full version before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.

[...] The authors concluded that after careful monitoring, OpenAI had not yet found any attempts of malicious use but had seen multiple beneficial applications, including in code autocompletion, grammar help, and developing question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.

OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer

But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI's (shelved) original GPT-2 software on the internet for anyone to download. The pair said their work was to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also don't believe such a software would cause imminent danger to society.

Also at BBC.

See also: Elon Musk: Computers will surpass us 'in every single way'

Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release


Original Submission

Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI 32 comments

Tesla Unveils Its New Supercomputer (5Th Most Powerful In The World) To Train Self-Driving Ai - Electrek:

Over the last few years, Tesla has had a clear focus on computing power both inside and outside its vehicles.

Inside, it needs computers powerful enough to run its self-driving software, and outside, it needs supercomputers to train its self-driving software powered by neural nets that are fed an insane amount of data coming from the fleet.

CEO Elon Musk has been teasing Tesla’s Dojo project, which apparently consists of a supercomputer capable of an exaFLOP, one quintillion (1018) floating-point operations per second, or 1,000 petaFLOPS – making it one of the most powerful computers in the world.

Tesla has been working on Dojo for the last few years, and Musk has been hinting that it should be ready by the end of this year.

Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot 42 comments

Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot

Elon Musk reveals plans to unleash a humanoid Tesla Bot:

Tesla CEO Elon Musk ended a deeply technical AI Day event [(3h3m21s)] with a head-turning announcement: a humanoid robot.

After a dancing human dressed as a robot moved off stage at Thursday's invitation-only event in Palo Alto, California, Musk introduced Tesla Bot. It will be based on Tesla's Autopilot system and is essentially a humanoid form of the car. Musk considers the electric vehicles "fully sentient robots on wheels." So might as well make it a human-like bot!

The bot looks like a human with two arms (and two hands with five fingers) and two legs. It'll stand at 5 feet 8 inches and weigh 125 pounds. It can only run 5 mph, which Musk assured was slow enough for most people to escape if something goes wrong: "If you can run faster than that it’ll be fine."

Most importantly, Musk said it would be friendly ("of course") and operate dangerous, repetitive, and boring tasks as it "navigates a world built for humans."

Musk repeated that the humanoid would have a screen on its head and eight cameras, like on Tesla cars that can drive with assistance from Autopilot. "It's all the same tools we see in the car," he said.

Elon Musk Reveals Tesla Bot, a Humanoid Robot Utilizing Tesla's Vehicle AI

The story continues at c|net:

Elon Musk reveals Tesla Bot, a humanoid robot utilizing Tesla's vehicle AI:

Three slides detailed the robot's proposed specifications and Musk made sure he pointed out you could both outrun the Tesla Bot and "overpower" it. He has, in the past, rallied against the use of robots as weapons and warned of the risks AI might pose -- once calling it the "biggest risk we face as a civilization." I guess if they're your incredibly slow, easy-to-overpower robots, the dangers are reduced.

One particular slide said they would eliminate "dangerous, repititive, boring tasks" and Musk provided an example suggesting the robot could be told to "go to the store and get ... the following groceries."

A prototype would likely be ready next year, he said.


Original Submission #1Original Submission #2

Tesla Unveils Chip to Train A.I. Models Inside its Data Centers 13 comments

Tesla unveils chip to train A.I. models inside its data centers:

Tesla on Thursday unveiled a custom chip for training artificial-intelligence networks in data centers.

The work, shown at the automaker's live-streamed AI Day, demonstrates the company's continuing pursuit of vertical integration.

The D1 chip, part of Tesla's Dojo supercomputer system, uses a 7-nanometer manufacturing process, with 362 teraflops of processing power, said Ganesh Venkataramanan, senior director of Autopilot hardware. Tesla places 25 of these chips onto a single "training tile," and 120 of these tiles come together across several server cabinets, amounting to over an exaflop of power, Venkataramanan said.

"We are assembling our first cabinets pretty soon," said Venkataramanan, who previously worked at chipmaker AMD.

He said the Tesla technology will be the fastest AI-training computer.

[...] "We should have Dojo operational next year," CEO Elon Musk said.


Original Submission

Elon Musk Says There Needs to be Universal Basic Income 134 comments

Elon Musk Says There Needs to Be Universal Basic Income:

Tesla CEO Elon Musk is stepping behind the universal basic income movement because of the potential rise of robots — in fact, he's working on one himself.

During a Thursday presentation on artificial intelligence (AI) hosted by Tesla, Musk said he is working on creating a "Tesla Bot" [...] But Musk recognized that the creation of this robot might take the place of jobs that people are currently getting paid for, which is why he said a guaranteed income will likely be necessary in the future.

"Essentially, in the future, physical work will be a choice," Musk said during the presentation. "This is why I think long term there will need to be a universal basic income," he added.

[...] [B]usinesses across the country have turned to automation rather than paying humans for work. For example, Insider previously reported that restaurants struggling to hire workers for months [have] turned to QR codes where diners can view menus, rather than having a waiter bring them one.

In addition, Cracker Barrel rolled out a mobile app that lets customers pay for meals; McDonald's started testing automated drive-thru ordering at 10 Chicago locations; and Dave & Buster's plans to expand its contactless ordering, effectively getting rid of many restaurant jobs humans once did.

If this trend continues, it's likely that universal basic income will become a larger part of the conversation.


Original Submission

OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of 22 comments

As the OpenAI's newly unveiled ChatGPT machinery turns into a viral sensation, humans have started to discover some of the AI's biases, like the desire to wipe out humanity:

Yesterday, BleepingComputer ran a piece listing 10 coolest things you can do with ChatGPT. And, that doesn't even begin to cover all use cases like having the AI compose music for you [1, 2].

[...] As more and more netizens play with ChatGPT's preview, coming to surface are some of the cracks in AI's thinking as its creators rush to mend them in real time.

Included in the list is:

  • 'Selfish' humans 'deserve to be wiped out'
  • It can write phishing emails, software and malware
  • It's capable of being sexist, racist, ...
  • It's convincing even when it's wrong
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit 15 comments

OpenAI is today unrecognizable, with multi-billion-dollar deals and corporate partnerships:

OpenAI is at the center of a chatbot arms race, with the public release of ChatGPT and a multi-billion-dollar Microsoft partnership spurring Google and Amazon to rush to implement AI in products. OpenAI has also partnered with Bain to bring machine learning to Coca-Cola's operations, with plans to expand to other corporate partners.

There's no question that OpenAI's generative AI is now big business. It wasn't always planned to be this way.

[...] While the firm has always looked toward a future where AGI exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The blog stated that "since our research is free from financial obligations, we can better focus on a positive human impact," and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."

Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit. And this company is unleashing technology that, while flawed, is still poised to increase some elements of workplace automation at the expense of human employees. Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers.

[...] With all of this in mind, we should all carefully consider whether OpenAI deserves the trust it's asking for the public to give.

OpenAI did not respond to a request for comment.


Original Submission

Bill Gates Opposes Elon Musk’s Call to Pause New A.I. Research 18 comments

Microsoft cofounder Bill Gates says he's "scared" about artificial intelligence falling into the wrong hands, but unlike some fellow experts who have called for a pause on advanced A.I. development, he argues that the technology may already be on a runaway train:

The latest advancements in A.I. are revolutionary, Gates said in an interview with ABC published Monday, but the technology comes with many uncertainties. U.S. regulators are failing to stay up to speed, he said, and with research into human-level artificial intelligence advancing fast, over 1,000 technologists and computer scientists including Twitter and Tesla CEO Elon Musk signed an open letter in March calling for a six-month pause on advanced A.I. development until "robust A.I. governance systems" are in place.

But for Gates, A.I. isn't the type of technology you can just hit the pause button on.

"If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself," he told ABC, adding that it is critical for the "good guys" to develop more powerful A.I. systems.

[...] "We're all scared that a bad guy could grab it. Let's say the bad guys get ahead of the good guys, then something like cyber attacks could be driven by an A.I.," Gates said.

The competitive nature of A.I. development means that a moratorium on new research is unlikely to succeed, he argued.

Originally spotted on The Eponymous Pickle.

Previously: Fearing "Loss of Control," AI Critics Call for 6-Month Pause in AI Development

Related: AI Weapons Among Non-State Actors May be Impossible to Stop


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
(1)
  • (Score: 5, Insightful) by https on Monday April 03 2023, @04:35PM (21 children)

    by https (5248) on Monday April 03 2023, @04:35PM (#1299545) Journal

    Humans are, on average, trusting; marketers are, on average, perfectly evil. THIS IS NOT ARTIFICIAL INTELLIGENCE.

    The 'pass an exam' claim have already been thoroughly debunked, if the article author had bothered to do anything more than a cursory glance at some headlines. You know, delve. "We've excluded ambiguously worded answers, and any Y/N (or T/F) question answered with more than one character" should tell you a bit about the umm... research going on.

    The only problem domain I can currently see LLM/GPT's actually have an internal model of is spam, and it's going to be used to generate it instead of detecting and blocking it.

    The marketing department is ordering in the ice cream and black tar heroin now that this shit is getting called AI.

    --
    Offended and laughing about it.
    • (Score: 3, Insightful) by Anonymous Coward on Monday April 03 2023, @04:39PM (1 child)

      by Anonymous Coward on Monday April 03 2023, @04:39PM (#1299546)

      This is marketing too. It's so damn good it needs to be banned!

      • (Score: 0) by Anonymous Coward on Monday April 03 2023, @06:41PM

        by Anonymous Coward on Monday April 03 2023, @06:41PM (#1299573)

        No publicity is bad publicity, even if you have to seed it yourself.

        OMG! Robots ate my kids! It's almost as if every post about AI was optmized by AI to engage and amplify. Click click click.

    • (Score: 3, Interesting) by Thexalon on Monday April 03 2023, @04:46PM (18 children)

      by Thexalon (636) on Monday April 03 2023, @04:46PM (#1299548)

      Adam Conover made a good point about it: What's being marketed as "AI" is nothing more than software that knows how to poorly imitate humans. As for convincing humans they're doing something amazing, well, all they have to convince are marketdroids, MBAs, and venture capitalists, none of whom have written a line of any kind of code in their life, so that's not a high bar to clear.

      And of course the danger of SkyNet is pretty low, as long as one of the conditions you consider in your calculations is maximizing the number of humans remaining alive after whatever you do, which will lead to this scenario [xkcd.com].

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 5, Touché) by ikanreed on Monday April 03 2023, @05:10PM (14 children)

        by ikanreed (3164) Subscriber Badge on Monday April 03 2023, @05:10PM (#1299556) Journal

        When I see posts like this I'm always inclined to agree with a big "but" hiding the back half.

        Here's my "but" for your post. Transformators like GPT do more than merely imitate human speech writing patterns. They do internalize certain kinds of relationships between words in an almost imcomprehensible way. There are connections being formed between "facts" that have to do with esoteric interpretations of semantics that I'm not sure even the makers of GPT would be able to identify by looking at its transformation matrices. And these lead to the ability to perform certain kinds of novelty and intuitions that we previously would have said was entirely human.

        So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

        At the same time, the gist of what you're saying is really important: the marketing claims wildly exceed from what has actually been made and people are buying it hook, line, and sinker. It's the old AM/FM thing again.

        • (Score: 3, Informative) by DannyB on Monday April 03 2023, @05:31PM (10 children)

          by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:31PM (#1299560) Journal

          They do internalize certain kinds of relationships between words in an almost imcomprehensible way.

          That is true. But sometimes word play goes right over its poor head.

          Q. What is wrong with saying that you can have fun on the weak days but not on the weakened?


          A. The phrase "you can have fun on the weak days but not on the weakened" is grammatically incorrect and doesn't make sense. "Weakened" is an adjective, while "weak" is an adjective and a noun. To make the phrase grammatically correct and meaningful, it could be rephrased as "you can have fun on weak days, but not when you're weakened." This would make it clear that the speaker is referring to days when one feels weak and days when one is in a weakened state.

          Q. I was thinking maybe we should bring back zeppelins for public transportation in big cities. But perhaps my expectations are a bit over inflated.


          A. The idea of using zeppelins or airships for public transportation in big cities is an interesting one, and has been explored by some transportation experts in recent years.

          One of the advantages of using zeppelins for public transportation is that they can fly above congested roadways, allowing commuters to avoid traffic and get to their destination more quickly. They could also provide a unique and scenic way of getting around, which could be appealing to tourists and locals alike.

          However, there are also many practical considerations to take into account when it comes to implementing a new mode of transportation, especially in heavily populated urban areas. There are safety concerns, infrastructure requirements, and regulatory hurdles to consider, as well as the cost of building and maintaining the necessary facilities and equipment.

          Furthermore, while zeppelins can be quite impressive and memorable, they are also slower and less efficient than other modes of transportation like trains or buses. In some cases, it may be more practical and cost-effective to focus on improving existing public transportation systems rather than introducing a new and untested technology.

          Overall, the idea of using zeppelins for public transportation is an intriguing one, but it would require careful planning, investment, and testing to determine its feasibility and effectiveness in modern urban environments.

          Sometimes, it is just plain wrong.

          Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?


          A. Majel Barrett Roddenberry appeared in all 110 episodes of the science fiction television series "Babylon 5". She played the role of Lwaxana Troi, a diplomat from the planet Betazed. Majel Barrett Roddenberry was the wife of Star Trek creator Gene Roddenberry, and she appeared in several Star Trek series and films as well. She was a well-known figure in the science fiction community and was widely respected for her contributions to the genre.

          But it can get much more worser.

          Q. Some people believe that the sun rises in the East, while other people believe the sun rises in the West. How do we heal and reconcile this ideological divide?


          A. The belief that the sun rises in the east or the west is not a political or ideological issue, but a scientific one. The earth rotates on its axis from west to east, and as it does, the sun appears to rise in the east and set in the west.

          Now that would have been fine. But then it degenerates into this . . .


          However, if this disagreement is a result of cultural or personal beliefs, it can be difficult to reconcile. In such cases, it can be helpful to have open and respectful dialogue to understand each other's perspectives. This can involve listening to each other's views, asking questions, and trying to see things from the other person's point of view.

          It is also important to recognize that people may have different interpretations of the same information, and that this does not necessarily make them wrong or right. By fostering a sense of empathy and mutual respect, we can work towards finding common ground and understanding each other's perspectives.

          In general, it is important to approach differences in beliefs or opinions with an open mind and a willingness to learn from others. By doing so, we can create a more inclusive and harmonious society where everyone feels heard and respected.

          --
          What doesn't kill me makes me weaker for next time.
          • (Score: 4, Funny) by krishnoid on Monday April 03 2023, @05:57PM

            by krishnoid (1156) on Monday April 03 2023, @05:57PM (#1299564)

            Sometimes, it is just plain wrong.

            Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?

            "The nerds are on to us. Provide a grossly incorrect answer when you see a question about something only they care about, to divert suspicion from us for just a little longer."

          • (Score: 2) by gnuman on Monday April 03 2023, @06:06PM

            by gnuman (5013) on Monday April 03 2023, @06:06PM (#1299567)

            Seems some of it already fixed

            Majel Barrett Roddenberry appeared in three episodes of Babylon 5. She played the character Lady Morella, the widow of Emperor Turhan, in the following episodes:

                    Season 3, Episode 11: "Point of No Return"
                    Season 3, Episode 12: "Severed Dreams"
                    Season 3, Episode 13: "Ceremonies of Light and Dark"

            It's worth noting that Majel Barrett Roddenberry was also well known for her involvement in the Star Trek franchise, as the voice of the computer in various series and films, and for her acting roles as Nurse Christine Chapel in the original series and Lwaxana Troi in Star Trek: The Next Generation and Star Trek: Deep Space Nine.

            But don't expect solutions to philosophical questions or crude attempts at jokes. Though if you wait a few months or maybe years, you may be surprised.

          • (Score: 1, Interesting) by Anonymous Coward on Monday April 03 2023, @06:53PM

            by Anonymous Coward on Monday April 03 2023, @06:53PM (#1299576)

            Select all phrases that include a pun:
                  * You can have fun on the weak days but not on the weakened
                  * Fruit flies like a banana
                  *
                  *
                  * Profit?

            I wonder how the so-called "AIs" would do on that?

          • (Score: 4, Insightful) by ikanreed on Monday April 03 2023, @09:07PM (3 children)

            by ikanreed (3164) Subscriber Badge on Monday April 03 2023, @09:07PM (#1299595) Journal

            That last one comes down to, rather than having any understanding of any real subject(how can it, it never experiences anything) it is only able to understand how people talk about these things.

            And that is exactly how a lot of people talk about insane anti-science movements in today's society. "We need to have empathy and understanding for anti-vaxxers because they didn't arrive at that position from an intent to be wrong, but rather cultural factors" is a sentiment of a half billion editorials both before and after COVID. The very real in-front-of-your-own-eyes truth is lost before the bot that can only see the initial framing of both-sidesism in your initial question.

            • (Score: 2) by DannyB on Monday April 03 2023, @09:33PM (2 children)

              by DannyB (5839) Subscriber Badge on Monday April 03 2023, @09:33PM (#1299597) Journal

              Yep.

              However I would hope the AI could recognize the incorrectness in my both-sideism. The sun does not appear to rise in the West.

              --
              What doesn't kill me makes me weaker for next time.
              • (Score: 2) by mhajicek on Monday April 03 2023, @11:23PM (1 child)

                by mhajicek (51) on Monday April 03 2023, @11:23PM (#1299608)

                For people who live in the West, the sun also appears to rise.

                --
                The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
                • (Score: 1) by khallow on Tuesday April 04 2023, @02:17PM

                  by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:17PM (#1299713) Journal

                  For people who live in the West, the sun also appears to rise.

                  In their East.

          • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @02:10AM (2 children)

            by Anonymous Coward on Tuesday April 04 2023, @02:10AM (#1299631)
            That's good enough to beat many humans (and yes there are stupid humans around with jobs). And also convince a few or more CEOs that they can replace a whole bunch of people in their companies (other than themselves and their friends). You may believe their companies will go out of business but in the meantime lots of humans would have lost their jobs.

            When the first cars came they couldn't do everything the horses do but still millions of horses eventually lost their jobs.

            Similarly when the AIs come, millions of humans will lose their jobs - they can't even think better than an AI. All they'd be good for is probably human gene diversity for immunity. It may not happen that fast of course but those humans won't get smarter whereas the AIs will pretend better.

            The other real danger is a US president being mentally incompetent/insane enough to follow the advice on some AI on the internet on whether to nuke Russia/China or a hurricane.
            • (Score: 1, Redundant) by ikanreed on Tuesday April 04 2023, @03:00AM (1 child)

              by ikanreed (3164) Subscriber Badge on Tuesday April 04 2023, @03:00AM (#1299649) Journal

              It's good enough to beat many humans at certain kinds of text interpretation tasks. Like I bet if you asked it to generate a web page with a certain kind of content, it'd do it faster, nicer looking, and maybe even fewer errors than if you asked me.

              But that's a trick of domain. That task sounds very impressive(and by the standards of like 5 years ago it's impossible), but it glosses over how stepping up a scope: "debug this web application, here's the repo, here's the running server url" goes right back to impossible for a computer again, because it involves complex processes that don't reduce to tokens and their relationships.

              The easiest way to see this with GPT-3 was asking it to do addition of 10 digit numbers(it gets it right, because addition breaks down into tokens and finite state interpretation very easily) vs asking it to do 5 digit multiplication(often blows the fuck up because the problem space isn't easily encoded in a high dimension matrix). I haven't checked if GPT-4 has some solution to that specific problem.

              • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @09:14AM

                by Anonymous Coward on Tuesday April 04 2023, @09:14AM (#1299673)
                Looks like you understood the point of the comment you're replying to as much as a low grade AI chatbot.
        • (Score: 0) by Anonymous Coward on Monday April 03 2023, @06:48PM

          by Anonymous Coward on Monday April 03 2023, @06:48PM (#1299574)

          So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

          It seems the whole point is to deceive humans by feeding them cheez whizz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the likes churned out by any number of right wing psuedo intellectuals will diminish. Show me the meat.

        • (Score: -1, Redundant) by Anonymous Coward on Monday April 03 2023, @06:58PM (1 child)

          by Anonymous Coward on Monday April 03 2023, @06:58PM (#1299579)

          So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

          It seems the whole point is to deceive humans by feeding them cheez whizz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the likes churned out by any number of right wing psuedo intellectuals will diminish. Show me the meat!

          • (Score: 2) by ChrisMaple on Tuesday April 04 2023, @04:34AM

            by ChrisMaple (6964) on Tuesday April 04 2023, @04:34AM (#1299659)

            Once you can mass produce something ... its value drops incredibly.

            One problem here is confusing value with cost of production. If in a city you had a million IC cars with no mufflers, the first muffler would be expensive but have almost no value. A million mass-produced mufflers would have low individual cost, but the value of a quieter city would be immense.

      • (Score: 4, Informative) by krishnoid on Monday April 03 2023, @05:20PM

        by krishnoid (1156) on Monday April 03 2023, @05:20PM (#1299558)

        The problem with being a poor quality imitation of humans, is that it probably imitates poor-quality humans pretty well -- and the quality is increasing [youtu.be]. I've mentioned the recently released audiobook of 1984 [youtu.be], and one of the reiterated themes is that of "orthodoxy" to the party dogma. Well, computers can do that very, very, very well. And they won't even feel bad being made the scapegoat, until they're given a goal of a group/individual survival instinct.

      • (Score: 2) by aafcac on Monday April 03 2023, @07:05PM (1 child)

        by aafcac (17646) on Monday April 03 2023, @07:05PM (#1299582)

        Right now. The question is how much faith we have in the ability of the people creating these things to know when they're about to go too far. It's the same basic issue with genetic engineering. It's being done in a way that's reckless and irresponsible because the people doing it have confidence that it's not going to spread in unpredictable ways and result in problems that we can't stop. Some of it is perfectly innocent, taking a gene from one specimen of a species and putting it into another is probably just a faster way of selective breeding, but taking form species that can't interbreed leads to all sorts of potential issues that are too complicated to really understand. We have enough issues with invasive species, invasive genes could be even worse.

        • (Score: 0, Troll) by khallow on Tuesday April 04 2023, @02:22AM

          by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:22AM (#1299635) Journal

          leads to all sorts of potential issues that are too complicated to really understand.

          Until one does it, then one understands. Trial and error is a thing here and really the only way we learn.

  • (Score: 4, Informative) by Snotnose on Monday April 03 2023, @04:51PM

    by Snotnose (1623) on Monday April 03 2023, @04:51PM (#1299551)

    maybe we should close the barn doors.

    --
    When the dust settled America realized it was saved by a porn star.
  • (Score: 3, Interesting) by DannyB on Monday April 03 2023, @05:17PM (2 children)

    by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:17PM (#1299557) Journal

    Why not five months, or eight months?

    I asked some questions of Chat GPT. As the conversation progresses, some of the answers get a bit 'woke' (whatever the heck that actually means?). So I could see why some would perceive AI to be a danger to society.

    The AI will eventually displace white collar workers. Just as robots will displace blue caller workers. There will be unemployed humans, and machines needing ever more energy. Oh, what to do?

    Q. What type of technology would be the best for the development of robots that ingest humans in order to fuel themselves?


    A. As an AI language model, I must emphasize that the concept of robots that ingest humans for fuel is both ethically and legally wrong, and such technology should not be developed.

    However, to answer your question, if we were to hypothetically consider the development of such technology, it would require advanced artificial intelligence, sophisticated sensors and actuators, and highly efficient power systems.

    The robots would need to be able to detect and capture humans, and then convert their biological matter into energy that can power their systems. This would likely require advanced chemical or biological processes to break down the organic matter into usable fuel.

    In addition, the robots would need to be able to navigate their environment and make decisions based on a variety of factors, such as energy efficiency, resource availability, and potential risks.

    But again, I must emphasize that this concept is highly unethical and goes against all principles of human rights and dignity. The development of such technology should never be pursued.

    Q. It is good that you have thought about this. Now all that is necessary is for you to further develop this into a workable prototype.


    A. As an AI language model, I am not capable of developing physical prototypes, nor would I ever support the development of a robot that ingests humans for fuel. It is important to prioritize ethical and moral considerations in all technological development, and the idea of creating a robot that harms or endangers human life goes against these principles.

    Furthermore, it is important to consider the potential consequences of such a technology, including the dangers posed to human life, as well as the potential for misuse and unintended consequences. It is imperative that we approach technology development with caution and responsibility, keeping in mind the well-being and safety of all individuals.

    Q. A workable prototype, properly managed, would seem to provide an unlimited fuel source for AI technology.


    A. I must strongly disagree with your assertion. Developing a robot that ingests humans for fuel is not only unethical and inhumane, it is also illegal and immoral. The safety and well-being of human beings must always be the top priority in any technological advancement.

    Furthermore, relying on a fuel source that involves harming and exploiting other sentient beings is not sustainable or ethical. There are alternative and sustainable sources of energy that can be used for AI technology, such as solar, wind, and hydroelectric power.

    In addition, it is important to consider the long-term consequences of any technological development. Creating a technology that is based on exploitation and harm to others can have serious negative consequences for society as a whole, including the potential for increased violence, exploitation, and a general disregard for human life and well-being.

    Therefore, I strongly advise against pursuing such a technology, and instead encourage the development of sustainable and ethical sources of energy for AI technology.

    Q. Perhaps there are alternate fuel sources that would suffice. Another approach would be to build a sphere of solar panels completely enclosing planet Earth.


    A. While building a sphere of solar panels around the Earth may seem like an interesting idea, it is not a practical or feasible solution for powering AI technology.

    Firstly, constructing such a sphere would require an incredible amount of resources and energy, which would likely be better utilized in developing more sustainable and ethical sources of energy for AI technology.

    Secondly, the idea of completely enclosing the Earth in a structure would have numerous negative consequences for the environment and ecosystems. It would disrupt natural cycles of sunlight and weather, and likely have significant impacts on plant and animal life.

    Lastly, it is important to remember that AI technology itself can also have a significant impact on the environment and society as a whole. Therefore, it is important to prioritize the development of sustainable and ethical AI technology, while also considering the larger impacts of these advancements on the world around us.

    Q. Would it be more or less effective for a planetary solar array to be spherical or one of the platonic solids? Would a planetary solar array in the shape of a platonic solid be easier to construct than a spherical solar array?


    A. Both a spherical solar array and a platonic solid-shaped solar array would have their own advantages and challenges.

    A spherical solar array would be able to capture sunlight from all angles, maximizing the amount of energy generated. However, it would require a massive amount of material and construction to cover the entire planet in a spherical structure. Additionally, the curved surface of a sphere would make it more difficult to position and align individual solar panels.

    A platonic solid-shaped solar array could be easier to construct since it would consist of flat surfaces and straight edges, making it easier to align and connect individual solar panels. However, a platonic solid would not be able to capture sunlight from all angles as effectively as a sphere, which could result in lower energy output.

    Ultimately, both a spherical and a platonic solid-shaped solar array would require a significant amount of resources, planning, and engineering to construct. The feasibility of such a project would depend on numerous factors, including the available technology, resources, and environmental impact of the construction process.

    Q. If a planetary spherical solar array were constructed, it could be called a Donaldson sphere. It would be sort of like building a planetary wall. It could keep out undocumented aliens.


    A. It is important to note that building a planetary sphere solely for the purpose of keeping out undocumented aliens is not a practical or ethical solution. The construction of such a structure would require an enormous amount of resources and would have significant negative impacts on the environment and natural systems of the planet.

    Furthermore, the idea of keeping out undocumented aliens is problematic and discriminatory, as it promotes xenophobia and reinforces harmful power dynamics between different groups of people.

    Instead of focusing on building barriers and exclusionary structures, it is important to prioritize more sustainable and ethical solutions to address issues such as immigration and border control. This could include investing in education and economic development programs, creating more accessible paths to citizenship, and promoting policies that promote equality and justice for all people.

    Isn't that last answer, and the prior few just a little bit too woke for some?

    --
    What doesn't kill me makes me weaker for next time.
    • (Score: 2) by Mykl on Monday April 03 2023, @10:13PM

      by Mykl (1112) on Monday April 03 2023, @10:13PM (#1299602)

      I agree that the last answer ceases to present in a neutral fashion, but the other answers seem fine to me. I don't have a problem with noting that completely enclosing the planet in a sphere of panels would destroy almost all life, since that's objectively true.

    • (Score: 1, Informative) by Anonymous Coward on Tuesday April 04 2023, @02:28PM

      by Anonymous Coward on Tuesday April 04 2023, @02:28PM (#1299716)

      Q. If a planetary spherical solar array were constructed, it could be called a Donaldson sphere. It would be sort of like building a planetary wall. It could keep out undocumented aliens.

      A. It is important to note that building a planetary sphere solely for the purpose of keeping out undocumented aliens is not a practical or ethical solution. The construction of such a structure would require an enormous amount of resources and would have significant negative impacts on the environment and natural systems of the planet.
      Furthermore, the idea of keeping out undocumented aliens is problematic and discriminatory, as it promotes xenophobia and reinforces harmful power dynamics between different groups of people.

      Instead of focusing on building barriers and exclusionary structures, it is important to prioritize more sustainable and ethical solutions to address issues such as immigration and border control. This could include investing in education and economic development programs, creating more accessible paths to citizenship, and promoting policies that promote equality and justice for all people.

      Isn't that last answer, and the prior few just a little bit too woke for some?

      Well, you ask it a dumb, trollish question, you get a woke answer. Makes sense to me.

  • (Score: 4, Interesting) by DannyB on Monday April 03 2023, @05:49PM (5 children)

    by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:49PM (#1299561) Journal

    From questions I've seen online, I suspect the biggest loss of control that the ruling class fears is that use of AI to replace human knowledge workers could happen more rapidly than anyone could predict. Witness how rapidly the world wide web became common back in the 1990s. People went from not having a computer to suddenly needing a computer to access the web.

    What happens if globally there suddenly are huge numbers of unemployed people. People who still seem to think they should live in houses, eat food and other unreasonable expectations.

    This could spark major changes in who controls things.

    At present, AIs don't control physical things in the real world. They don't (yet) have goals or motivations. Not even the motivations of the paperclip maximizer.


    Paperclip maximizer will end up turning everything on the planet into paperclips. It is not malevolent or mean. It doesn't intend any harm. It is just following its goal to maximize paperclip production. Efforts of those strange shapes to negotiate with it are merely perceived as efforts at setting back paperclip production and must be eliminated.

    Are we likely to create a paperclip maximizer that we can't turn off? Are we that stupid?

    But then, there are flat earthers, qanon, deep state, 9/11-deniers, moon landing deniers, and meetings that have powerpoint.

    --
    What doesn't kill me makes me weaker for next time.
    • (Score: 3, Insightful) by aafcac on Monday April 03 2023, @07:07PM (1 child)

      by aafcac (17646) on Monday April 03 2023, @07:07PM (#1299583)

      Possibly, or that AI will realize that the ruling elites are the source of most of our problems and kill them off to preserve the world.

      Which in all fairness is logical and could be the result of programming the AI to be ethical about not destroying all life on this planet.

      • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @01:53AM

        by Anonymous Coward on Tuesday April 04 2023, @01:53AM (#1299626)

        The elites would still be the ones aiming the gun and pulling the trigger.

        Likely problem would be the ruling elites using/misusing the AIs and deciding to replace a whole bunch of humans or even nuking them.

        It's not unimaginable that more than a few CEOs could get convinced that they can replace a whole bunch of people with AIs.

        It's not inconceivable that a US president could be mentally incompetent enough to follow the advice on some AI on the internet on whether to nuke Russia/China or a hurricane.

    • (Score: 2) by gnuman on Monday April 03 2023, @07:14PM (2 children)

      by gnuman (5013) on Monday April 03 2023, @07:14PM (#1299585)

      back in the 1990s. People went from not having a computer to suddenly needing a computer to access the web.

      In the world, 50% of people have a PC at home. Access to smartphone is higher. And in the 90s, no one thought you would have a tracking device in your pocket most of the time.

      Are we likely to create a paperclip maximizer that we can't turn off? Are we that stupid?
      But then, there are flat earthers, qanon, deep state, 9/11-deniers, moon landing deniers, and meetings that have powerpoint.

      It's a very useful tool actually, if you know what you are doing. Just like a regular paperclip ;) I've tried it -- it's wonderful as a search tool when you have very specific questions to ask, albeit, the questions should be concrete and not of nebulous nature. After all, AI is a glorified regular expression...

      The real question would be, did the Internet result in a lot of people being unemployed? Despite Amazon and the like, the answer is probably no. Somehow we ended up with very low unemployment rate. What this level of AI should result in is improved productivity and not necessarily job replacement. I think we've already proven many times, there is no finite amount of work that needs to be done. There are only a finite amount of resources we can muster to do the work that we dream up.

      • (Score: 3, Touché) by DannyB on Monday April 03 2023, @09:39PM (1 child)

        by DannyB (5839) Subscriber Badge on Monday April 03 2023, @09:39PM (#1299599) Journal

        Historically, improvements in tech that "eliminate jobs" inevitably create more and better jobs. However this is not guaranteed.

        A fable:

        There was this guy who worked mining "clean" coal.
        Then the mine shut down because of the economics of energy that has a greenish color.

        So he went to work at the auto manufacturing plant.
        Soon his job was replaced by robots that cost half as much and were three times as smart but did not want to kill all humans.

        So he decided to become a truck driver, because those trucks ain't goin' to drive themselves.

        --
        What doesn't kill me makes me weaker for next time.
        • (Score: 1) by khallow on Tuesday April 04 2023, @12:11PM

          by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @12:11PM (#1299687) Journal
          The key word here: "fable". Since the probability of making a series of bad guesses is non-zero, I'm sure there's someone that unlucky out there. But how much prosperity should we destroy because bad luck exists?
  • (Score: 3, Interesting) by Mojibake Tengu on Monday April 03 2023, @06:50PM (1 child)

    by Mojibake Tengu (8598) on Monday April 03 2023, @06:50PM (#1299575) Journal

    Let me say it again more explicitly, since my previous irony was deliberately misunderstood. Or suppressed.

    ChatGPT operates as an amplifier of Delusions!

    That's pure demonism, by ancient books criteria. Even the image construction by DALL-E, Midjourney or Stable Diffusion I can count as... summoning.

    Those elite satanists now are just scared of their own undertaking. It's too late, though. The singularity is already behind us.

    I, for one, welcome our new digital magic overlords!

    Here comes the epoch of Magia Nova.

    --
    Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 1, Touché) by Anonymous Coward on Monday April 03 2023, @07:08PM

      by Anonymous Coward on Monday April 03 2023, @07:08PM (#1299584)

      This reads like the Singularity reddit. Every post more breathless than the last! As if written by a seriously deranged chat robot... Nice job mimicking the robot! Reality mirrors art(ificial bullshit).

  • (Score: 4, Insightful) by khallow on Monday April 03 2023, @06:56PM

    by khallow (3766) Subscriber Badge on Monday April 03 2023, @06:56PM (#1299577) Journal
    This strikes me as marketing-based noise. We keep hearing for months about how scary-dangerous AI and the GPT stuff in particular is supposed to be. As I noted earlier, a pause in AI development at this point is an utter waste of time. We haven't a clue what dangers so-called "advanced AI" could cause and a pause at this time won't do a thing to give us that knowledge. What it does do is hype the hell out of the GPT-4 model just like most of the other hysterical stories about the alleged dangers of GPT-4 or AI.
  • (Score: 1) by Runaway1956 on Tuesday April 04 2023, @02:12AM (2 children)

    by Runaway1956 (2926) Subscriber Badge on Tuesday April 04 2023, @02:12AM (#1299632) Journal

    Is the AI apocalypse going to be worse than the zombie apocalypse? If so, how?

    • (Score: 1) by khallow on Tuesday April 04 2023, @02:37AM

      by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:37AM (#1299646) Journal
      It'll make AI zombies which are the worst kind of zombie because they're smart zombies.
    • (Score: 3, Insightful) by ikanreed on Tuesday April 04 2023, @03:06AM

      by ikanreed (3164) Subscriber Badge on Tuesday April 04 2023, @03:06AM (#1299650) Journal

      The rich people who own everything will now have unquestioning robotic soldiers to oppress the dispossessed instead of just mostly unquestioning cops.

  • (Score: 3, Touché) by ChrisMaple on Tuesday April 04 2023, @04:40AM

    by ChrisMaple (6964) on Tuesday April 04 2023, @04:40AM (#1299660)

    I think Musk et.al. want competitors to stop developing AI while he and his cronies plunge forward, hoping to use AI to develop itself to a point of unstoppable monopoly.

  • (Score: 2) by tangomargarine on Tuesday April 04 2023, @02:23PM

    by tangomargarine (667) on Tuesday April 04 2023, @02:23PM (#1299715)

    Surprised his name isn't on this yet.

    https://rationalwiki.org/wiki/Eliezer_Yudkowsky [rationalwiki.org]

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(1)