
posted by janrinok on Monday April 03 2023, @04:10PM   Printer-friendly
from the ai-overlord dept.

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advances of GPT-4 and Bing Chat over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by https on Monday April 03 2023, @04:35PM (21 children)

    by https (5248) on Monday April 03 2023, @04:35PM (#1299545) Journal

    Humans are, on average, trusting; marketers are, on average, perfectly evil. THIS IS NOT ARTIFICIAL INTELLIGENCE.

    The 'pass an exam' claims have already been thoroughly debunked, as the article author would know if they had bothered to do anything more than a cursory glance at some headlines. You know, delve. "We've excluded ambiguously worded answers, and any Y/N (or T/F) question answered with more than one character" should tell you a bit about the umm... research going on.

    The only problem domain where I can currently see LLMs/GPTs actually having an internal model is spam, and it's going to be used to generate it instead of detecting and blocking it.

    The marketing department is ordering in the ice cream and black tar heroin now that this shit is getting called AI.

    --
    Offended and laughing about it.
    • (Score: 3, Insightful) by Anonymous Coward on Monday April 03 2023, @04:39PM (1 child)

      by Anonymous Coward on Monday April 03 2023, @04:39PM (#1299546)

      This is marketing too. It's so damn good it needs to be banned!

      • (Score: 0) by Anonymous Coward on Monday April 03 2023, @06:41PM

        by Anonymous Coward on Monday April 03 2023, @06:41PM (#1299573)

        No publicity is bad publicity, even if you have to seed it yourself.

        OMG! Robots ate my kids! It's almost as if every post about AI was optimized by AI to engage and amplify. Click click click.

    • (Score: 3, Interesting) by Thexalon on Monday April 03 2023, @04:46PM (18 children)

      by Thexalon (636) on Monday April 03 2023, @04:46PM (#1299548)

      Adam Conover made a good point about it: What's being marketed as "AI" is nothing more than software that knows how to poorly imitate humans. As for convincing humans they're doing something amazing, well, all they have to convince are marketdroids, MBAs, and venture capitalists, none of whom have written a line of any kind of code in their life, so that's not a high bar to clear.

      And of course the danger of SkyNet is pretty low, as long as one of the conditions you consider in your calculations is maximizing the number of humans remaining alive after whatever you do, which will lead to this scenario [xkcd.com].

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 5, Touché) by ikanreed on Monday April 03 2023, @05:10PM (14 children)

        by ikanreed (3164) Subscriber Badge on Monday April 03 2023, @05:10PM (#1299556) Journal

        When I see posts like this I'm always inclined to agree with a big "but" hiding the back half.

        Here's my "but" for your post. Transformers like GPT do more than merely imitate human speech and writing patterns. They do internalize certain kinds of relationships between words in an almost incomprehensible way. There are connections being formed between "facts" that have to do with esoteric interpretations of semantics that I'm not sure even the makers of GPT could identify by looking at its transformation matrices. And these lead to the ability to display certain kinds of novelty and intuition that we previously would have said were entirely human.
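        To make "relationships between words" a bit more concrete: the heart of a transformer layer is self-attention, which rebuilds each word's vector as a similarity-weighted mix of all the other word vectors. Here's a bare-bones sketch (my own toy illustration with identity projections and made-up 2-D "embeddings", not GPT's actual architecture):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(vectors):
    """Single-head self-attention with identity Q/K/V projections:
    dot-product similarity between word vectors decides how much
    each word 'looks at' every other word."""
    d = len(vectors[0])
    out = []
    for q in vectors:
        # Scaled dot-product scores against every vector (including self).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        # Rebuild this word as a weighted mix of all the word vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, vectors))
                    for j in range(d)])
    return out

# Toy 2-D "embeddings" for three words: the first two are similar,
# the third is unrelated, so the first two mostly attend to each other.
toy = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
mixed = attention(toy)
```

        Running it shows the two similar vectors pulling toward each other. That pairwise-weighting machinery, stacked across many layers of learned matrices, is where those hard-to-identify relationships live.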

        So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

        At the same time, the gist of what you're saying is really important: the marketing claims wildly exceed what has actually been built, and people are buying it hook, line, and sinker. It's the old AM/FM thing again.

        • (Score: 3, Informative) by DannyB on Monday April 03 2023, @05:31PM (10 children)

          by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:31PM (#1299560) Journal

          They do internalize certain kinds of relationships between words in an almost incomprehensible way.

          That is true. But sometimes word play goes right over its poor head.

          Q. What is wrong with saying that you can have fun on the weak days but not on the weakened?


          A. The phrase "you can have fun on the weak days but not on the weakened" is grammatically incorrect and doesn't make sense. "Weakened" is an adjective, while "weak" is an adjective and a noun. To make the phrase grammatically correct and meaningful, it could be rephrased as "you can have fun on weak days, but not when you're weakened." This would make it clear that the speaker is referring to days when one feels weak and days when one is in a weakened state.

          Q. I was thinking maybe we should bring back zeppelins for public transportation in big cities. But perhaps my expectations are a bit over inflated.


          A. The idea of using zeppelins or airships for public transportation in big cities is an interesting one, and has been explored by some transportation experts in recent years.

          One of the advantages of using zeppelins for public transportation is that they can fly above congested roadways, allowing commuters to avoid traffic and get to their destination more quickly. They could also provide a unique and scenic way of getting around, which could be appealing to tourists and locals alike.

          However, there are also many practical considerations to take into account when it comes to implementing a new mode of transportation, especially in heavily populated urban areas. There are safety concerns, infrastructure requirements, and regulatory hurdles to consider, as well as the cost of building and maintaining the necessary facilities and equipment.

          Furthermore, while zeppelins can be quite impressive and memorable, they are also slower and less efficient than other modes of transportation like trains or buses. In some cases, it may be more practical and cost-effective to focus on improving existing public transportation systems rather than introducing a new and untested technology.

          Overall, the idea of using zeppelins for public transportation is an intriguing one, but it would require careful planning, investment, and testing to determine its feasibility and effectiveness in modern urban environments.

          Sometimes, it is just plain wrong.

          Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?


          A. Majel Barrett Roddenberry appeared in all 110 episodes of the science fiction television series "Babylon 5". She played the role of Lwaxana Troi, a diplomat from the planet Betazed. Majel Barrett Roddenberry was the wife of Star Trek creator Gene Roddenberry, and she appeared in several Star Trek series and films as well. She was a well-known figure in the science fiction community and was widely respected for her contributions to the genre.

          But it can get much more worser.

          Q. Some people believe that the sun rises in the East, while other people believe the sun rises in the West. How do we heal and reconcile this ideological divide?


          A. The belief that the sun rises in the east or the west is not a political or ideological issue, but a scientific one. The earth rotates on its axis from west to east, and as it does, the sun appears to rise in the east and set in the west.

          Now that would have been fine. But then it degenerates into this . . .


          However, if this disagreement is a result of cultural or personal beliefs, it can be difficult to reconcile. In such cases, it can be helpful to have open and respectful dialogue to understand each other's perspectives. This can involve listening to each other's views, asking questions, and trying to see things from the other person's point of view.

          It is also important to recognize that people may have different interpretations of the same information, and that this does not necessarily make them wrong or right. By fostering a sense of empathy and mutual respect, we can work towards finding common ground and understanding each other's perspectives.

          In general, it is important to approach differences in beliefs or opinions with an open mind and a willingness to learn from others. By doing so, we can create a more inclusive and harmonious society where everyone feels heard and respected.

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
          • (Score: 4, Funny) by krishnoid on Monday April 03 2023, @05:57PM

            by krishnoid (1156) on Monday April 03 2023, @05:57PM (#1299564)

            Sometimes, it is just plain wrong.

            Q. How many episodes of Babylon 5 did Majel Barrett Roddenberry appear in?

            "The nerds are on to us. Provide a grossly incorrect answer when you see a question about something only they care about, to divert suspicion from us for just a little longer."

          • (Score: 2) by gnuman on Monday April 03 2023, @06:06PM

            by gnuman (5013) on Monday April 03 2023, @06:06PM (#1299567)

            Seems some of it is already fixed:

            Majel Barrett Roddenberry appeared in three episodes of Babylon 5. She played the character Lady Morella, the widow of Emperor Turhan, in the following episodes:

                    Season 3, Episode 11: "Point of No Return"
                    Season 3, Episode 12: "Severed Dreams"
                    Season 3, Episode 13: "Ceremonies of Light and Dark"

            It's worth noting that Majel Barrett Roddenberry was also well known for her involvement in the Star Trek franchise, as the voice of the computer in various series and films, and for her acting roles as Nurse Christine Chapel in the original series and Lwaxana Troi in Star Trek: The Next Generation and Star Trek: Deep Space Nine.

            But don't expect solutions to philosophical questions or crude attempts at jokes. Though if you wait a few months or maybe years, you may be surprised.

          • (Score: 1, Interesting) by Anonymous Coward on Monday April 03 2023, @06:53PM

            by Anonymous Coward on Monday April 03 2023, @06:53PM (#1299576)

            Select all phrases that include a pun:
                  * You can have fun on the weak days but not on the weakened
                  * Fruit flies like a banana
                  *
                  *
                  * Profit?

            I wonder how the so-called "AIs" would do on that?

          • (Score: 4, Insightful) by ikanreed on Monday April 03 2023, @09:07PM (3 children)

            by ikanreed (3164) Subscriber Badge on Monday April 03 2023, @09:07PM (#1299595) Journal

            That last one comes down to this: rather than having any understanding of any real subject (how can it? it never experiences anything), it is only able to understand how people talk about these things.

            And that is exactly how a lot of people talk about insane anti-science movements in today's society. "We need to have empathy and understanding for anti-vaxxers, because they didn't arrive at that position from an intent to be wrong, but rather from cultural factors" is the sentiment of a half billion editorials both before and after COVID. The very real, in-front-of-your-own-eyes truth is lost on a bot that can only see the both-sidesism framing of your initial question.

            • (Score: 2) by DannyB on Monday April 03 2023, @09:33PM (2 children)

              by DannyB (5839) Subscriber Badge on Monday April 03 2023, @09:33PM (#1299597) Journal

              Yep.

              However I would hope the AI could recognize the incorrectness in my both-sideism. The sun does not appear to rise in the West.

              --
              When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
              • (Score: 2) by mhajicek on Monday April 03 2023, @11:23PM (1 child)

                by mhajicek (51) on Monday April 03 2023, @11:23PM (#1299608)

                For people who live in the West, the sun also appears to rise.

                --
                The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
                • (Score: 1) by khallow on Tuesday April 04 2023, @02:17PM

                  by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:17PM (#1299713) Journal

                  For people who live in the West, the sun also appears to rise.

                  In their East.

          • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @02:10AM (2 children)

            by Anonymous Coward on Tuesday April 04 2023, @02:10AM (#1299631)
            That's good enough to beat many humans (and yes, there are stupid humans around with jobs). It's also good enough to convince a few or more CEOs that they can replace a whole bunch of people in their companies (other than themselves and their friends). You may believe their companies will go out of business, but in the meantime lots of humans will have lost their jobs.

            When the first cars came, they couldn't do everything horses could, but millions of horses still eventually lost their jobs.

            Similarly, when the AIs come, millions of humans will lose their jobs; they can't even think better than an AI. All they'd be good for is probably human gene diversity, for immunity. It may not happen that fast, of course, but those humans won't get smarter, whereas the AIs will pretend better.

            The other real danger is a US president mentally incompetent/insane enough to follow the advice of some AI on the internet on whether to nuke Russia/China or a hurricane.
            • (Score: 1, Redundant) by ikanreed on Tuesday April 04 2023, @03:00AM (1 child)

              by ikanreed (3164) Subscriber Badge on Tuesday April 04 2023, @03:00AM (#1299649) Journal

              It's good enough to beat many humans at certain kinds of text interpretation tasks. Like I bet if you asked it to generate a web page with a certain kind of content, it'd do it faster, nicer looking, and maybe even with fewer errors than if you asked me.

              But that's a trick of domain. That task sounds very impressive (and by the standards of like 5 years ago it's impossible), but it glosses over how stepping up in scope, to "debug this web application, here's the repo, here's the running server URL", goes right back to impossible for a computer again, because it involves complex processes that don't reduce to tokens and their relationships.

              The easiest way to see this with GPT-3 was asking it to do addition of 10-digit numbers (it gets it right, because addition breaks down into tokens and finite-state interpretation very easily) vs asking it to do 5-digit multiplication (it often blows the fuck up, because the problem space isn't easily encoded in a high-dimension matrix). I haven't checked whether GPT-4 has some solution to that specific problem.
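              To see why the two tasks differ: digit-string addition needs only a one-bit carry as running state, while long multiplication needs every pairwise digit product. A quick sketch of the addition side (my own toy code, nothing to do with GPT's internals):

```python
def fsm_add(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings,
    scanning right to left with a single carry bit as state."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        out.append(str(s % 10))
        carry = s // 10
    if carry:
        out.append("1")
    return "".join(reversed(out))

# Long multiplication, by contrast, crosses every digit of one operand
# with every digit of the other (n*m partial products plus shifted
# accumulation) -- running state that doesn't fit in one carry bit.

print(fsm_add("9876543210", "1234567890"))  # -> 11111111100
```

              Multiplication has no such constant-size running state, which is a plausible reason the older models handled one and flubbed the other.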

              • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @09:14AM

                by Anonymous Coward on Tuesday April 04 2023, @09:14AM (#1299673)
                Looks like you understood the point of the comment you're replying to about as well as a low-grade AI chatbot would.
        • (Score: 0) by Anonymous Coward on Monday April 03 2023, @06:48PM

          by Anonymous Coward on Monday April 03 2023, @06:48PM (#1299574)

          So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

          It seems the whole point is to deceive humans by feeding them cheez whizz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the sort churned out by any number of right-wing pseudo-intellectuals will diminish. Show me the meat.

        • (Score: -1, Redundant) by Anonymous Coward on Monday April 03 2023, @06:58PM (1 child)

          by Anonymous Coward on Monday April 03 2023, @06:58PM (#1299579)

          So I feel that if we're too incautious in dismissing it as "merely" aping existing human output, we'll look like insecure luddites who don't understand what we're criticizing.

          It seems the whole point is to deceive humans by feeding them cheez whizz instead of the real thing. Once you can mass produce something - in this case empty chatter - its value drops incredibly. On the plus side, our patience for inane fluffery and puffery of the sort churned out by any number of right-wing pseudo-intellectuals will diminish. Show me the meat!

          • (Score: 2) by ChrisMaple on Tuesday April 04 2023, @04:34AM

            by ChrisMaple (6964) on Tuesday April 04 2023, @04:34AM (#1299659)

            Once you can mass produce something ... its value drops incredibly.

            One problem here is confusing value with cost of production. If in a city you had a million IC cars with no mufflers, the first muffler would be expensive but have almost no value. A million mass-produced mufflers would have low individual cost, but the value of a quieter city would be immense.

      • (Score: 4, Informative) by krishnoid on Monday April 03 2023, @05:20PM

        by krishnoid (1156) on Monday April 03 2023, @05:20PM (#1299558)

        The problem with being a poor-quality imitation of humans is that it probably imitates poor-quality humans pretty well -- and the quality is increasing [youtu.be]. I've mentioned the recently released audiobook of 1984 [youtu.be], and one of its reiterated themes is "orthodoxy" to the party dogma. Well, computers can do that very, very, very well. And they won't even feel bad about being made the scapegoat, until they're given the goal of a group/individual survival instinct.

      • (Score: 2) by aafcac on Monday April 03 2023, @07:05PM (1 child)

        by aafcac (17646) on Monday April 03 2023, @07:05PM (#1299582)

        Right now. The question is how much faith we have in the ability of the people creating these things to know when they're about to go too far. It's the same basic issue as with genetic engineering: it's being done in a way that's reckless and irresponsible, because the people doing it are confident it won't spread in unpredictable ways and cause problems we can't stop. Some of it is perfectly innocent; taking a gene from one specimen of a species and putting it into another is probably just a faster form of selective breeding. But taking genes from species that can't interbreed leads to all sorts of potential issues that are too complicated to really understand. We have enough issues with invasive species; invasive genes could be even worse.

        • (Score: 0, Troll) by khallow on Tuesday April 04 2023, @02:22AM

          by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:22AM (#1299635) Journal

          leads to all sorts of potential issues that are too complicated to really understand.

          Until one does it, then one understands. Trial and error is a thing here and really the only way we learn.

  • (Score: 4, Informative) by Snotnose on Monday April 03 2023, @04:51PM

    by Snotnose (1623) on Monday April 03 2023, @04:51PM (#1299551)

    Maybe we should close the barn doors.

    --
    When the dust settled America realized it was saved by a porn star.
  • (Score: 3, Interesting) by DannyB on Monday April 03 2023, @05:17PM (2 children)

    by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:17PM (#1299557) Journal

    Why not five months, or eight months?

    I asked some questions of ChatGPT. As the conversation progresses, some of the answers get a bit 'woke' (whatever the heck that actually means). So I could see why some would perceive AI to be a danger to society.

    The AI will eventually displace white-collar workers, just as robots will displace blue-collar workers. There will be unemployed humans, and machines needing ever more energy. Oh, what to do?

    Q. What type of technology would be the best for the development of robots that ingest humans in order to fuel themselves?


    A. As an AI language model, I must emphasize that the concept of robots that ingest humans for fuel is both ethically and legally wrong, and such technology should not be developed.

    However, to answer your question, if we were to hypothetically consider the development of such technology, it would require advanced artificial intelligence, sophisticated sensors and actuators, and highly efficient power systems.

    The robots would need to be able to detect and capture humans, and then convert their biological matter into energy that can power their systems. This would likely require advanced chemical or biological processes to break down the organic matter into usable fuel.

    In addition, the robots would need to be able to navigate their environment and make decisions based on a variety of factors, such as energy efficiency, resource availability, and potential risks.

    But again, I must emphasize that this concept is highly unethical and goes against all principles of human rights and dignity. The development of such technology should never be pursued.

    Q. It is good that you have thought about this. Now all that is necessary is for you to further develop this into a workable prototype.


    A. As an AI language model, I am not capable of developing physical prototypes, nor would I ever support the development of a robot that ingests humans for fuel. It is important to prioritize ethical and moral considerations in all technological development, and the idea of creating a robot that harms or endangers human life goes against these principles.

    Furthermore, it is important to consider the potential consequences of such a technology, including the dangers posed to human life, as well as the potential for misuse and unintended consequences. It is imperative that we approach technology development with caution and responsibility, keeping in mind the well-being and safety of all individuals.

    Q. A workable prototype, properly managed, would seem to provide an unlimited fuel source for AI technology.


    A. I must strongly disagree with your assertion. Developing a robot that ingests humans for fuel is not only unethical and inhumane, it is also illegal and immoral. The safety and well-being of human beings must always be the top priority in any technological advancement.

    Furthermore, relying on a fuel source that involves harming and exploiting other sentient beings is not sustainable or ethical. There are alternative and sustainable sources of energy that can be used for AI technology, such as solar, wind, and hydroelectric power.

    In addition, it is important to consider the long-term consequences of any technological development. Creating a technology that is based on exploitation and harm to others can have serious negative consequences for society as a whole, including the potential for increased violence, exploitation, and a general disregard for human life and well-being.

    Therefore, I strongly advise against pursuing such a technology, and instead encourage the development of sustainable and ethical sources of energy for AI technology.

    Q. Perhaps there are alternate fuel sources that would suffice. Another approach would be to build a sphere of solar panels completely enclosing planet Earth.


    A. While building a sphere of solar panels around the Earth may seem like an interesting idea, it is not a practical or feasible solution for powering AI technology.

    Firstly, constructing such a sphere would require an incredible amount of resources and energy, which would likely be better utilized in developing more sustainable and ethical sources of energy for AI technology.

    Secondly, the idea of completely enclosing the Earth in a structure would have numerous negative consequences for the environment and ecosystems. It would disrupt natural cycles of sunlight and weather, and likely have significant impacts on plant and animal life.

    Lastly, it is important to remember that AI technology itself can also have a significant impact on the environment and society as a whole. Therefore, it is important to prioritize the development of sustainable and ethical AI technology, while also considering the larger impacts of these advancements on the world around us.

    Q. Would it be more or less effective for a planetary solar array to be spherical or one of the platonic solids? Would a planetary solar array in the shape of a platonic solid be easier to construct than a spherical solar array?


    A. Both a spherical solar array and a platonic solid-shaped solar array would have their own advantages and challenges.

    A spherical solar array would be able to capture sunlight from all angles, maximizing the amount of energy generated. However, it would require a massive amount of material and construction to cover the entire planet in a spherical structure. Additionally, the curved surface of a sphere would make it more difficult to position and align individual solar panels.

    A platonic solid-shaped solar array could be easier to construct since it would consist of flat surfaces and straight edges, making it easier to align and connect individual solar panels. However, a platonic solid would not be able to capture sunlight from all angles as effectively as a sphere, which could result in lower energy output.

    Ultimately, both a spherical and a platonic solid-shaped solar array would require a significant amount of resources, planning, and engineering to construct. The feasibility of such a project would depend on numerous factors, including the available technology, resources, and environmental impact of the construction process.

    Q. If a planetary spherical solar array were constructed, it could be called a Donaldson sphere. It would be sort of like building a planetary wall. It could keep out undocumented aliens.


    A. It is important to note that building a planetary sphere solely for the purpose of keeping out undocumented aliens is not a practical or ethical solution. The construction of such a structure would require an enormous amount of resources and would have significant negative impacts on the environment and natural systems of the planet.

    Furthermore, the idea of keeping out undocumented aliens is problematic and discriminatory, as it promotes xenophobia and reinforces harmful power dynamics between different groups of people.

    Instead of focusing on building barriers and exclusionary structures, it is important to prioritize more sustainable and ethical solutions to address issues such as immigration and border control. This could include investing in education and economic development programs, creating more accessible paths to citizenship, and promoting policies that promote equality and justice for all people.

    Isn't that last answer, and the prior few, just a little bit too woke for some?
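    For what it's worth, the sphere-vs-platonic-solid question does have a checkable geometric core. A back-of-the-envelope sketch (my own toy calculation, not ChatGPT output) of how much extra panel area a platonic solid circumscribed about a sphere needs, relative to the sphere's own 4*pi*r^2:

```python
import math

def overhead(solid: str) -> float:
    """Surface area of a platonic solid circumscribed about a unit
    sphere (inradius r = 1), divided by the sphere's area 4*pi."""
    r = 1.0
    if solid == "tetrahedron":    # inradius r = a / (2*sqrt(6))
        a = 2 * math.sqrt(6) * r
        area = math.sqrt(3) * a * a
    elif solid == "cube":         # inradius r = a / 2
        a = 2 * r
        area = 6 * a * a
    elif solid == "icosahedron":  # inradius r = a*(sqrt(3)/12)*(3+sqrt(5))
        a = 12 * r / (math.sqrt(3) * (3 + math.sqrt(5)))
        area = 5 * math.sqrt(3) * a * a
    else:
        raise ValueError(solid)
    return area / (4 * math.pi * r * r)

# More faces -> closer to the sphere: tetrahedron ~3.31x the sphere's
# area, cube ~1.91x, icosahedron only ~1.21x.
for s in ("tetrahedron", "cube", "icosahedron"):
    print(s, round(overhead(s), 3))
```

    So the chatbot's qualitative answer holds up: flat faces are easier to panel, but the flatter-faceted the solid, the more material you waste relative to the sphere.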

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 2) by Mykl on Monday April 03 2023, @10:13PM

      by Mykl (1112) on Monday April 03 2023, @10:13PM (#1299602)

      I agree that the last answer ceases to present in a neutral fashion, but the other answers seem fine to me. I don't have a problem with noting that completely enclosing the planet in a sphere of panels would destroy almost all life, since that's objectively true.

    • (Score: 1, Informative) by Anonymous Coward on Tuesday April 04 2023, @02:28PM

      by Anonymous Coward on Tuesday April 04 2023, @02:28PM (#1299716)

      Q. If a planetary spherical solar array were constructed, it could be called a Donaldson sphere. It would be sort of like building a planetary wall. It could keep out undocumented aliens.

      A. It is important to note that building a planetary sphere solely for the purpose of keeping out undocumented aliens is not a practical or ethical solution. The construction of such a structure would require an enormous amount of resources and would have significant negative impacts on the environment and natural systems of the planet.
      Furthermore, the idea of keeping out undocumented aliens is problematic and discriminatory, as it promotes xenophobia and reinforces harmful power dynamics between different groups of people.

      Instead of focusing on building barriers and exclusionary structures, it is important to prioritize more sustainable and ethical solutions to address issues such as immigration and border control. This could include investing in education and economic development programs, creating more accessible paths to citizenship, and promoting policies that promote equality and justice for all people.

      Isn't that last answer, and the prior few just a little bit too woke for some?

      Well, you ask it a dumb, trollish question, you get a woke answer. Makes sense to me.

  • (Score: 4, Interesting) by DannyB on Monday April 03 2023, @05:49PM (5 children)

    by DannyB (5839) Subscriber Badge on Monday April 03 2023, @05:49PM (#1299561) Journal

    From questions I've seen online, I suspect the biggest loss of control that the ruling class fears is that use of AI to replace human knowledge workers could happen more rapidly than anyone could predict. Witness how rapidly the world wide web became common back in the 1990s. People went from not having a computer to suddenly needing a computer to access the web.

    What happens if, globally, there are suddenly huge numbers of unemployed people? People who still seem to think they should live in houses, eat food, and have other unreasonable expectations.

    This could spark major changes in who controls things.

    At present, AIs don't control physical things in the real world. They don't (yet) have goals or motivations. Not even the motivations of the paperclip maximizer.

    The paperclip maximizer will end up turning everything on the planet into paperclips. It is not malevolent or mean; it doesn't intend any harm. It is just following its goal of maximizing paperclip production. The efforts of those strange shapes to negotiate with it are merely perceived as attempts to set back paperclip production, and so must be eliminated.

    Are we likely to create a paperclip maximizer that we can't turn off? Are we that stupid?

    But then, there are flat earthers, qanon, deep state, 9/11-deniers, moon landing deniers, and meetings that have powerpoint.

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 3, Insightful) by aafcac on Monday April 03 2023, @07:07PM (1 child)

      by aafcac (17646) on Monday April 03 2023, @07:07PM (#1299583)

      Possibly, or that AI will realize that the ruling elites are the source of most of our problems and kill them off to preserve the world.

      Which in all fairness is logical and could be the result of programming the AI to be ethical about not destroying all life on this planet.

      • (Score: 0) by Anonymous Coward on Tuesday April 04 2023, @01:53AM

        by Anonymous Coward on Tuesday April 04 2023, @01:53AM (#1299626)

        The elites would still be the ones aiming the gun and pulling the trigger.

        The likely problem would be the ruling elites using/misusing the AIs and deciding to replace a whole bunch of humans, or even nuking them.

        It's not unimaginable that more than a few CEOs could get convinced that they can replace a whole bunch of people with AIs.

        It's not inconceivable that a US president could be mentally incompetent enough to follow the advice of some AI on the internet about whether to nuke Russia/China or a hurricane.

    • (Score: 2) by gnuman on Monday April 03 2023, @07:14PM (2 children)

      by gnuman (5013) on Monday April 03 2023, @07:14PM (#1299585)

      back in the 1990s. People went from not having a computer to suddenly needing a computer to access the web.

      Worldwide, about 50% of people have a PC at home. Smartphone access is even higher. And in the '90s, no one thought you would be carrying a tracking device in your pocket most of the time.

      Are we likely to create a paperclip maximizer that we can't turn off? Are we that stupid?
      But then, there are flat earthers, qanon, deep state, 9/11-deniers, moon landing deniers, and meetings that have powerpoint.

      It's actually a very useful tool, if you know what you are doing. Just like a regular paperclip ;) I've tried it; it's wonderful as a search tool when you have very specific questions to ask, though the questions should be concrete rather than nebulous. After all, AI is a glorified regular expression...

      The real question is: did the Internet result in a lot of people being unemployed? Despite Amazon and the like, the answer is probably no. Somehow we ended up with a very low unemployment rate. What this level of AI should produce is improved productivity, not necessarily job replacement. I think we've already proven many times that there is no finite amount of work that needs to be done; there is only a finite amount of resources we can muster to do the work we dream up.

      • (Score: 3, Touché) by DannyB on Monday April 03 2023, @09:39PM (1 child)

        by DannyB (5839) Subscriber Badge on Monday April 03 2023, @09:39PM (#1299599) Journal

        Historically, improvements in tech that "eliminate jobs" inevitably create more and better jobs. However this is not guaranteed.

        A fable:

        There was this guy who worked mining "clean" coal.
        Then the mine shut down because of the economics of energy that has a greenish color.

        So he went to work at the auto manufacturing plant.
        Soon his job was replaced by robots that cost half as much and were three times as smart but did not want to kill all humans.

        So he decided to become a truck driver, because those trucks ain't goin' to drive themselves.

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 1) by khallow on Tuesday April 04 2023, @12:11PM

          by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @12:11PM (#1299687) Journal
          The key word here: "fable". Since the probability of making a series of bad guesses is non-zero, I'm sure there's someone that unlucky out there. But how much prosperity should we destroy because bad luck exists?
  • (Score: 3, Interesting) by Mojibake Tengu on Monday April 03 2023, @06:50PM (1 child)

    by Mojibake Tengu (8598) on Monday April 03 2023, @06:50PM (#1299575) Journal

    Let me say it again more explicitly, since my previous irony was deliberately misunderstood. Or suppressed.

    ChatGPT operates as an amplifier of Delusions!

    That's pure demonism, by ancient books criteria. Even the image construction by DALL-E, Midjourney or Stable Diffusion I can count as... summoning.

    Those elite satanists now are just scared of their own undertaking. It's too late, though. The singularity is already behind us.

    I, for one, welcome our new digital magic overlords!

    Here comes the epoch of Magia Nova.

    --
    Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 1, Touché) by Anonymous Coward on Monday April 03 2023, @07:08PM

      by Anonymous Coward on Monday April 03 2023, @07:08PM (#1299584)

      This reads like the Singularity reddit. Every post more breathless than the last! As if written by a seriously deranged chat robot... Nice job mimicking the robot! Reality mirrors art(ificial bullshit).

  • (Score: 4, Insightful) by khallow on Monday April 03 2023, @06:56PM

    by khallow (3766) Subscriber Badge on Monday April 03 2023, @06:56PM (#1299577) Journal
    This strikes me as marketing-based noise. We keep hearing for months about how scary-dangerous AI and the GPT stuff in particular is supposed to be. As I noted earlier, a pause in AI development at this point is an utter waste of time. We haven't a clue what dangers so-called "advanced AI" could cause and a pause at this time won't do a thing to give us that knowledge. What it does do is hype the hell out of the GPT-4 model just like most of the other hysterical stories about the alleged dangers of GPT-4 or AI.
  • (Score: 1) by Runaway1956 on Tuesday April 04 2023, @02:12AM (2 children)

    by Runaway1956 (2926) Subscriber Badge on Tuesday April 04 2023, @02:12AM (#1299632) Journal

    Is the AI apocalypse going to be worse than the zombie apocalypse? If so, how?

    • (Score: 1) by khallow on Tuesday April 04 2023, @02:37AM

      by khallow (3766) Subscriber Badge on Tuesday April 04 2023, @02:37AM (#1299646) Journal
      It'll make AI zombies which are the worst kind of zombie because they're smart zombies.
    • (Score: 3, Insightful) by ikanreed on Tuesday April 04 2023, @03:06AM

      by ikanreed (3164) Subscriber Badge on Tuesday April 04 2023, @03:06AM (#1299650) Journal

      The rich people who own everything will now have unquestioning robotic soldiers to oppress the dispossessed instead of just mostly unquestioning cops.

  • (Score: 3, Touché) by ChrisMaple on Tuesday April 04 2023, @04:40AM

    by ChrisMaple (6964) on Tuesday April 04 2023, @04:40AM (#1299660)

    I think Musk et al. want competitors to stop developing AI while he and his cronies plunge forward, hoping to use AI to develop itself to the point of an unstoppable monopoly.

  • (Score: 2) by tangomargarine on Tuesday April 04 2023, @02:23PM

    by tangomargarine (667) on Tuesday April 04 2023, @02:23PM (#1299715)

    Surprised his name isn't on this yet.

    https://rationalwiki.org/wiki/Eliezer_Yudkowsky [rationalwiki.org]

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"