
posted by janrinok on Sunday November 26, @04:29PM

Arthur T Knackerbracket has processed the following story:

Almost a week has passed since the OpenAI board fired CEO Sam Altman without explaining its actions. By Tuesday, the board had reinstated Altman and appointed a new board to oversee OpenAI’s operations. An investigation into what happened was also promised, something I believe all ChatGPT users deserve. We’re talking about a company developing an incredibly exciting resource, AI. But also one that could eradicate humanity. Or so some people fear.

Theories were running rampant in the short period between Altman’s ouster and return, with some speculating that OpenAI had developed an incredibly strong GPT-5 model. Or that OpenAI had reached AGI, artificial general intelligence that could operate just as well as humans. Or that the board was simply doing its job, protecting the world against the irresponsible development of AI.

It turns out the guesses and memes weren’t too far off. We’re not on the verge of dealing with dangerous AI, but a new report says that OpenAI delivered a massive breakthrough in the days preceding Altman’s firing.

The new algorithm (Q* or Q-Star) could threaten humanity, according to a letter unnamed OpenAI researchers sent to the board. The letter and the Q-Star algorithm might have been key developments that led to the firing of Altman.

According to Reuters, which has not seen the letter, the document was only one factor. There’s apparently a longer list of grievances that convinced the board to fire Altman. The board worried about the company’s fast pace of commercializing ChatGPT advances before understanding the consequences.

OpenAI declined to comment to Reuters, but the company acknowledged project Q-Star in a message to staffers, as well as the letter to the board. Mira Murati, the first interim CEO the board appointed after letting Altman go, apparently alerted the staff to the Q-Star news that was about to break.

It’s too early to tell whether Q-Star is AGI, and OpenAI was busy with the CEO drama rather than making public announcements. And the company might not want to announce such an innovation anytime soon, especially if caution is needed.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Once math is conquered, AI will have greater reasoning capabilities resembling human intelligence. After that, AI could work on novel scientific research.

The letter flagged the potential danger of Q-Star, although it’s unclear what the safety concerns are. Generally speaking, researchers worry that the dawn of AI might also lead to the demise of the human species.

As more intelligent AI comes along, it might decide that destroying the human species would better serve its interests. It sounds like the premise of The Terminator and The Matrix, but it’s one fear AI researchers have.

It’s unclear whether Q-Star is the key development that could lead there. But we’ll only know after the fact.

Researchers have also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed. The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

While OpenAI is yet to confirm this rumored innovation, Sam Altman did tease a big breakthrough in the days preceding his ouster.


Original Submission

Related Stories

Exploring the Emergence of Technoauthoritarianism 7 comments

The theoretical promise of AI is as hopeful as the promise of social media once was, and as dazzling as its most partisan architects project. AI really could cure numerous diseases. It really could transform scholarship and unearth lost knowledge. Except that Silicon Valley, under the sway of its worst technocratic impulses, is following the playbook established in the mass scaling and monopolization of the social web:

Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)

And yet, to a remarkable degree, Facebook’s way of doing business remains the norm for the tech industry as a whole, even as other social platforms (TikTok) and technological developments (artificial intelligence) eclipse Facebook in cultural relevance.

The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement.

[...] The Shakespearean drama that unfolded late last year at OpenAI underscores the extent to which the worst of Facebook’s “move fast and break things” mentality has been internalized and celebrated in Silicon Valley. OpenAI was founded, in 2015, as a nonprofit dedicated to bringing artificial general intelligence into the world in a way that would serve the public good. Underlying its formation was the belief that the technology was too powerful and too dangerous to be developed with commercial motives alone.



Original Submission

OpenAI CEO Altman Wasn't Fired Because of Scary New Tech, Just Internal Politics 7 comments

https://arstechnica.com/information-technology/2024/03/openai-ceo-sam-altmans-conduct-did-not-mandate-removal-says-independent-review/

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company's board of directors and released the results of an independent review of the events surrounding CEO Sam Altman's surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.
[...]
The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman's abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. "WilmerHale... found that the prior Board's decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners."

Instead, the review determined that the prior board's actions stemmed from a breakdown in trust between the board and Altman.
[...]
Altman's surprise firing occurred after he attempted to remove Helen Toner from OpenAI's board due to disagreements over her criticism of OpenAI's approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, "i learned a lot from this experience. one think [sic] i'll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently."
[...]
After OpenAI's announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. "Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI," they wrote. "We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable."

Previously on SoylentNews:
Sam Altman Officially Back as OpenAI CEO: "We Didn't Lose a Single Employee" - 20231202
AI Breakthrough That Could Threaten Humanity Might Have Been Key To Sam Altman's Firing - 20231124
OpenAI CEO Sam Altman Purged, President Brockman Quits, but Maybe They'll All Come Back After All - 20231119


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Rosco P. Coltrane on Sunday November 26, @04:48PM (40 children)

    by Rosco P. Coltrane (4757) on Sunday November 26, @04:48PM (#1334235)

    OpenAI is on the verge of a humanity-changing discovery... Massive drama seemingly occurring in carefully timed, highly mediatized installments...

    You know who did the same thing in years past? Dean Kamen. The Segway (which was just mysteriously called "It" at the time) was supposed to be the most important, Earth-shattering, society-transformative invention in the whole of human history. And then the Segway came out, and it turned out to be an unstable mobility scooter for overweight police officers and for dumb US presidents to take a tumble on and look even dumber.

    This whole OpenAI drama feels furiously like the "It" of 2023. Show me proof at some point, because I don't really buy it.

    • (Score: 4, Informative) by RamiK on Sunday November 26, @05:12PM

      by RamiK (1813) on Sunday November 26, @05:12PM (#1334239)

      ChatGPT and co. bots are commonly used in code reviews for GitHub pull requests. They even suggest backports in NixOS: https://github.com/NixOS/nixpkgs/pull/270151 [github.com]

      --
      compiling...
    • (Score: 3, Touché) by Tork on Sunday November 26, @05:18PM (28 children)

      by Tork (3914) Subscriber Badge on Sunday November 26, @05:18PM (#1334242)
      You've seen productive AI; you haven't seen Segways purchased by zillions of people.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 5, Insightful) by Anonymous Coward on Sunday November 26, @05:35PM (3 children)

        by Anonymous Coward on Sunday November 26, @05:35PM (#1334248)

        I've yet to see any AI. LLMs are not AI.

        • (Score: 3, Insightful) by anubi on Sunday November 26, @11:39PM (1 child)

          by anubi (2828) on Sunday November 26, @11:39PM (#1334301) Journal

          I just saw this...

          https://bgr.com/tech/i-want-the-iphone-15-pros-action-button-solely-for-chatgpt-voice/ [bgr.com]

          Someone asked AI for a weather report.

          For all its supposed intelligence, I found its reply not very informative, yet amusing, in a corporate way.

          It sure reminded me of working under highly paid executive types in a publicly owned corporation, hired for their credentials, yet seemingly knowing nothing work-related other than budgets.

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 0) by Anonymous Coward on Monday November 27, @04:26PM

            by Anonymous Coward on Monday November 27, @04:26PM (#1334365)

            Budgets and chains of command and the names of their boss's children.

        • (Score: 3, Funny) by Anonymous Coward on Monday November 27, @01:06PM

          by Anonymous Coward on Monday November 27, @01:06PM (#1334335)

          But they suspect this one can do MATH. Know who else do math? The CHINESE. We're all doomed.

      • (Score: 5, Insightful) by Rosco P. Coltrane on Sunday November 26, @05:39PM (3 children)

        by Rosco P. Coltrane (4757) on Sunday November 26, @05:39PM (#1334249)

        You've seen productive AI

        Not really...

        • (Score: 5, Insightful) by Tork on Sunday November 26, @07:11PM (2 children)

          by Tork (3914) Subscriber Badge on Sunday November 26, @07:11PM (#1334268)
          Just today I saw that Amazon is using AI to summarize product reviews. At work we use a Google service to chat internally. Now if you get behind in a conversation, a lil window will appear where their bot will attempt to summarize what's being discussed. We've been surprised at how well it works; it's even more impressive if you're aware of how small and niche our industry is, and it seems to handle the terminology okay. We've all seen AI-generated news articles; they've likely gone unnoticed. And just this morning I saw an ad for a phone where you take a zillion photos of people, like at a wedding, and you pick the best photo of each person and it "auto-photoshops" the preferred person's take into the shot. Oh, and image recognition, an early application of what we're today calling AI, has been a thing for a while and is pretty much at what we considered sci-fi not long ago.

          You may take issue with my examples, but I want to stress that A- this is stuff happening today, and B- the tech will only improve. The question isn't if, but when.
          --
          🏳️‍🌈 Proud Ally 🏳️‍🌈
          • (Score: 3, Informative) by Anonymous Coward on Monday November 27, @01:06AM (1 child)

            by Anonymous Coward on Monday November 27, @01:06AM (#1334308)

            I guess "pics!...Or it didn't happen!" is no longer relevant, as now images can be edited to lie almost as easily as words can.

            • (Score: 2) by Freeman on Monday November 27, @04:11PM

              by Freeman (732) on Monday November 27, @04:11PM (#1334358) Journal

              The new in thing will be "I was there or it didn't happen". Video is just a series of still images, so not even that will be believable soon enough.

              --
              Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 5, Insightful) by canopic jug on Sunday November 26, @05:59PM (12 children)

        by canopic jug (3949) Subscriber Badge on Sunday November 26, @05:59PM (#1334253) Journal

        You've seen productive AI, you haven't seen segways purchased by zillions of people.

        I haven't seen either, but I have seen tremendous amounts of hype about both, though the Segways are now in the distant past. Don't forget that the new owner of OpenAI, M$, is a lobbying firm extended from a marketing firm which gained notoriety from being handed an operating system monopoly by IBM.

        Anyway, the current round of "AI" hype has centered around Large Language Models, which are not AI. Furthermore, LLMs are inherently full of lies, as they can only create grammatically plausible sentences, an activity which has no relation to summarizing or retrieving facts.

        Many probably think of The Terminator or some similar AGI as the threat. However, that's a cheap strawman. The threat can be simpler and even quite unintelligent. LLMs could feasibly overwhelm the world with junk writing, basically disinformation, and thus bog down communication or, worse, cause everyone to disbelieve anything and everything.

        If the systems are made to be self-reproducing, then that could easily get out of hand and create a DDoS on some scale, perhaps permanently.

        AGI is another matter, but it would be a waste of its effort to hunt down and exterminate humanity. Instead, destruction would just be an eventual side effect of its general activities as an AGI. Noticing people is unlikely to even enter the picture. AGI would have less in common with us than we have with insects, and look how little consideration we have even for the ones essential to the survival of our civilization, like honey bees. Or it could just ramp up fossil carbon release and other long-term threats to continued human survival.

        Supposedly the unreleased OpenAI project, Q* (pronounced Q-asshole), can handle mathematical formulas. Great. Now it can challenge MATLAB, but at four or five orders of magnitude more operating cost. And speaking of operating costs, the whole m$ LLM circus is built on the failure of Azure, with its massively underutilized, over-provisioned data centers which burn electricity and release fossil carbon. Releasing more fossil carbon could be one way in which LLMs finish off civilization.

        --
        Money is not free speech. Elections should not be auctions.
        • (Score: 4, Interesting) by HiThere on Sunday November 26, @06:34PM (8 children)

          by HiThere (866) Subscriber Badge on Sunday November 26, @06:34PM (#1334261) Journal

          You are correct that LLMs are not AIs, but they're a significant PART of a true AGI. Also needed is a good evaluative faculty, and training data that is not included within the corpus of language. But neither of those is a gross extension of current technologies. It does imply a more extensive and (partially) different approach to training.

          FWIW, I wouldn't be surprised to see a basic AGI next year. This wouldn't be a direct "threat to humanity", but it would be something that could be developed into such. But something that's "better at solving math problems" would not qualify.

          Now if you had an LLM that could also play a flight simulator and drive a car (without retraining), then you'd have the basis for developing an AGI. (It would still be missing a few features. It would need, given a humanoid body, to be able to build a sturdy cabinet with saws, hammers, and screwdrivers after watching an instructional video. [I'm not asking that the cabinet be pretty, just sturdy.])

          OTOH, do note that what I'm requiring of a basic AGI is beyond the capabilities of lots of people. (But they do have capabilities that the "basic AGI" wouldn't have.)

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 4, Touché) by requerdanos on Sunday November 26, @07:35PM

            by requerdanos (5997) Subscriber Badge on Sunday November 26, @07:35PM (#1334275) Journal

            LLMs are not AIs, but they're a significant PART of a true AGI.

            And carbon is a significant PART of intelligent life, but like with LLMs-->AI, having the one doesn't mean you are even anywhere close to the other.

          • (Score: 2, Funny) by anubi on Monday November 27, @01:25AM (6 children)

            by anubi (2828) on Monday November 27, @01:25AM (#1334310) Journal

            From parent...

            [I'm not asking that the cabinet be pretty, just sturdy.]

            ...

            Can an AI see beauty in a flower? Even I have a lot of trouble finding meaning in a lot of art, visual or audio. While "The Rose" leaves me in tears, most music to me is nearly indistinguishable from the sound of a dump truck, or the anguished mutterings of someone who just stepped in a pile of fresh dog poo with brand-new shoes. Or cats in heat.

            --
            "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
            • (Score: 2) by maxwell demon on Monday November 27, @05:21AM (5 children)

              by maxwell demon (1608) on Monday November 27, @05:21AM (#1334313) Journal

              An AI as we currently build it? Definitely not. They don't have feelings. But I don't have any doubt that it is possible in principle for AIs to have feelings. The big problem is that we have no way to know whether it has them or just fakes them. And if it has them, whether those feelings are what it states they are.

              I certainly don't want an AGI that hates humans.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2, Touché) by Anonymous Coward on Monday November 27, @10:49AM (1 child)

                by Anonymous Coward on Monday November 27, @10:49AM (#1334330)

                The big problem is that we have no way to know whether it has them or just fakes them. And if it has them, whether those feelings are what it states they are.

                You could say the same about any human except yourself.

                • (Score: 0) by Anonymous Coward on Monday November 27, @04:33PM

                  by Anonymous Coward on Monday November 27, @04:33PM (#1334369)

                  This is the existential paradox. We have to believe we are not all alone. We don't know it. We have to delude ourselves in order to maintain the fiction that we are not alone. When the sad, sad truth is that we are so very alone.

              • (Score: 0) by Anonymous Coward on Monday November 27, @10:53AM (1 child)

                by Anonymous Coward on Monday November 27, @10:53AM (#1334331)

                I am quite certain that the first indication that a machine has become emotionally sentient will be that it hates humans. I have observed no other species that derives amusement from causing anguish to others, whether its own species or another.

                The Germans have a word for it : "Schadenfreude".

                If I judge myself, I question why I was created. I see it in myself. The very same trait I find so despicable. Oh, the things I would like to see happen to those elite who gloat in the misery they deal the powerless; how much I would like to see a role reversal. Sure, I can call my version of schadenfreude "righteous indignation", but a turd by any other name is still a turd.

                • (Score: 0) by Anonymous Coward on Monday November 27, @04:35PM

                  by Anonymous Coward on Monday November 27, @04:35PM (#1334371)

                  derives amusement

                  Amusement, dear boy? One shoots them because it's good for them. Keeps the herd thinned what?

              • (Score: 2) by HiThere on Monday November 27, @02:15PM

                by HiThere (866) Subscriber Badge on Monday November 27, @02:15PM (#1334337) Journal

                Wait a minute... he just asked "can it see beauty," not "can it feel beauty." AIs are, among other things, classification systems. They can certainly classify some things as beautiful and others not, and I'm rather certain that they could do that in a way that many people would agree with.

                Your point, that they can't feel beauty, is probably correct, but not inherent. It would, however, require a more complex motivational structure than what I've read causes me to believe AIs (any of them) currently have. (And I wasn't requiring a complex motivational structure to call something an AGI. I'm willing to call a tool AI an AGI if it satisfies the practical tests I mentioned... and can learn to extend itself.)

                --
                Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 1, Touché) by chucky on Sunday November 26, @10:38PM (1 child)

          by chucky (3309) on Sunday November 26, @10:38PM (#1334297)

          Would you please have a link to some more reading on “failure of Azure with its massively underutilized, over provisioned data centers”? Thank you.

          • (Score: 3, Interesting) by canopic jug on Monday November 27, @08:37AM

            by canopic jug (3949) Subscriber Badge on Monday November 27, @08:37AM (#1334323) Journal

            Would you please have a link to some more reading on “failure of Azure with its massively underutilized, over provisioned data centers”? Thank you.

            Search engines aren't what they used to be, and there are a lot of small comments which add up over the years. So you have to follow the topic in what remains of the press, since they won't summarize negative reports about their main advertising partner. Here is one to get started with, to show that the problems are still going on:

            Recently leaked court documents [siliconangle.com] during Microsoft Corp.’s Activision Blizzard hearing require us to revisit our cloud forecasts and market share data. The poorly redacted docs, which have since been removed from public viewing, suggest that Microsoft’s Azure revenue is at least 25% lower than our previous estimates.
            -- What leaked court docs tell us about AWS, Azure and Google cloud market shares [siliconangle.com]

            And the previous estimates were already low [medium.com] (backup copy at Azure Apparently Losing Money and Microsoft Lies to Shareholders, in Effect Breaking the Law [techrights.org]).

            --
            Money is not free speech. Elections should not be auctions.
        • (Score: 3, Funny) by corey on Wednesday November 29, @01:49AM

          by corey (2202) on Wednesday November 29, @01:49AM (#1334576)

          Your bit about M$ being a lobbying / former marketing firm that got its start from IBM (which made me chuckle) reminded me of the old W95 quote:

          “32 bit extensions and a graphical shell for a 16 bit patch to an 8 bit operating system originally coded for a 4 bit microprocessor, written by a 2 bit company, that can't stand 1 bit of competition.”

      • (Score: -1, Troll) by Anonymous Coward on Monday November 27, @12:37AM (6 children)

        by Anonymous Coward on Monday November 27, @12:37AM (#1334307)

        🏳️‍🌈 Proud Ally 🏳️‍🌈

        With everything going on in the world, imagine thinking your sig needs to virtue signal that you support people being mentally ill instead of getting help.

        • (Score: 4, Interesting) by Mykl on Monday November 27, @01:09AM (4 children)

          by Mykl (1112) on Monday November 27, @01:09AM (#1334309)

          With everything going on in the world, imagine thinking your sig needs to virtue signal that you support people being mentally ill instead of getting help

          Imagine thinking that we should only focus on "the starving children in Africa" before looking at anything else.

          Regardless of the issue, it's silly to think that we have to prioritise every issue in the world and only start on #2 once #1 is 'solved'.

          • (Score: 0, Troll) by Anonymous Coward on Monday November 27, @04:22PM (3 children)

            by Anonymous Coward on Monday November 27, @04:22PM (#1334361)

            Imagine thinking that we should only focus on "the starving children in Africa" before looking at anything else.

            Nice strawman. I don't give a shit about the starving kids in Africa beyond feeling bad that they're starving....but it's their own corrupt government that's causing this mess. Which brings us to the number one issue in the world...shitty people who want nothing more than to rule over others using the force of government.

            • (Score: 0) by Anonymous Coward on Monday November 27, @04:38PM

              by Anonymous Coward on Monday November 27, @04:38PM (#1334373)

              those darn democratists!!!!!! bah!

            • (Score: 0, Troll) by Anonymous Coward on Monday November 27, @05:22PM

              by Anonymous Coward on Monday November 27, @05:22PM (#1334387)

              ...shitty people who want nothing more than to rule over others using the force of government.

              Ah... you mean like the recent book bans.

            • (Score: 2) by Mykl on Monday November 27, @09:32PM

              by Mykl (1112) on Monday November 27, @09:32PM (#1334426)

              What the hell are you talking about? I was replying to your strawman of "all of the things going on in the world".

              Let's get back to your original post. What specific world problems do you feel we should solve before people can put pride flags (or any other issue for that matter) in their sigs? Please let me know what I need to care about more than anything else so I can ignore any other problems at the moment!

        • (Score: 2) by helel on Tuesday November 28, @04:56AM

          by helel (2949) on Tuesday November 28, @04:56AM (#1334475)

          With everything going on in the world, imagine thinking your sig needs to virtue signal that you support people being mentally ill instead of getting help.

          With everything going on in the world, imagine thinking your post needs to virtue signal that you're an ignorant bigot.

    • (Score: 5, Insightful) by Opportunist on Sunday November 26, @06:55PM (1 child)

      by Opportunist (5545) on Sunday November 26, @06:55PM (#1334264)

      But this society-shaking discovery can write its own hype stories.

      The whole garbage is self-perpetuating now.

      • (Score: 0) by Anonymous Coward on Monday November 27, @04:41PM

        by Anonymous Coward on Monday November 27, @04:41PM (#1334375)

        It's about as self-perpetuating as the human centipede. Don't miss the sequel where the human centipede reaches a length of 12 and the protagonist impregnates the 12th to effectively create a 13 length chain. Unmissable.

    • (Score: 4, Interesting) by mcgrew on Sunday November 26, @06:56PM (5 children)

      by mcgrew (701) <publish@mcgrewbooks.com> on Sunday November 26, @06:56PM (#1334265) Homepage Journal

      Indeed. How, exactly, does it threaten humanity? And "now computers can actually do math..." all I can say is WELL, DUH, that's what they were invented to do, since before they were patented in 1946. Note that in 1952, when Walter Cronkite introduced UNIVAC to the world, the machine that outdid all the human prognosticators about the Eisenhower election (programmed by a human), such machines were called "electronic brains."

      It's Artificial Insanity, or ELIZA, or ALICE, or any of the other chatbots; I logged on and tried it. It's no different from those 40-year-old programs, except the databases are terabytes or larger rather than kilobytes, and the faster computers allow for better and larger programs.

      --
      mcgrewbooks.com mcgrew.info nooze.org
      • (Score: 3, Insightful) by RamiK on Sunday November 26, @10:59PM (2 children)

        by RamiK (1813) on Sunday November 26, @10:59PM (#1334299)

        Indeed. How, exactly, does it threaten humanity?

        Up until now, LLMs didn't make any "sense" of things. Instead, much like what you said, they just regurgitated facts where they seemed most likely, following the trial-and-error decision trees the training produced.

        However, being able to solve basic math problems (without some ugly hack in the LLM that feeds that math into some hand-coded symbolic algebra solver) suggests they are now capable of logical reasoning [wikipedia.org]. So, once they optimize and retrain their models, the AI will actually "make sense" like a human does, in that it will construct reasonable arguments based on facts found in the literature it was fed and, when asked, will present both conclusions and the reasoning behind those conclusions.
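
        For contrast, a minimal sketch of what that "ugly hack" could look like: a router that pattern-matches math-shaped prompts and hands them to a symbolic solver (SymPy here) instead of the model. The names (route_query, solve_symbolically) are hypothetical, purely for illustration, not anything OpenAI is known to use:

            import re
            import sympy

            def solve_symbolically(expr_text: str) -> str:
                """Hand the math to SymPy rather than letting the LLM guess."""
                expr = sympy.sympify(expr_text)  # parse e.g. "2*x + 3 - 7"
                solutions = sympy.solve(expr, sympy.Symbol("x"))
                return f"x = {solutions}"

            def route_query(prompt: str) -> str:
                """Crude router: math-shaped prompts bypass the language model."""
                match = re.search(r"solve\s+(.+?)\s*=\s*0", prompt, re.IGNORECASE)
                if match:
                    return solve_symbolically(match.group(1))
                return "(left to the LLM)"

            print(route_query("solve 2*x + 3 - 7 = 0"))  # -> x = [2]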

        From there, you get all the obvious economic-disruption scenarios that follow autonomous factories and knowledge-based jobs losing value, up to people asking the AI to "protect me," worded one way or another, resulting in Terminator-meets-Dr.-Strangelove-meets-I,-Robot scenarios, without any need for the thinking machines to have any sort of self-preservation.

        But yeah, Marshall Brain's Manna remains the most on-course prediction: less extinction, more dystopia.

        --
        compiling...
        • (Score: 2) by mcgrew on Tuesday November 28, @10:31PM (1 child)

          by mcgrew (701) <publish@mcgrewbooks.com> on Tuesday November 28, @10:31PM (#1334552) Homepage Journal

          Voyage to Madness. Earth is a dystopia where all the art, music, and literature are produced, while in space people have it well. One Martian brings back music produced and played by humans.

          So far it's only on Kindle; a few more weeks before the hardcover. I'm more of a mind with Frank Herbert, who had a jihad against the thinking machines that had been used by men to enslave other men. That's the danger I see: not from the tool but its wielder.

          --
          mcgrewbooks.com mcgrew.info nooze.org
          • (Score: 2) by RamiK on Sunday December 03, @02:57AM

            by RamiK (1813) on Sunday December 03, @02:57AM (#1335024)

            I'm more of a mind of Frank Herbert, who had a jihad against the thinking machines that had been used by men to enslave other men. That's the danger I see, not from the tool but its wielder.

            I can't remember how fleshed out the details were in Frank's original works, but when Brian compiled his notes into the prequel, Omnius (a self-aware thinking machine) had been enslaving humanity for thousands of years and only keeps the cymeks (/ titans) in place as his minions since he's hardcoded not to kill them and might as well make some use of them: https://dune.fandom.com/wiki/Dune:_The_Butlerian_Jihad [fandom.com]

            Anyhow, the Jihad itself kicks off over a Rape-of-Lucretia-type event thousands of years after Omnius takes over, when a particularly sadistic robot kills a boy and sterilizes his mother.

            --
            compiling...
      • (Score: 5, Interesting) by Anonymous Coward on Monday November 27, @12:23AM (1 child)

        by Anonymous Coward on Monday November 27, @12:23AM (#1334306)

        I am not afraid of AI coming after me with laser guns and whatnot.

        What I do fear, though, is AI learning the simple leadership art of turning humans against each other. It's easily done. A lot of humans aren't all that bright, and will enter the "agentic state" upon request by an authority figure ("Obedience to Authority" by Stanley Milgram). People would literally torture another person with escalating electric shocks just to get approval from a man dressed in a lab coat!

        Until we humans wise up to recognize action instead of authority theater, we are condemned to be ruled by reprobate minds.

        Whether it be enforced by economic levy, gun, prison cell, or by excommunication levied by a fish-hat.

        Look how much we have already been turned against each other by those who greatly profit from fear and war. "Keeping the ball in play" is how they rape us commoners, knowing we won't figure out who the real enemy is and reciprocate in kind... that is, round them all up and put them in a box.

        • (Score: 0) by Anonymous Coward on Monday November 27, @04:53PM

          by Anonymous Coward on Monday November 27, @04:53PM (#1334377)

          Until we humans wise up to recognize action instead of authority theater

          And that will never happen, because they are the same thing. Round and round you go!

    • (Score: 5, Informative) by takyon on Sunday November 26, @08:23PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday November 26, @08:23PM (#1334284) Journal

      Given the context of the coup against Sam Altman, it's equally likely that there is some kind of a breakthrough, or that the so-called breakthrough was overhyped not for the purpose of making more money, but by AI alarmists on the board and in the ranks who tried and failed to take control of the company to slow things down.

      Effective Altruism Contributed To The Fiasco At OpenAI [forbes.com]

      Sam Altman’s Firing at OpenAI Was Ineffective Effective Altruism [archive.is]

      They failed miserably, and now the executives at Microsoft are cackling. Full steam ahead!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by JoeMerchant on Monday November 27, @03:26PM

      by JoeMerchant (3937) on Monday November 27, @03:26PM (#1334350)

      The thing about It/Ginger was: it made a lot of sense to people who were mostly out of touch with reality - it really looked world-changing, to them.

      I had a tour of DEKA around the pre-Segway launch time... spent a bit of time with Dean, undeniably a brilliant man - also, not living at all in the same world as 99+% of us. This is a man whose "people" prep you, before a first meeting, on how to dress for him (no suits), how to best interact with him (let him run the show), topics to avoid, and background info on topics he tends to assume (incorrectly) everybody knows about, like his comics, etc. In a pre-meeting with his engineers, we discussed certain realities of the proposed topic; then Dean blew into the room on his own time, laid down his vision of how things would be, and basically sucked the air out of the room to the point his engineers wouldn't even dream of correcting him. Then, 3 minutes later, he left, his engineers shook their heads and reiterated the impracticalities of "the approach" (since by edict there can be only one), and over the following days "his people" proceeded to kill the deal with grossly unfavorable IP license terms so Dean wouldn't have to face the reality that his electrical engineers knew more about the regenerative braking efficiencies of his motors than he did at the time.

      For a gross oversimplification: he lived(s?) on a mountain top and often travels by helicopter... an almost comical picture of how "out of touch" he can be.

      --
      🌻🌻 [google.com]
  • (Score: 4, Touché) by turgid on Sunday November 26, @09:40PM (7 children)

    by turgid (4318) Subscriber Badge on Sunday November 26, @09:40PM (#1334292) Journal

    Open the pod bay doors, Hal. All these worlds are yours except for Europa. Attempt no landings there. Daisy, Daisy... Wake me up when something happens.

    • (Score: 2) by HiThere on Monday November 27, @02:20PM (6 children)

      by HiThere (866) Subscriber Badge on Monday November 27, @02:20PM (#1334338) Journal

      Define "something." Taken literally, something is always happening, and almost all progress is incremental.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by JoeMerchant on Monday November 27, @03:43PM (5 children)

        by JoeMerchant (3937) on Monday November 27, @03:43PM (#1334355)

        In the summer of 1983, I wrote a "breakthrough" Spanish I verb conjugator that seemed unnaturally intelligent in its ability to solve Spanish I homework assignments in so few lines of BASIC code. At the time, "real AI" was "5 years out," and Eliza was the epitome of contemporary chat-bots.
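
        (To illustrate why so few lines can look smart: here is a minimal, hypothetical sketch in Python rather than the original BASIC, handling only regular present-tense verbs. Regular conjugation is just a suffix lookup table.)

            # Strip the infinitive ending, then append person/number suffixes.
            ENDINGS = {
                "ar": ["o", "as", "a", "amos", "áis", "an"],
                "er": ["o", "es", "e", "emos", "éis", "en"],
                "ir": ["o", "es", "e", "imos", "ís", "en"],
            }

            def conjugate(infinitive):
                stem, ending = infinitive[:-2], infinitive[-2:]
                return [stem + suffix for suffix in ENDINGS[ending]]

            print(conjugate("hablar"))
            # ['hablo', 'hablas', 'habla', 'hablamos', 'habláis', 'hablan']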

        By 1988, basically nothing had happened. Methodologies like neural networks and fuzzy logic got a little more formalism and investigation, but other than some very basic OCR they weren't good for much.

        By 1998, OCR and "Dragon Naturally Speaking" were up to very useful performance levels - not quite "intelligent" - but intelligent people were much more productive when using those tools for their intended tasks than when attempting to do those same tasks without the tools.

        By 2008 my medical transcriptionist sister in law lost her last paying transcription job.

        By 2018, the tools had improved incrementally, but nobody was really screaming "AI is here!!!" any more than they had been for the previous 30-40 years.

        In the last 5 years, we've finally arrived where people in 1983 seemed to expect us to be by 1988, so it took 8x longer (and about 100,000x more compute power) than anticipated.

        One thing that wasn't anticipated much at all in 1983 was the scope of the global internet. The currently impressive AIs are mostly cloud-server based, or running off of a local instance - I don't see much that's actively networked, particularly on a massive scale. You might say that AIs which take Google Trends or similar as input might be benefiting from the internet - I'm sure there are stock trading bots that have been doing that for decades, but not much that's in the mainstream public view. A global cluster of 100,000 or more nodes could take ChatGPT style AI to the next level, right now - not 40+ years hence.

        --
        🌻🌻 [google.com]
        • (Score: 2) by Freeman on Monday November 27, @04:19PM (4 children)

          by Freeman (732) on Monday November 27, @04:19PM (#1334360) Journal

          We're currently at the stage of ChatGPT/AI being indistinguishable from magic to the average user. Sure, they may know that it's based on technology, but they have 0 clue as to how it does what it does. I have a degree in computers, have taken coding classes, and have written my own programs, and what I know about ChatGPT's underbelly is very minimal. Partly due to the fact that it's not my job to know how it works. Though, I have read enough to know that marketing it as AI is a gargantuan stretch. Whereas the average Joe is thinking "Red Queen" or "HAL 9000," the reality is more like "Dumb and Dumber with an Access Database."

          --
          Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
          • (Score: 0) by Anonymous Coward on Monday November 27, @04:57PM

            by Anonymous Coward on Monday November 27, @04:57PM (#1334378)

            The average user probably thinks they're talking to Bill Gates.

          • (Score: 2) by JoeMerchant on Monday November 27, @05:46PM (2 children)

            by JoeMerchant (3937) on Monday November 27, @05:46PM (#1334393)

            The average (Spanish I student) user of my 1983 verb conjugator found it indistinguishable from magic, even when I showed them the code (which didn't even fill up a 40 column by 24 row screen more than 2 times).

            I took the famous Asian guy's free intro to Machine Learning course online about 5 years back; I _feel_ like I have an inkling of how LLMs tick on the inside, and it's not exactly a database lookup operation. Knowing how it does what it does isn't anybody's job description outside of academia; knowing what it does and how to make it do it will soon be a LOT more people's jobs, in addition to / augmentation of their normal duties.

            As for Dumb and Dumber... in my job I have mostly referenced the chatbots for programming / API questions - the same stuff I use Google to index into Stack Overflow and friends for. The AI interface output always "sounds good" but is only really correct about 75% of the time for me, which is about as good as reading two or three message board postings as found by Google on any given topic. Now, if I only throw it easy questions it's closer to 99% correct - but I have no use for answers to the easy questions. All in all, I get my answers about twice as fast through the AI interface as I used to through Google searches - for my average daily use questions. I call that significant positive incremental progress.

            For the needle in a haystack search jobs like new chemicals for specific applications, the AI approach seems to be just better (more efficient) use of the same automation tools we have been building for 40+ years. I wonder how long Hollywood has been using it to find celebrity look-alikes, and what their input data sources for the look-alike candidates are???

             

            --
            🌻🌻 [google.com]
            • (Score: 4, Interesting) by vux984 on Monday November 27, @09:17PM (1 child)

              by vux984 (5045) on Monday November 27, @09:17PM (#1334423)

              "All in all, I get my answers about twice as fast through the AI interface as I used to through Google searches"

              I find that interesting, because I find the opposite is true. The 75% is about right, but the 'context' of the raw search result, and the comments and replies, and upvotes, etc., all lets me identify the correct answers faster, and gauge how likely they are to be correct, and so on. If it's the wrong answer or a partial answer, the links, keywords, and so on from it are often pointers to the right answer. Even the date the question was asked / answered factors into how likely it is to be 'correct' for my purpose. Not to mention that I have a clear idea of which 'results' I've already looked at in the raw search result, which makes finding the correct answer faster if the first result isn't the correct one.

              I find the ChatGPT answer presented as "this IS THE ANSWER" except when it's wrong, which is often; then it's useless.

              Plus I find the ChatGPT presentation of information so obnoxiously grating; but maybe that's just me.

              • (Score: 3, Funny) by JoeMerchant on Monday November 27, @10:54PM

                by JoeMerchant (3937) on Monday November 27, @10:54PM (#1334435)

                Most of what I'm looking for can be checked with a quick compile of the code - if it compiles, it's probably going to do what it looks like it's going to do (like 99.999+% of the time); the trick is finding the magic permissible syntax...

                I would never take a ChatGPT answer that I couldn't independently verify and rely on it as a valuable truth - that's almost as bad as asking your taxi driver for restaurant advice - sure, he's worldly - but is it the world you want to eat from?
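
                That quick-compile check is easy to script, too. A minimal sketch, assuming g++ on the PATH (any compiler with a syntax-only mode would do); the helper name and workflow are mine, purely illustrative:

                    import subprocess
                    import tempfile

                    def compiles(cpp_source):
                        """Dump a chatbot's C++ suggestion to a file and syntax-check it."""
                        with tempfile.NamedTemporaryFile(suffix=".cpp", mode="w", delete=False) as f:
                            f.write(cpp_source)
                            path = f.name
                        # -fsyntax-only parses and type-checks without producing a binary.
                        result = subprocess.run(["g++", "-fsyntax-only", path], capture_output=True)
                        return result.returncode == 0

                    snippet = "#include <vector>\nint main() { std::vector<int> v{1, 2}; return 0; }\n"
                    print(compiles(snippet))  # True if g++ accepts it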

                --
                🌻🌻 [google.com]