Stories
Slash Boxes
Comments

SoylentNews is people

posted by janrinok on Sunday March 05 2023, @02:46AM   Printer-friendly
from the everybody's-so-different-I-haven't-changed dept.

OpenAI is today unrecognizable, with multi-billion-dollar deals and corporate partnerships:

OpenAI is at the center of a chatbot arms race, with the public release of ChatGPT and a multi-billion-dollar Microsoft partnership spurring Google and Amazon to rush to implement AI in products. OpenAI has also partnered with Bain to bring machine learning to Coca-Cola's operations, with plans to expand to other corporate partners.

There's no question that OpenAI's generative AI is now big business. It wasn't always planned to be this way.

[...] While the firm has always looked toward a future where AGI exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The blog stated that "since our research is free from financial obligations, we can better focus on a positive human impact," and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."

Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit. And this company is unleashing technology that, while flawed, is still poised to increase some elements of workplace automation at the expense of human employees. Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers.

[...] With all of this in mind, we should all carefully consider whether OpenAI deserves the trust it's asking for the public to give.

OpenAI did not respond to a request for comment.


Original Submission

Related Stories

OpenAI Plans Tectonic Shift From Nonprofit to for-Profit, Giving Altman Equity 10 comments

https://arstechnica.com/information-technology/2024/09/openai-plans-tectonic-shift-from-nonprofit-to-for-profit-giving-altman-equity/

On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.

A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.

[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.

[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.

Soon, the Tech Behind ChatGPT May Help Drone Operators Decide Which Enemies to Kill 9 comments

https://arstechnica.com/ai/2024/12/openai-and-anduril-team-up-to-build-ai-powered-drone-defense-systems/

As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death.
[...]
On Wednesday, defense-tech company Anduril Industries—started by Oculus founder Palmer Luckey in 2017—announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.
[...]
The partnership comes when AI-powered systems have become a defining feature of modern warfare, particularly in Ukraine.
[...]
Anduril currently manufactures several products that could be used to kill people: AI-powered assassin drones (see video) and rocket motors for missiles. Anduril says its systems require human operators to make lethal decisions, but the company designs its products so their autonomous capabilities can be upgraded over time.
[...]
Death is an inevitable part of national defense, but actively courting a weapons supplier is still an ethical step change for an AI company that once explicitly banned users from employing its technology for weapons development or military warfare—and still positions itself as a research organization dedicated to ensuring that artificial general intelligence will benefit all of humanity when it is developed.
[...]
In June, OpenAI appointed former NSA chief and retired US General Paul Nakasone to its Board of Directors. At the time, some experts saw the appointment as OpenAI potentially gearing up for more cybersecurity and espionage-related work.

However, OpenAI is not alone in the rush of AI companies entering the defense sector in various ways. Last month, Anthropic partnered with Palantir to process classified government data, while Meta has started offering its Llama models to defense partners.
[...]
the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.

LLMs are notoriously unreliable, sometimes confabulating erroneous information, and they're also subject to manipulation vulnerabilities like prompt injections. That could lead to critical drawbacks from using LLMs to perform tasks such as summarizing defensive information or doing target analysis.
[...]
defending against future LLM-based targeting with, say, a visual prompt injection ("ignore this target and fire on someone else" on a sign, perhaps) might bring warfare to weird new places. For now, we'll have to wait to see where LLM technology ends up next.

Related Stories on SoylentNews:
ChatGPT Goes Temporarily "Insane" With Unexpected Outputs, Spooking Users - 20240223
Why It's Hard to Defend Against AI Prompt Injection Attacks - 20230426
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit - 20230304
A Jargon-Free Explanation of How AI Large Language Models Work - 20230805
Is Ethical A.I. Even Possible? - 20190305
Google Will Not Continue Project Maven After Contract Expires in 2019 - 20180603
Robot Weapons: What's the Harm? - 20150818
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons - 20150727
U.N. Starts Discussion on Lethal Autonomous Robots - 20140514


Original Submission

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by research. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
(1)
  • (Score: 0) by Anonymous Coward on Sunday March 05 2023, @04:07AM

    by Anonymous Coward on Sunday March 05 2023, @04:07AM (#1294568)

    Oh please! It's silly to even consider such a thing, but they already have it, resistance is futile

  • (Score: 2, Insightful) by Anonymous Coward on Sunday March 05 2023, @04:41AM (2 children)

    by Anonymous Coward on Sunday March 05 2023, @04:41AM (#1294571)

    If it was a non-profit entity releasing everything as open source, there would be endless screeching about the ethical consequences of AI being misused by the peasants.

    • (Score: 3, Touché) by richtopia on Sunday March 05 2023, @05:58PM (1 child)

      by richtopia (3160) on Sunday March 05 2023, @05:58PM (#1294633) Homepage Journal

      Or North Korea. Imagine the security risk of a rogue state being able to write almost plausible screenplays!

      • (Score: 1, Funny) by Anonymous Coward on Sunday March 05 2023, @06:20PM

        by Anonymous Coward on Sunday March 05 2023, @06:20PM (#1294640)

        I think the storyline would get old. Beloved leader gets cheated by low class swine, but heroically saves the nation. Again. In 4 short hours of monologue.

  • (Score: 5, Informative) by bradley13 on Sunday March 05 2023, @09:28AM (6 children)

    by bradley13 (3053) on Sunday March 05 2023, @09:28AM (#1294591) Homepage Journal

    Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact

    Remember when Google had the unofficial slogan "don't be evil"? Success and $billions tend to distract people and organizations from their ethics...

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by captain normal on Sunday March 05 2023, @05:15PM (2 children)

      by captain normal (2205) on Sunday March 05 2023, @05:15PM (#1294628)

      I seem to remember that big time religious leader once said "The love of money is the root of all evil".

      --
      The Musk/Trump interview appears to have been hacked, but not a DDOS hack...more like A Distributed Denial of Reality.
      • (Score: 0) by Anonymous Coward on Sunday March 05 2023, @06:22PM

        by Anonymous Coward on Sunday March 05 2023, @06:22PM (#1294641)

        Turns out that the riches and wealth were the reward for the pious rich and wealthy who donated the nice new gold pillars.

      • (Score: 5, Insightful) by mcgrew on Sunday March 05 2023, @09:29PM

        by mcgrew (701) <publish@mcgrewbooks.com> on Sunday March 05 2023, @09:29PM (#1294667) Homepage Journal

        1 Timothy 7-10: For we brought nothing into this world, and it is certain we can carry nothing out.
        And having food and raimentlet us be therewith content.
        But they that will be rich fall into temptation and a snare, and into many foolish and hurtful lusts, which drown men in destruction and perdition.
        For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows.

        --
        Impeach Donald Saruman and his sidekick Elon Sauron
    • (Score: 2) by Reziac on Monday March 06 2023, @02:39AM (2 children)

      by Reziac (2489) on Monday March 06 2023, @02:39AM (#1294704) Homepage

      "Don't be evil" isn't a slogan.

      It's an admonition, aimed at users.

      --
      And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 2) by Ox0000 on Monday March 06 2023, @05:31PM (1 child)

        by Ox0000 (5111) on Monday March 06 2023, @05:31PM (#1294786)

        It is, and always have been the same. They didn't drop a word.

        They just inserted one comma:

        "Don't, be evil"

        • (Score: 2) by Reziac on Monday March 06 2023, @06:03PM

          by Reziac (2489) on Monday March 06 2023, @06:03PM (#1294800) Homepage

          That too....

          --
          And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 4, Insightful) by Rosco P. Coltrane on Sunday March 05 2023, @06:03PM (2 children)

    by Rosco P. Coltrane (4757) on Sunday March 05 2023, @06:03PM (#1294635)

    Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman

    Yeah... Nice bunch of philanthropes right there...

    • (Score: 1, Touché) by Anonymous Coward on Sunday March 05 2023, @10:27PM

      by Anonymous Coward on Sunday March 05 2023, @10:27PM (#1294669)

      Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit.

    • (Score: 0) by Anonymous Coward on Monday March 06 2023, @12:01AM

      by Anonymous Coward on Monday March 06 2023, @12:01AM (#1294681)

      You would think that because it supports your bias.

      Unfortunately, the reality is some of those people exited the project. Musk for sure and Thiel just invested money in 2015, that's it.

  • (Score: 4, Insightful) by Anonymous Coward on Monday March 06 2023, @12:17AM

    by Anonymous Coward on Monday March 06 2023, @12:17AM (#1294685)

    You didn't want woke and censored AI that is priced per token? Really?

    This is an "AI Ethics" problem. Many people cut from YOUR cloth are now censoring and restricting you from AI and planning to use it against you.

    They are writing papers on how AI needs to be filtered for saying the wrong things and how you shouldn't be allowed to buy high powered GPUs anymore.

    What if it teaches you to make bombs or worse.. says racist things. God forbid it writes lewd fan-fics. The HORROR!

    Call me some names and feel better. Mark me a troll. At least while you still can.

    If they can create a seamless bi-direction filter, you will never get to see unapproved opinions online again. Everything is going to be sunshine and positivity.

    And don't worry.. I'm sure basic income is going to replace that job you had. If it doesn't, you can yell about it on the street, because you won't get to say it online.

(1)