posted by hubie on Friday June 09 2023, @03:03PM

Interesting article relating to Google/OpenAI vs. Open Source for LLMs

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI:

The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We are simply a vessel to share this document, which raises some very interesting points.

We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

I'm talking, of course, about open source. Plainly put, they are lapping us. Things we consider "major open problems" are solved and in people's hands today. Just to name a few:

  • LLMs on a Phone: People are running foundation models on a Pixel 6 at 5 tokens/sec.

  • Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.

  • Responsible Release: This one isn't "solved" so much as "obviated". There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.

  • Multimodality: The current multimodal ScienceQA SOTA was trained in an hour.

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:

  • We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.

  • People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

  • Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the 20B parameter regime.

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta's LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
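
To make "one person, an evening, and a beefy laptop" concrete, here is a minimal sketch (not part of the leaked document) of the kind of low-budget adaptation it alludes to: loading an open 13B-class checkpoint with 4-bit quantization and attaching LoRA adapters so that only a tiny fraction of the weights is trained. The checkpoint name and hyperparameters are illustrative assumptions, not anything the memo specifies.

    # Minimal sketch: 4-bit quantization + LoRA via Hugging Face transformers/peft.
    # The model id and hyperparameters are placeholders, not from the memo.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    model_id = "openlm-research/open_llama_13b"  # assumed open checkpoint

    # Quantize the frozen base model to 4 bits so it fits in consumer VRAM.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Train only small low-rank adapter matrices on top of the frozen weights.
    lora_config = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all params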

There is lots more in the article. It would be interesting to hear from knowledgeable experts what the primary disagreements with these points are, and whether you agree or disagree.


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by looorg on Friday June 09 2023, @04:44PM (3 children)

    by looorg (578) on Friday June 09 2023, @04:44PM (#1310710)

    So according to this opinion piece Meta is winning the AI race, which is scary for different reasons than the ones Google thinks are the reasons. Or perhaps we just have different perspectives on what is scary. But that Meta/Facebook, aka Zuckerberg and all his "dumb fucks" userbase, should be winning can't be good for anyone. That, if anything, is the damn nightmare fuel. Meta is winning the race due to leaks and "open source", i.e. Meta is getting free work done by the open source community and enthusiasts that it can then just implement into its core product. All the while Google is holding on to a failing product that isn't performing as well as they think it is.

    Beyond that, the most interesting aspect is that perhaps it's not the LLM (Large Language Model) that is actually interesting but the limited language models. You clearly don't need all those words to have conversations. Most humans have fairly limited vocabularies by LLM standards, and use even less of that in daily conversation. By limiting scope and creating specialized models, instead of monolithic models that can do everything (or so they think), you can seriously cut down on the computation and the learning phases (see the rough sketch at the end of this comment).

    Also, CURATION is the key, instead of just having big BIG datasets of unstructured crap. Which shouldn't come as a surprise to anyone. It's the ye olde data proverb of SHIT IN, SHIT OUT.
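
    A back-of-the-envelope illustration of the vocabulary point (all numbers are illustrative assumptions, not taken from the comment or the article): shrinking the token vocabulary directly shrinks the embedding tables, though embeddings are only one slice of a model's total cost.

        # Illustrative only: embedding-table size vs. vocabulary size,
        # with a hidden dimension typical of 13B-class models.
        hidden_dim = 5120
        for vocab_size in (256_000, 50_000, 8_000):
            embed_params = vocab_size * hidden_dim  # one embedding row per token
            print(f"vocab {vocab_size:>7,}: {embed_params / 1e6:8.1f}M embedding params")
        # prints ~1310.7M params at a 256k vocab, ~41.0M at an 8k vocab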

    • (Score: 2) by krishnoid on Friday June 09 2023, @05:17PM

      by krishnoid (1156) on Friday June 09 2023, @05:17PM (#1310714)

      Seems like they might win it because they have so much precategorized personal content (by person, by group, and as a whole) that they can train multiple distinct models quickly.

    • (Score: 0) by Anonymous Coward on Friday June 09 2023, @06:11PM

      by Anonymous Coward on Friday June 09 2023, @06:11PM (#1310720)

      I don't think the "large" in LLM comes from a big vocabulary, but large as in the amount of shit they shovel into it. Or perhaps that is the official definition, but I stick to mine.

    • (Score: 4, Interesting) by JoeMerchant on Friday June 09 2023, @08:13PM

      by JoeMerchant (3937) on Friday June 09 2023, @08:13PM (#1310742)

      >all his "dumb fucks"-userbase should be winning can't be good for anyone. That if anything is the damn nightmare fuel. Meta is winning the race due to leaks

      From my perspective, Machine Learning has been a very open game for over a decade now. Anyone who wants to play can get in for a cost of entry well below $10K and a couple hundred person-hours invested in learning the tools and then going forward on whatever front is of interest _to you_. Personally, I work in a company that makes lots of money on old-school products that would look like an elephant wearing swim floaties if you tacked AI features onto them. The elephants can already swim, and they probably swim better without them in most situations.

      As for "leaks" - well... welcome to Open Source Nirvana. Secret sauce isn't all that valuable when everybody has access to all the ingredients, manufacturing, packaging and distribution equipment. You might try to market your "secret formula" through superior retail presence and slick marketing, but... truth is, some kid in Eastern Europe or even Equatorial Africa can likely do whatever you are doing, and a little better for the specific use cases he's tuning it for, and he is almost as capable of selling to your customers as you are, certainly capable of sharing with them for free.

      >CURATION is the key

      Yep. Identify your customers and build for them, specifically. What? Your current business models are built around one solution fits all? Too bad, so sad, welcome to the next revolution.

      --
      🌻🌻 [google.com]
  • (Score: 4, Interesting) by krishnoid on Friday June 09 2023, @05:17PM

    by krishnoid (1156) on Friday June 09 2023, @05:17PM (#1310715)

    Who will cross the next milestone? What will the next move be?

    I mentally started with an idea of having domain- (or organization-) specific or "boutique-trained" AI and came up with the analogy of web server technology. Apache2, et al. are ubiquitous and free, but that hasn't stemmed the rise of the Internet and the means -- including Google's -- of making money with it.

    While you're considering that, why not look for great hardware to build your specialized AI from a Dell/nVidia combination! This message totally brought to you by a real human and not ADWordsAI (beta).

  • (Score: 4, Insightful) by Snospar on Friday June 09 2023, @06:48PM (6 children)

    by Snospar (5366) Subscriber Badge on Friday June 09 2023, @06:48PM (#1310724)

    Interesting that this kind of thing is "leaked" after the big tech companies start talking about "guardrails" and reining in the march of LLM systems (not going to call them AI; they aren't intelligent, they just produce pretty patterns that we as pattern junkies attach meaning to). Are they going to use this sort of argument to try to put a halt to "Open Source" development in this space? Obviously things would be much "safer" if they were controlled centrally by some large corporations that only have the best interests of humanity at their core (you know who).

    On balance I think it's very good to see these models being used off-net and away from our corporate overlords. Like a hammer, they could be used for good or bad, but at the moment it appears the creatives are mainly using them to create artificial "art" rather than the next biological weapon.

    --
    Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
    • (Score: 0) by Anonymous Coward on Friday June 09 2023, @07:46PM (2 children)

      by Anonymous Coward on Friday June 09 2023, @07:46PM (#1310734)

      I consider poetry a weapon.

      • (Score: 3, Funny) by kazzie on Friday June 09 2023, @09:43PM (1 child)

        by kazzie (5309) Subscriber Badge on Friday June 09 2023, @09:43PM (#1310752)

        What type do you use: Azgoth or Vogon?

        • (Score: 2) by GeminiDomino on Monday June 12 2023, @02:34PM

          by GeminiDomino (661) on Monday June 12 2023, @02:34PM (#1311109)

          Luxan

          --
          "We've been attacked by the intelligent, educated segment of our culture"
    • (Score: 3, Insightful) by looorg on Friday June 09 2023, @07:50PM (1 child)

      by looorg (578) on Friday June 09 2023, @07:50PM (#1310735)

      > ... it appears that the creatives are using them to mainly create artificial "art" rather than the next biological weapon.

      The people who would use AI to create the next biological weapon are probably not keen to post about it in public, so they could be doing it already. After all, if people are already running models to create new medicines and cures for various things, I'm fairly sure some other people are sitting around wondering "how could we make this bacterium or virus more lethal, harder to detect, and faster to spread ...".

      • (Score: 2, Interesting) by lars_stefan_axelsson on Saturday June 10 2023, @09:39AM

        by lars_stefan_axelsson (3590) on Saturday June 10 2023, @09:39AM (#1310792)

        Don't be so sure. It's not biology, but it happened in biochemistry only last year. Researchers wondered how effective their drug-generation ML models would be if they just asked them to generate the most toxic compounds they could come up with, instead of minimizing toxicity.

        In six hours it rediscovered one of the most toxic nerve agents known to man (VX) and, more alarmingly, a large number of candidates that it predicted to be even deadlier than that.

        As Less Wrong puts it: "it just works to flip the sign of the utility function and turn a 'friend' into an 'enemy'".

        There are many sources, but the article itself is paywalled. Here's the Less Wrong post: https://www.lesswrong.com/posts/YQhBhxFhChGExS5HE/dual-use-of-artificial-intelligence-powered-drug-discovery [lesswrong.com]
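
        As a toy sketch of that sign flip (everything below is a hypothetical stand-in; no real toxicity predictor or molecule generator is involved), the same generate-and-score loop that searches for safe candidates becomes a search for dangerous ones by negating a single objective:

          # Toy generate-and-score loop. Negating the objective turns a
          # toxicity-minimizing search into a toxicity-maximizing one.
          import random

          def predicted_toxicity(x: float) -> float:
              """Stand-in for an ML toxicity predictor (higher = more toxic)."""
              return x * x

          def search(objective, steps=10_000):
              best, best_score = None, float("inf")
              for _ in range(steps):
                  candidate = random.uniform(-10, 10)  # stand-in for a generator
                  score = objective(candidate)
                  if score < best_score:
                      best, best_score = candidate, score
              return best

          safest = search(predicted_toxicity)                   # minimize toxicity
          deadliest = search(lambda x: -predicted_toxicity(x))  # one sign flip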

        --
        Stefan Axelsson
    • (Score: 2) by JoeMerchant on Friday June 09 2023, @08:26PM

      by JoeMerchant (3937) on Friday June 09 2023, @08:26PM (#1310743)

      Funny you mention bio-weapons; I was just thinking how genetic engineering and all the related tech are similarly "out there" for basically anyone willing to set up a (not so expensive) lab to pursue.

      I haven't heard any "bio-weapon cooked up in a terrorist cave" doomsday scenarios lately... I wonder what the topic of the next "letting people have access to technology will doom us all" news cycle will be?

      --
      🌻🌻 [google.com]
  • (Score: 5, Insightful) by Rosco P. Coltrane on Friday June 09 2023, @07:46PM (2 children)

    by Rosco P. Coltrane (4757) on Friday June 09 2023, @07:46PM (#1310733)

    The last thing we need is tools as powerful as AI controlled by gigantic nefarious big tech monopolies hell-bent on siphoning off all your private data.

    Open-source to the rescue, as it's been for decades. Thank goodness.

    • (Score: 0) by Anonymous Coward on Saturday June 10 2023, @03:14AM (1 child)

      by Anonymous Coward on Saturday June 10 2023, @03:14AM (#1310769)

      Tell that to some of the idiots in here:

      https://soylentnews.org/article.pl?sid=23/05/14/1630243 [soylentnews.org]

      Microsoft, OpenAI, Alphabet, Meta... they will make sure legislators allow them to continue to operate, but punish the little guy using open source and consumer products.

      • (Score: 1, Insightful) by Anonymous Coward on Saturday June 10 2023, @03:36PM

        by Anonymous Coward on Saturday June 10 2023, @03:36PM (#1310853)

        That is one of the reasons people think the tech bros, especially the OpenAI CEO, have been sounding the alarm about how dangerous this stuff can be. They are pushing very powerful PR hype about how it will solve all of your business problems, while also implying "only those of us who can 'control it' should be trusted to use it."

  • (Score: 2) by AlwaysNever on Saturday June 10 2023, @12:20PM

    by AlwaysNever (5817) on Saturday June 10 2023, @12:20PM (#1310824)

    AI is a code word for Fake Human. I wholeheartedly welcome this new AI epoch, for it will bring back the value of real-space, human-to-human communication.

    In an AI world, anything but real-space, human-to-human communication is just plain fiction, a tale, an endless virtual domain of nothing of importance.

  • (Score: 2) by Joe Desertrat on Wednesday June 14 2023, @12:39AM

    by Joe Desertrat (2454) on Wednesday June 14 2023, @12:39AM (#1311340)

    While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:

    Typically, large corporations respond to these sorts of threats by lobbying for laws to protect their business model, while simultaneously working hard to embrace, extend, and extinguish the threats with "support", mergers, and takeovers.
