
posted by janrinok on Friday February 10, @01:56PM

An AI 'Engineer' Has Now Designed 100 Chips:

[...] AI firm Synopsys has announced that its DSO.ai tool has successfully aided in the design of 100 chips, and it expects that upward trend to continue.

Companies like STMicroelectronics and SK Hynix have turned to Synopsys to accelerate semiconductor designs in an increasingly competitive environment. The past few years have seen demand for new chips increase while materials and costs have rocketed upward. Therefore, companies are looking for ways to get more done with less, and that's what tools like DSO.ai are all about.

The tool can search design spaces, telling its human masters how best to arrange components to optimize power, performance, and area, or PPA as it's often called. Among those 100 AI-assisted chip designs, companies have seen up to a 25% drop in power requirements and a 3x productivity increase for engineers. SK Hynix says a recent DSO.ai project resulted in a 15% cell area reduction and a 5% die shrink.
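
To make "searching design spaces" concrete, here is a minimal sketch, assuming nothing about Synopsys's actual method: a toy random search over floorplan parameters scored by an invented PPA-style objective. Every name and weight below is hypothetical; the real tool explores vastly larger spaces with far more sophisticated search.

    import random

    def ppa_score(aspect_ratio: float, utilization: float, slack_ns: float) -> float:
        """Invented objective, lower is better: penalize extreme aspect
        ratios and wasted area, and reward timing slack (performance)."""
        area_penalty = abs(aspect_ratio - 1.0) + (1.0 - utilization)
        perf_penalty = 1.0 / max(slack_ns, 1e-3)
        return area_penalty + 0.1 * perf_penalty

    best = None
    for _ in range(10_000):                      # sample candidate floorplans
        candidate = (random.uniform(0.5, 2.0),   # aspect ratio
                     random.uniform(0.5, 0.95),  # cell utilization
                     random.uniform(0.01, 0.5))  # timing slack in ns
        score = ppa_score(*candidate)
        if best is None or score < best[0]:
            best = (score, candidate)

    print("best score %.3f at" % best[0], best[1])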

[...] With all the AI innovations of late, it is starting to feel like a sea change in how we create things. OpenAI's ChatGPT, now embedded in Microsoft's products, can write stories, create computer code, and answer search queries in natural language. Meanwhile, OpenAI's Dall-e can win art competitions with AI-generated art. AI also plays a larger role in gaming, with many titles supporting AI upsampling technologies like DLSS.


Original Submission

Related Stories

Robots Let ChatGPT Touch the Real World Thanks to Microsoft

https://arstechnica.com/information-technology/2023/02/robots-let-chatgpt-touch-the-real-world-thanks-to-microsoft/

Last week, Microsoft researchers announced an experimental framework to control robots and drones using the language abilities of ChatGPT, a popular AI language model created by OpenAI. Using natural language commands, ChatGPT can write special code that controls robot movements. A human then views the results and adjusts as necessary until the task gets completed successfully.

The research arrived in a paper titled "ChatGPT for Robotics: Design Principles and Model Abilities," authored by Sai Vemprala, Rogerio Bonatti, Arthur Bucker, and Ashish Kapoor of the Microsoft Autonomous Systems and Robotics Group.

In a demonstration video, Microsoft shows robots—apparently controlled by code written by ChatGPT while following human instructions—using a robot arm to arrange blocks into a Microsoft logo, flying a drone to inspect the contents of a shelf, or finding objects using a robot with vision capabilities.

To get ChatGPT to interface with robotics, the researchers taught ChatGPT a custom robotics API. When given instructions like "pick up the ball," ChatGPT can generate robotics control code just as it would write a poem or complete an essay. After a human inspects and edits the code for accuracy and safety, the human operator can execute the task and evaluate its performance.
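
As a rough illustration of that loop (the API and function names below are invented for the sketch, not Microsoft's actual interface): document a small robot API in the prompt, let the model generate code against it, and gate execution behind human review.

    API_DOC = """Available functions:
      move_to(x, y, z)  -- move the gripper to coordinates in meters
      grab()            -- close the gripper
      release()         -- open the gripper
    """

    def ask_llm(prompt: str) -> str:
        """Stand-in for a language-model call; returns generated robot code."""
        return "move_to(0.10, 0.20, 0.05)\ngrab()\nmove_to(0.10, 0.20, 0.30)"

    task = "pick up the ball"
    code = ask_llm(API_DOC + "\nWrite code for this task: " + task)
    print("Generated code:\n" + code)

    # The step the paper emphasizes: a human inspects the code for accuracy
    # and safety before anything runs on real hardware.
    if input("Execute on the robot? [y/N] ").strip().lower() == "y":
        pass  # hand the vetted code to the robot runtime here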

In this way, ChatGPT accelerates robotic control programming, but it's not an autonomous system. "We emphasize that the use of ChatGPT for robotics is not a fully automated process," reads the paper, "but rather acts as a tool to augment human capacity."

  • (Score: 3, Touché) by HiThere on Friday February 10, @02:14PM (23 children)

    by HiThere (866) on Friday February 10, @02:14PM (#1291087) Journal

    I believe the summary and the quoted statement implicitly. But the same claim could be made for a CAD program.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 4, Insightful) by stormreaver on Friday February 10, @02:53PM (18 children)

      by stormreaver (5101) on Friday February 10, @02:53PM (#1291092)

      But the same claim could be made for a CAD program.

      Quite right.

      What happened: A tool aided human chip designers by collating existing knowledge into a set of helpful possibilities. The chip designers then took what was useful and discarded what was not.
      What did NOT happen: "Software, I need a new chip design which maximizes power and efficiency in as small a package as possible. GO!" And then the software designed a chip.

      These "AI" fools need to put down the Crack pipe and stop with the sensationalism.

      • (Score: 2) by Freeman on Friday February 10, @03:05PM

        by Freeman (732) Subscriber Badge on Friday February 10, @03:05PM (#1291097) Journal

        Even if the latter were to be a thing, I posit that it would be pulling on data it had been trained on. It wouldn't be able to create entirely new concepts. Maybe novel applications that someone just hadn't thought of putting together, but the AI would still just be using data it had been trained on.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Interesting) by crafoo on Friday February 10, @04:01PM

        by crafoo (6639) on Friday February 10, @04:01PM (#1291105)

        You say it like it's not amazing. But it is.

      • (Score: 5, Interesting) by SunTzuWarmaster on Friday February 10, @04:13PM (15 children)

        by SunTzuWarmaster (3971) on Friday February 10, @04:13PM (#1291110)

        I am an AI Scientist with regular media engagements and it isn't exactly clear how to put down the crack pipe of sensationalism. In personal conversations, rather than say "AI is XXX", I just say "Computers". As in "I'm a computer researcher" and "I have a PhD in computer engineering" and "I do research to make computers better at doing things". I am also a reasonably top-ranked AI artist.

        That said - the processes behind what is modern [computer-aided-XYZ] are somewhat different than previously. Midjourney/DALLE is not Photoshop/GIMP. DSO.ai is not autoCAD. chatGPT is not Word/GDocs/autocomplete. They are somewhat new tools and they require a different set of skills and thinking to use; none deny this.

        How would you go about describing this [new set of computer tools, replacing existing computer tools, requiring somewhat different skillsets, from published research at AI and machine learning conferences]? I'm honestly open to suggestions and will happily work actively to popularize a good one, as the existing ones are somewhat bad. "AI writes the new GameOfThrones plot" simply isn't accurate, but "new tools, which include AI items, are being used by script-writers, who had to learn the new tool, which uses pretrained models on cloud-hosted computers to aid in their writing of the new GameOfThrones plot", while more accurate, doesn't particularly convey the message. "Computers win art contest" is also not accurate - it won in the digital art category to begin with, for God's sake - *obviously* computers were going to be used.

        Real question - how would you describe the situation so as to be less sensationalist? You use the words - if I like them - I'll start using and publicizing them.

        • (Score: 3, Interesting) by PiMuNu on Friday February 10, @07:13PM (5 children)

          by PiMuNu (3823) on Friday February 10, @07:13PM (#1291136)

          The very letters AI are incorrect. These algorithms are simply *not* intelligent. Any suggestion that they are is wrong.

          > "AI writes the new GameOfThrones plot" simply isn't accurate

          How about "Novel algorithm used to write the new GameOfThrones plot" or "New GameOfThrones plot written using computerised amalgamation of existing scripts" or "New GameOfThrones plot developed using neural network"

          Machine Learning is at least a start, but when you dig, it turns out that "Machine Learning" actually refers more-or-less to any computer program that has internal state (which describes approximately every program ever written).

          • (Score: 3, Insightful) by SunTzuWarmaster on Friday February 10, @07:59PM (2 children)

            by SunTzuWarmaster (3971) on Friday February 10, @07:59PM (#1291143)

            My phrase for that phenomenon is "AI exists in presentations - Machine Learning exists in code." I then get some amount of mileage with the joke "Since this is a presentation, there is no machine learning :)."

            To transform the title here:
            "An AI 'Engineer' Has Now Designed 100 Chips"
            It isn't exactly accurate to say 'novel algorithm' (not novel, just a novel application) or 'amalgamation of chips' (something new was produced; it isn't an amalgamation) or 'developed using ANN' (an ANN would be part of it - but probably a GAN, CNN, LSTM, and other tech in addition to reality-checkers and other post-generation tools). It's also not exactly 'novel' if it just ran for the 100th time...

            As such, the "An AI 'Engineer' Has Now Designed 100 Chips" is actually a bit more accurate than the suggestions - as much as we both don't like it.

            • (Score: 3, Interesting) by bzipitidoo on Saturday February 11, @12:39AM

              by bzipitidoo (4388) Subscriber Badge on Saturday February 11, @12:39AM (#1291186) Journal

              Intelligence is on a spectrum. We can't draw a sharp line and say "this is just a machine, doing very mechanical, deterministic operations, but over here is another device that is intelligent." It might be tempting to make that sharp dividing line fall between Von Neumann machines and neural networks, but this would be wrong. Another tempting line is between Turing complete and not Turing complete, but a problem with that is that it takes shockingly little to be Turing complete. You have to step down to really simple cellular automata before you see examples of both kinds. Same with logic gates: NAND and NOR are each universal (everything can be built from them alone); AND, OR, and XOR are not. Maybe that's a problem with our expectations.
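
              The NAND point is easy to see concretely. A minimal sketch (plain Python, nothing from the thread) building NOT, AND, OR, and XOR out of NAND alone, which is exactly what makes NAND universal:

                  def nand(a: bool, b: bool) -> bool:
                      return not (a and b)

                  def not_(a):    return nand(a, a)
                  def and_(a, b): return not_(nand(a, b))
                  def or_(a, b):  return nand(not_(a), not_(b))
                  def xor_(a, b): return and_(or_(a, b), nand(a, b))

                  # Truth-table check: the AND/OR/XOR values below are
                  # reconstructed purely from NAND.
                  for a in (False, True):
                      for b in (False, True):
                          print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))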

              Look at animal intelligence. It's generally recognized that predatory animals are smarter because they have to be smarter to catch prey. Drill down to lower and lower animals, and what do we get? Insects, yes, they have intelligence. How about jellyfish? Yes! Amoeba? Yes! So it seems we could draw a sharp line, but it will be nearly uselessly low. Lumping amoeba in with people just isn't that useful. Finer granularity is called for.

            • (Score: 2) by PiMuNu on Saturday February 11, @03:55PM

              by PiMuNu (3823) on Saturday February 11, @03:55PM (#1291280)

              > It isn't exactly accurate to say 'novel algorithm'

              You are correct. The title should be "algorithm used in design of new chips"

              It doesn't sound very glamorous, but then it really isn't very glamorous and that is the point.

          • (Score: 2) by darkfeline on Friday February 10, @09:26PM (1 child)

            by darkfeline (1030) on Friday February 10, @09:26PM (#1291152) Homepage

            You'd better define intelligence first, because most humans aren't intelligent either, so it does not rule out "AI" being smarter than the average human.

            --
            Join the SDF Public Access UNIX System today!
            • (Score: 0) by Anonymous Coward on Saturday February 11, @02:57PM

              by Anonymous Coward on Saturday February 11, @02:57PM (#1291265)
              Intelligence is when even your mistakes make sense... 😉

              But yeah lots of stupid humans around. And many "Florida Man" mistakes don't make sense...
        • (Score: 2) by gawdonblue on Friday February 10, @08:44PM

          by gawdonblue (412) on Friday February 10, @08:44PM (#1291149)

          Why ask us?

          Instead: "Hey ChatGPT - describe this thing I do."

          Simples :)

        • (Score: 2) by stormreaver on Saturday February 11, @12:25AM

          by stormreaver (5101) on Saturday February 11, @12:25AM (#1291183)

          How would you go about describing this [new set of computer tools, replacing existing computer tools, requiring somewhat different skillsets, from published research at AI and machine learning conferences] ?

          I would start with exactly what you outlined. Emphasize that these programs are just new tools, and completely downplay the entire notion that they are, in any way, shape, or form some kind of artificial intelligence. I don't expect that to go well, though, in a group of people already convinced that "AI" is real.

          Midjourney/DALLE is not Photoshop/GIMP. DSO.ai is not autoCAD. chatGPT is not Word/GDocs/autocomplete.

          Midjourney/DALLE is to graphic arts as MacPaint was to simple MS-DOS drawing programs. It required a new way of thinking about computer art, but it wasn't going to replace human artistic expression. All of those programs will always be limited to reproducing thin variations of well-established norms, and will never be capable of creating brand new inspirations. Of course, an argument can be made that all new inspirations are merely a shortcut through what would inevitably be discovered through systematic iteration -- the million monkeys on typewriters -- at which computer systems excel. But that would still require human intervention to find the full works of Shakespeare in an infinite mound of insipid garbage. Music is mathematically producible, for example, but the vast majority of mathematical music is trash. The same is true of these new tools. Without skilled/talented human direction and intervention, the output is completely useless.

        • (Score: 2) by The Vocal Minority on Saturday February 11, @04:09AM (6 children)

          by The Vocal Minority (2765) on Saturday February 11, @04:09AM (#1291216) Journal

          AI Assisted [blah]?
          AI Aided? AIAD instead of CAD :)

          chatGPT is difficult to classify in this way because it is useful for a whole range of tasks, which I think will in the future be integrated more systematically into word processors etc. (privacy concerns aside), but perhaps "AI Assisted Writing" would cover it.

    • (Score: 0) by Anonymous Coward on Friday February 10, @04:45PM (3 children)

      by Anonymous Coward on Friday February 10, @04:45PM (#1291115)

      Also, isn't chip layout today mostly a matter of computer-assisted Tetris to get all the tiny little transistors in place? It's not like they are being done with pen and paper by hand any more.

      • (Score: 2) by RS3 on Friday February 10, @07:34PM (2 children)

        by RS3 (6367) on Friday February 10, @07:34PM (#1291141)

        On the larger scale, in circuit board design we've used "auto-routing" for at least 30 years. I don't do that work, but have been generally involved. Depending on the specifics of the circuit, a human may need to say "these things need to be here, and these over there, but overall, autorouter, have at it".

        The autorouter then moves chips around, optimizing, as you can imagine, for minimal "vias" (plated through-holes / jumpers). That then gets into issues of wire (board "trace") length, signal/pulse timings, "race conditions", etc., so circuit simulation sees the problems and coaches the autorouter to move things and try again, or just intentionally makes some traces longer if that will fix the problem.
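
        A minimal sketch of that kind of penalty minimization, with invented weights and candidates (real routers work on grids under design rules; none of this comes from any actual EDA tool):

            def route_cost(num_vias: int, trace_len_mm: float, slack_ns: float) -> float:
                """Toy cost: vias are expensive, length adds delay, and a
                negative timing slack is treated as a hard violation."""
                via_penalty = 2.0 * num_vias
                length_penalty = 0.1 * trace_len_mm
                timing_penalty = 1000.0 if slack_ns < 0 else 0.0
                return via_penalty + length_penalty + timing_penalty

            # Three hypothetical candidate routes: (vias, length in mm, slack in ns)
            candidates = [(3, 42.0, 0.5), (1, 58.0, 0.2), (0, 95.0, -0.1)]
            best = min(candidates, key=lambda c: route_cost(*c))
            print("chosen route (vias, length, slack):", best)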

        • (Score: 0) by Anonymous Coward on Friday February 10, @08:20PM

          by Anonymous Coward on Friday February 10, @08:20PM (#1291146)

          There must be more to it than optimizing layout on a PCB. That seems like something a classical search would handle with whatever targeted penalty minimization you choose. The (limited) uses of AI that I have seen are feature detection in images - i.e. creating a horrifically complicated set of filters and switches that best classifies objects from a list. Not sure how this would apply to "designing 100 chips".

        • (Score: 3, Interesting) by corey on Friday February 10, @10:48PM

          by corey (2202) on Friday February 10, @10:48PM (#1291173)

          I do this, but I don't normally use an autorouter. I still have it in my brain that autorouters are not much good. Just like the Altium signal analysis subsystem (which we don't use either). Maybe I should try it.

          Very much sounds like AI but it’s just a minimisation and optimisation problem, not a learning algorithm.

  • (Score: 4, Insightful) by quietus on Friday February 10, @03:55PM (7 children)

    by quietus (6328) on Friday February 10, @03:55PM (#1291103) Journal

    The hardware requirements for Bloom, an open source competitor for ChatGPT, are 384 GPUs each with 80GB memory for model training. To run the model, 80 GPUs, each with (again) 80GB of memory are required. What that means is that you can get a modern machine learning system for roughly half a million dollars/euros (source [notion.site]).
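
    A quick back-of-envelope check on those figures (assuming 80 GB per GPU as stated; the price estimate is the source's, not derived here):

        # Totals implied by the quoted Bloom requirements (assumption: 80 GB/GPU).
        train_gpus, infer_gpus, gb_per_gpu = 384, 80, 80
        print(f"training VRAM:  {train_gpus * gb_per_gpu:,} GB")   # 30,720 GB, ~30 TB
        print(f"inference VRAM: {infer_gpus * gb_per_gpu:,} GB")   # 6,400 GB, ~6.4 TB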

    That makes these things affordable for even small and medium businesses, and that is the real sea change here.

    • (Score: 2) by DannyB on Friday February 10, @04:09PM (4 children)

      by DannyB (5839) Subscriber Badge on Friday February 10, @04:09PM (#1291106) Journal

      The price is likely to only go down.

      Especially if machines can design chips with better PPA.

      power, performance, and area, or PPA as it's often called

      Once upon a time a mainframe with a megabyte of memory cost well over a million dollars. Then minicomputers were in the low to mid hundreds of thousands. And then came microcomputers in the mid 1970s.

      GPUs may not even be optimal hardware for these AI implementations. GPUs may simply be the most convenient fast hardware we have at the moment. Why did Google design TPUs (Tensor processing units) -- special hardware optimized just for AI learning models?

      My point: expect more powerful systems and lower prices.

      Maybe like "personal computers" we will end up with "personal AIs".

      --
      How often should I have my memory checked? I used to know but...
      • (Score: 2) by optotronic on Saturday February 11, @03:24AM (3 children)

        by optotronic (4285) on Saturday February 11, @03:24AM (#1291210)

        Maybe like "personal computers" we will end up with "personal AIs".

        Personal AIs, at least in the form of advanced digital personal assistants, have been in science fiction for decades (but don't ask me where; it's been too long). Now it seems that if or when they become available broadly it would only be as a service, paid by subscription, or with ads, or by the provider monetizing your personal data. Personally I would be happy to pay a reasonable amount to own my own advanced personal digital assistant (APDA?) if it provided real value and didn't share my details with anyone. I'm not sure what it would take for me to trust it, though.

        • (Score: 2) by DannyB on Monday February 13, @05:36PM (2 children)

          by DannyB (5839) Subscriber Badge on Monday February 13, @05:36PM (#1291579) Journal

          That is like assuming that mainframe computers would only ever be available as a service by subscription, e.g. The Source, then CompuServe, then AOL and Prodigy, etc.

          It was that way for a time, simply because "real" computers were more powerful than those pesky "toy" microcomputers. But the microcomputers grew up into real computers, and they rebelled: blue jeans instead of rows of neat button-down programmers with identical cookie-cutter haircuts "being creative". At least the mainframe crowd were color coordinated in their blue suits and white shirts, each exactly like their coworkers.

          --
          How often should I have my memory checked? I used to know but...
          • (Score: 2) by optotronic on Saturday February 18, @02:53AM (1 child)

            by optotronic (4285) on Saturday February 18, @02:53AM (#1292315)

            I don't doubt that the hardware and technology will be affordable to a wide variety of people. However, I expect the providers to continue their current path of preferring to sell subscriptions for a fee or in exchange for personal data, or for the right to spam your life.

            I don't see how power will be returned to the people as happened with personal computers. Maybe the internet, in terms of dependence on external data, has made that impossible. I would be thrilled to be wrong.

            • (Score: 2) by DannyB on Sunday February 19, @09:03PM

              by DannyB (5839) Subscriber Badge on Sunday February 19, @09:03PM (#1292611) Journal

              That's like saying that the software technology will be locked up by Microsoft and you'll have no choice on what OS you can run on your machine.

              Maybe the owners of the current AI software will lock it up. However others will develop similar software and not lock it up. It is inevitable.

              --
              How often should I have my memory checked? I used to know but...
    • (Score: 0) by Anonymous Coward on Friday February 10, @04:11PM

      by Anonymous Coward on Friday February 10, @04:11PM (#1291108)

      Something that can garble out infinite amounts of corporate messaging - which is precisely what these regurgitation robots are trained for - only has novelty value. After 5 minutes, you'll be looking for the off switch. Siri, how do you turn yourself off?

    • (Score: 1, Insightful) by Anonymous Coward on Saturday February 11, @03:00PM

      by Anonymous Coward on Saturday February 11, @03:00PM (#1291266)

      That makes these things affordable for even small and medium businesses, and that is the real sea change here.

      What are the data requirements? How many samples does it need to "learn"?

  • (Score: 3, Interesting) by crafoo on Friday February 10, @04:09PM

    by crafoo (6639) on Friday February 10, @04:09PM (#1291107)

    You guys might know Synopsys, and I certainly do. They're a software shop selling licenses and support for all the major engineering tools: whatever Pro-Engineer/Creo is called now, all those electronic design and simulation packages, and very niche physics simulation software for various E&M, optics, fluids, solids ray tracing, FMEA, etc. They buy these companies up and invest in their development, support, and of course marketing.

    I can see why they would be highly motivated to invest in any AI tool they could offer customers. Optimizing optical systems, for instance, has always been a huge time-sink. I'm sure AI could really help us humans when we design vast, complex systems with huge parameter design spaces.

    Pairing physical simulation software with an AI system, and allowing it to train itself and work with an adversarial AI - peanut butter and jelly.
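
    A minimal sketch of that pairing, under invented assumptions (a stand-in "simulator" and a random-search adversary; nothing here reflects any real EDA product): one loop perturbs a design parameter, the adversary hunts for the operating condition that most degrades it, and the change is kept only if it beats the incumbent at its own worst case.

        import random

        def simulate(gain: float, load: float) -> float:
            # Stand-in physics model: performance degrades as gain x load grows.
            return gain - 0.5 * gain * gain * load

        best_gain = 1.0
        for _ in range(200):
            trial = best_gain + random.uniform(-0.1, 0.1)            # designer's move
            worst_load = min((random.uniform(0.0, 1.0) for _ in range(25)),
                             key=lambda load: simulate(trial, load))  # adversary's move
            if simulate(trial, worst_load) > simulate(best_gain, worst_load):
                best_gain = trial                                     # keep robust improvement

        print(f"gain after adversarial tuning: {best_gain:.3f}")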
