
SoylentNews is people

posted by martyb on Sunday June 11, @05:00AM   Printer-friendly
from the cure-for-the-common-code? dept.

Google DeepMind's Game-Playing AI Just Found Another Way to Make Code Faster

Google DeepMind's game-playing AI just found another way to make code faster:

It has also found a way to speed up a key algorithm used in cryptography by 30%. These algorithms are among the most common building blocks in software. Small speed-ups can make a huge difference, cutting costs and saving energy.

"Moore's Law is coming to an end, where chips are approaching their fundamental physical limits," says Daniel Mankowitz, a research scientist at Google DeepMind. "We need to find new and innovative ways of optimizing computing."

"It's an interesting new approach," says Peter Sanders, who studies the design and implementation of efficient algorithms at the Karlsruhe Institute of Technology in Germany and who was not involved in the work. "Sorting is still one of the most widely used subroutines in computing," he says.

DeepMind published its results in Nature today. But the techniques that AlphaDev discovered are already being used by millions of software developers. In January 2022, DeepMind submitted its new sorting algorithms to the organization that manages C++, one of the most popular programming languages in the world, and after two months of rigorous independent vetting, AlphaDev's algorithms were added to the language. This was the first change to C++'s sorting algorithms in more than a decade and the first update ever to involve an algorithm discovered using AI.

DeepMind added its other new algorithms to Abseil, an open-source collection of prewritten C++ algorithms that can be used by anybody coding with C++. These cryptography algorithms compute numbers called hashes that can be used as unique IDs for any kind of data. DeepMind estimates that its new algorithms are now being used trillions of times a day.

[...] DeepMind chose to work with assembly, a programming language that can be used to give specific instructions for how to move numbers around on a computer chip. Few humans write in assembly; it is the language that code written in languages like C++ gets translated into before it is run. The advantage of assembly is that it allows algorithms to be broken down into fine-grained steps—a good starting point if you're looking for shortcuts.

Journal Reference:
Daniel J. Mankowitz, Andrea Michi, Anton Zhernov, et al. Faster sorting algorithms discovered using deep reinforcement learning [open], Nature (DOI: 10.1038/s41586-023-06004-9)


Original Submission

This discussion was created by martyb (76) for logged-in users only.
  • (Score: 4, Informative) by looorg on Sunday June 11, @12:28PM (3 children)

    by looorg (578) on Sunday June 11, @12:28PM (#1310971)

    DeepMind chose to work with assembly, a programming language that can be used to give specific instructions for how to move numbers around on a computer chip. Few humans write in assembly; it is the language that code written in languages like C++ gets translated into before it is run. The advantage of assembly is that it allows algorithms to be broken down into fine-grained steps—a good starting point if you're looking for shortcuts.

    They make it sound like it's magic. Perhaps for modern programmers it is. The claim I bolded -- that assembly "is the language that code written in languages like C++ gets translated into before it is run" -- what? No. It's at best an intermediate step for some compilers. It's not the end product that gets run or executed; that's machine code. I guess this explains why there are so few assembly language programmers still around ... or it's confirmation that tech journalists are idiots. Those that can't do, teach; those that can't code become tech journalists?

    • (Score: 3, Interesting) by DadaDoofy on Sunday June 11, @01:13PM (2 children)

      by DadaDoofy (23827) on Sunday June 11, @01:13PM (#1310977)

      As long as Moore's law applied, assembly language programming was only necessary for a tiny fraction of applications. Now that it's close to exhausted, assembly language matters again. Especially if it can be optimized through AI.

      • (Score: 2) by looorg on Sunday June 11, @01:49PM (1 child)

        by looorg (578) on Sunday June 11, @01:49PM (#1310983)

        So make Assembly Language great again? Sure, as long as there were just more cycles or RAM or whatever around to toss at it, then for the most part you didn't have to bother with optimization except in various fringe cases. But it's not like optimization has gone out of fashion; it just moved, from the human brain into the compiler. But perhaps now the age of the braindead programmer is over. Not likely. I don't see low-code/no-code being made for, or outputting, optimized code anytime soon.

        But back to the article: if they can't tell C++ from assembly language from machine code, then perhaps they should just stick to C++ and call the libraries and functions created by others. You'll get some optimization, but it will be generic optimization.

        • (Score: 0) by Anonymous Coward on Monday June 12, @01:42AM

          by Anonymous Coward on Monday June 12, @01:42AM (#1311065)

          But perhaps now the age of the braindead programmer is over, not likely.

          Definitely not. The drive toward efficiency is always to commoditize labor into cheap, easily replaceable parts. As long as programmers do not fit that description, there will be a push to break their job up into parts that are cheap and easily replaceable. I tend to share Byung-Chul Han's view that it's sort of out of our hands now. The market will optimize humans into extinction.

  • (Score: 5, Interesting) by bradley13 on Sunday June 11, @01:32PM (3 children)

    by bradley13 (3053) on Sunday June 11, @01:32PM (#1310980) Homepage Journal

    I only looked at one code example. The sorting code first identifies the number of elements to be sorted. If the number is small, then special, branchless code is called to perform the operation. A general sort is only performed for larger numbers of elements. For small numbers of elements, the special code ensures that the CPU pipeline never has to be emptied, which would cost a lot of CPU cycles.

    The bit I looked at was the specialized code for three elements. The AI found a way to save one assembly instruction (a move instruction) in two different places. I've written assembly, and I see no reason why a human wouldn't have found the same optimization, if they really cared about saving a fraction of a CPU cycle. They didn't see it, but then, saving a fraction of a CPU cycle isn't worth a huge level of effort.

    All of the AI code discoveries so far amount to very small snippets of code. They have as input code that works, and they either have to regurgitate it without hallucinating, or - in this case - make essentially trivial changes to it. Nothing to be impressed with...

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by looorg on Sunday June 11, @01:45PM (2 children)

      by looorg (578) on Sunday June 11, @01:45PM (#1310981)

      While perhaps not the great leap forward in optimization, if you just run the piece of code enough times it adds up. If you pay per transaction or CPU cycle or something, it could matter. If you can remove, or save, that one instruction, and you have it embedded in a COBOL program that processes billions upon billions of transactions, then the little fraction you save on every run will add up in the end to lower runtime costs.

      But by itself, if run less often, or once in a blue moon, then yes, perhaps not so impressive. Also, as noted, I am fairly sure that a competent human (or programmer) would have noticed it too, if asked or if it had mattered. Perhaps by skipping the instruction or step you'd break some code convention; not that the code wouldn't compile or work without it. It just wasn't a problem.

      It's sort of like how, technically, according to the manual, you're supposed to do a lot of things in various programming languages, but you can skip them because the compiler will assume you did them, or because it just doesn't matter for the program. Doing them is still proper according to the specs, though.

      • (Score: 5, Interesting) by gznork26 on Sunday June 11, @02:05PM

        by gznork26 (1159) on Sunday June 11, @02:05PM (#1310986) Homepage Journal

        Mixing languages can save a lot of execution time. Back in the 70s, when I had a contract writing COBOL at a steel mill, I was tasked with a data conversion that would take about 10 minutes of CPU time on the mainframe to run, which would have caused an unacceptable work-stoppage for other users. The specific problem was turning the variable-size nodes of each path through a custom hierarchical database into a serial record so that it could be read by an off-the-shelf auditing package. To do that in COBOL, as requested, meant redefining the 32K buffer into a byte array and filling it one byte at a time.

        I also knew FORTRAN and assembler, and had found a pre-written assembler routine on-site that would fill that 32K buffer much faster. Management refused to allow me to use code it already owned, so I ran many tests to try to save a small amount of time using COBOL tricks, until they finally relented and gave me permission to use that routine. With it installed, the conversion took 10 minutes of time-shared clock time.

        So I'm not surprised in the least that huge improvements can be made using assembler code where it matters.

      • (Score: 0) by Anonymous Coward on Monday June 12, @01:02AM

        by Anonymous Coward on Monday June 12, @01:02AM (#1311063)

        > If you just run the piece of code enough time it adds up.

        The place where this happens all the time is anything realtime. We supply some software that goes into high end driving simulators and it has to be fast--we haven't gotten into assembler optimization yet, but I believe that other parts of the system (done by others) have. All sorts of real time gaming too.
