posted by martyb on Tuesday July 27 2021, @06:54AM

Will Approximation Drive Post-Moore's Law HPC Gains?:

“Hardware-based improvements are going to get more and more difficult,” said Neil Thompson, an innovation scholar at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). [...] Thompson, speaking at Supercomputing Frontiers Europe 2021, likely wasn’t wrong: the proximate death of Moore’s law has been a hot topic in the HPC community for a long time.

[...] Thompson opened with a graph of computing power utilized by the National Oceanic and Atmospheric Administration (NOAA) over time. “Since the 1950s, there has been about a one trillion-fold increase in the amount of computing power being used in these models,” he said. But there was a problem: tracking a weather forecasting metric called mean absolute error (“When you make a prediction, how far off are you on that prediction?”), Thompson pointed out that “you actually need exponentially more computing power to get that [improved] performance.” Without those exponential gains in computing power, the steady gains in accuracy would slow, as well.
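
For readers who want the metric pinned down, mean absolute error is just the average size of the miss between forecast and observation. A minimal sketch in Python, with invented numbers purely to show the arithmetic:

    def mean_absolute_error(predictions, observations):
        # Average magnitude of the gap between predicted and observed values.
        return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(predictions)

    # Hypothetical five-day temperature forecast (degrees C) vs. what was measured.
    forecast = [21.0, 23.5, 19.0, 25.0, 22.0]
    observed = [20.2, 24.1, 18.3, 26.5, 22.4]

    print(mean_absolute_error(forecast, observed))  # ~0.8 degrees C of average error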

Enter, of course, Moore’s law, and the flattening of CPU clock frequencies in the mid-2000s. “But then we have this division, right?” Thompson said. “We start getting into multicore chips, and we’re starting to get computing power in that very specific way, which is not as useful unless you have that amount of parallelism.” Separating out parallelism, he explained, progress had dramatically slowed. “This might worry us if we want to, say, improve weather prediction at the same speed going forward,” he said.
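
Thompson's point about parallelism is essentially the one Amdahl's law makes (the formula is textbook material, not something from the talk): whatever fraction of a workload stays serial caps the speedup, no matter how many cores are thrown at it. A quick sketch:

    def amdahl_speedup(parallel_fraction, cores):
        # Amdahl's law: the serial part of the work never shrinks, so it
        # bounds the overall speedup regardless of core count.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # With 90% of the work parallelizable, the speedup saturates near 10x
    # even as the core count grows by orders of magnitude.
    for cores in (2, 8, 64, 1024):
        print(cores, round(amdahl_speedup(0.90, cores), 2))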

So in 2020, Thompson and others wrote a paper examining ways to improve performance over time in a post-Moore’s law world. The authors landed on three main categories of promise: software-level improvements; algorithmic improvements; and new hardware architectures.

This third category, Thompson said, is experiencing the biggest moment right now, with GPUs and FPGAs exploding in the HPC scene and ever more tailor-made chips emerging. Just five years ago, only four percent of advanced computing users used specialized chips; now, Thompson said, it was 11 percent, and in five more years, it would be 17 percent. But over time, he cautioned, gains from specialized hardware would encounter similar problems to those currently faced by traditional hardware, leaving researchers looking for yet more avenues to improve performance.

[...] The way past these mathematical limits in algorithm optimization, Thompson explained, was through approximation. He brought back the graph of algorithm improvement over time, adding in approximate algorithms – one 100 percent off, one ten percent off. “If you are willing to accept a ten percent approximation to this problem,” he said, you could get enormous jumps, improving performance by a factor of 32. “We are in the process of analyzing this data right now, but I think what you can already see here is that these approximate algorithms are in fact giving us very very substantial gains.”
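
The factor-of-32 figure is Thompson's own measurement, but the accuracy-for-speed trade behind it can be seen in miniature with a toy example (not his benchmark): estimate an average from a small random sample instead of scanning every element, accepting a small error in exchange for roughly a thousandfold cut in work.

    import random

    random.seed(0)
    data = [random.random() for _ in range(1_000_000)]

    # Exact answer: touch every element.
    exact_mean = sum(data) / len(data)

    # Approximate answer: look at roughly 0.1% of the elements.
    sample = random.sample(data, 1_000)
    approx_mean = sum(sample) / len(sample)

    relative_error = abs(approx_mean - exact_mean) / exact_mean
    print(f"exact={exact_mean:.4f}  approx={approx_mean:.4f}  error={relative_error:.2%}")
    # Typically lands within a couple of percent of the exact value
    # for about a thousandth of the work.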

Thompson presented another graph, this time charting the balance of approximate versus exact improvements in algorithms over time. “In the 1940s,” he said, “almost all of the improvements that people are making are exact improvements – meaning they’re solving the exact problem. … But you can see that as we approach these later decades, and many of the exact algorithms are starting to become already completely solved in an optimal way … approximate algorithms are becoming more and more important as the way that we are advancing algorithms.”

Journal Reference:
Charles E. Leiserson, Neil C. Thompson, Joel S. Emer, et al. There’s plenty of room at the Top: What will drive computer performance after Moore’s law? [$], Science (DOI: 10.1126/science.aam9744)


Original Submission

 
  • (Score: 0) by Anonymous Coward on Wednesday July 28 2021, @12:17AM (#1160510) (1 child)

    That really depends on what you are doing. For example, "AI" work is commonly done in half precision because speed is more important than absolute accuracy. For a long time, general calculations in 32 bits were good enough, even with quadruple precision and double-double readily available. For some problems, any error is unacceptable; for other problems, an error of 10% or larger at the end is perfectly acceptable.
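
    A small illustration of that precision ladder (numpy assumed available; the value itself is arbitrary):

        import numpy as np

        # The same value stored at half, single and double precision, and how
        # far each stored copy drifts from the true value.
        true_value = 1.0 / 3.0
        for dtype in (np.float16, np.float32, np.float64):
            stored = dtype(true_value)
            print(dtype.__name__, stored, abs(float(stored) - true_value))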

  • (Score: 2) by sjames (2882) on Wednesday July 28 2021, @02:09AM (#1160534) Journal

    As I said, some models work with reduced precision. I have also seen models that sometimes terminate with numerical instability even at double or quad precision.