posted by janrinok on Tuesday September 13 2016, @06:51PM   Printer-friendly
from the lets-MILK-it-for-all-it-is-worth dept.

In today's computer chips, memory management is based on what computer scientists call the principle of locality: If a program needs a chunk of data stored at some memory location, it probably needs the neighboring chunks as well. But that assumption breaks down in the age of big data, now that computer programs more frequently act on just a few data items scattered arbitrarily across huge data sets. Since fetching data from their main memory banks is the major performance bottleneck in today's chips, having to fetch it more frequently can dramatically slow program execution.

This week, at the International Conference on Parallel Architectures and Compilation Techniques, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new programming language, called Milk, that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets. In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. But the researchers believe that further work will yield even larger gains.

http://phys.org/news/2016-09-language-fourfold-speedups-problems-common.html

[Source]: Faster parallel computing
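
For readers who want to see the shape of the problem, the kind of loop being described is sketched below in plain C (names and data are made up for illustration; this is not code from the paper). Every iteration touches an effectively random spot in a huge table, so the neighboring data pulled into cache alongside each element is wasted:

#include <stddef.h>

/* Scattered lookups into a large table: each access is a likely cache
 * miss, and the neighbouring data fetched along with it is never used,
 * so the locality assumption buys nothing. */
double sum_sampled(const double *table, size_t table_len,
                   const size_t *indices, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += table[indices[i] % table_len];   /* arbitrary, non-local index */
    return sum;
}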


Original Submission

 
  • (Score: 4, Informative) by maxwell demon on Tuesday September 13 2016, @07:40PM

    by maxwell demon (1608) on Tuesday September 13 2016, @07:40PM (#401455) Journal

    Maybe you should have first RTFA. The title is utterly misleading; this is not a new language, it's an extension to OpenMP. If I understand correctly, basically it coordinates the memory accesses from different cores.
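
    If that reading is right, using it presumably looks like an ordinary OpenMP loop with something extra on the pragma rather than a whole new language. A plain-OpenMP sketch of the kind of loop involved (my guess at the shape, not code from the paper):

    #include <stddef.h>

    /* A standard OpenMP gather over scattered indices; presumably Milk is
     * switched on by an additional clause on this same pragma (exact
     * syntax unknown to me) so the runtime may batch and reorder the
     * indirect reads of table[] across cores. */
    void gather(double *out, const double *table, const size_t *idx, size_t n)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++)
            out[i] = table[idx[i]];
    }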

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by bob_super on Tuesday September 13 2016, @08:07PM

    by bob_super (1357) on Tuesday September 13 2016, @08:07PM (#401459)

    > coordinates the memory accesses from different cores.

    That's my take from TFA, which puzzles me: does it mean the core stays stalled until enough other cores are requesting data?
    I wanna see that decision state machine.
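
    My naive guess at what that "state machine" amounts to (not from TFA): no core stalls at the hardware level; the compiler/runtime buffers the indirect accesses in software, groups them by target address, and only then goes to memory. A toy single-threaded version of the idea, purely to illustrate:

    #include <stdlib.h>

    typedef struct { size_t idx; size_t origin; } req_t;

    static int by_idx(const void *a, const void *b)
    {
        size_t x = ((const req_t *)a)->idx, y = ((const req_t *)b)->idx;
        return (x > y) - (x < y);
    }

    void gather_batched(double *out, const double *table,
                        const size_t *idx, size_t n)
    {
        req_t *reqs = malloc(n * sizeof *reqs);
        for (size_t i = 0; i < n; i++)          /* 1. collect the requests    */
            reqs[i] = (req_t){ idx[i], i };
        qsort(reqs, n, sizeof *reqs, by_idx);   /* 2. group by target address */
        for (size_t i = 0; i < n; i++)          /* 3. service them in order   */
            out[reqs[i].origin] = table[reqs[i].idx];
        free(reqs);
    }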

    • (Score: 1) by pasky on Tuesday September 13 2016, @08:27PM

      by pasky (1050) on Tuesday September 13 2016, @08:27PM (#401465)

      That may be fine when you are scheduling many threads on a single core and you want to optimize throughput, not latency.

  • (Score: 2) by opinionated_science on Tuesday September 13 2016, @08:21PM

    by opinionated_science (4031) on Tuesday September 13 2016, @08:21PM (#401463)

    An example would be nice to show where it might benefit. OpenMP is not too hard, but it is still quite sensitive to how it is deployed.
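
    The textbook candidate seems to be any parallel loop whose updates land at effectively random addresses, e.g. a histogram or per-vertex counter over a large data set. My own toy example, not taken from the paper:

    #include <stddef.h>

    /* Each key hashes to an arbitrary bucket, so nearly every increment is
     * a cache miss, with atomic contention on top; presumably this is the
     * sort of loop the fourfold-speedup claim is about. */
    void histogram(size_t *counts, size_t nbuckets,
                   const size_t *keys, size_t n)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++) {
            #pragma omp atomic
            counts[keys[i] % nbuckets]++;
        }
    }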

  • (Score: 0) by Anonymous Coward on Tuesday September 13 2016, @09:19PM

    by Anonymous Coward on Tuesday September 13 2016, @09:19PM (#401481)

    Thank God! I thought it was going to be another of those "see how great Rust is?" articles.

    • (Score: 0) by Anonymous Coward on Wednesday September 14 2016, @03:55PM

      by Anonymous Coward on Wednesday September 14 2016, @03:55PM (#401855)

      The Rust article was elided.

  • (Score: 2, Disagree) by JoeMerchant on Tuesday September 13 2016, @11:22PM

    by JoeMerchant (3937) on Tuesday September 13 2016, @11:22PM (#401510)

    The snarky answer is that clearly either: a) the editor did not read the article, or b) they're mentally challenged when it comes to writing appropriate headlines.

    I do see from the summary that the meat of the news is about sparse table accesses and how poorly they interact with paged memory fetching, efficiency-wise.

    What I fail to see is why a new language would be necessary or even desired in association with an advanced memory management technique - it would seem to be something better implemented in an API provided for an existing language, or three.
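
    For what it's worth, the API version isn't hard to imagine: the caller hands over the indices plus a callback and promises that visit order doesn't matter, and the library is then free to defer, sort, and batch the accesses internally. A hypothetical sketch, not any existing library:

    #include <stddef.h>

    typedef void (*visit_fn)(size_t index, void *ctx);

    /* Naive reference behaviour shown; a smarter implementation could
     * buffer and regroup the visits by address before running them. */
    void visit_indices(const size_t *indices, size_t n,
                       visit_fn visit, void *ctx)
    {
        for (size_t i = 0; i < n; i++)
            visit(indices[i], ctx);
    }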

    --
    🌻🌻 [google.com]