from the lets-MILK-it-for-all-it-is-worth dept.
In today's computer chips, memory management is based on what computer scientists call the principle of locality: if a program needs a chunk of data stored at some memory location, it probably needs the neighboring chunks as well. But that assumption breaks down in the age of big data, now that computer programs more frequently act on just a few data items scattered arbitrarily across huge data sets. Since fetching data from main memory is the major performance bottleneck in today's chips, having to do it more often can dramatically slow program execution.
This week, at the International Conference on Parallel Architectures and Compilation Techniques, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new programming language, called Milk, that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets. In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. But the researchers believe that further work will yield even larger gains.
http://phys.org/news/2016-09-language-fourfold-speedups-problems-common.html
[Source]: Faster parallel computing
(Score: 2) by bob_super on Tuesday September 13 2016, @06:57PM
So... to go 4x faster, we should reduce cache line sizes and not bother with wide memory buses?
(Score: 2) by davester666 on Wednesday September 14 2016, @05:01AM
So, the problems happen 4 times faster? Or 4 times more often?
Why would anyone use this language?
(Score: 3, Insightful) by JoeMerchant on Tuesday September 13 2016, @07:25PM
Fortran delivered all kinds of speedups and optimizations for common problems relative to hand-written Assembly language.
C++ and its APIs delivered tremendous efficiencies in development of windowed, GUI applications, network client-server applications, and other common problems that the APIs addressed.
Script flavors du jour deliver development-time efficiencies with horrifically inefficient pre-packaged solutions for common problems. AND, they provide a great baseline for the next "optimized super-language" to tout efficiency gains over.
🌻🌻🌻 [google.com]
(Score: 4, Informative) by maxwell demon on Tuesday September 13 2016, @07:40PM
Maybe you should have first RTFA. The title is utterly misleading; this is not a new language, it's an extension to OpenMP. If I understand correctly, basically it coordinates the memory accesses from different cores.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by bob_super on Tuesday September 13 2016, @08:07PM
> coordinates the memory accesses from different cores.
That's my take from TFA, which puzzles me: does it mean the core stays stalled until enough other cores are requesting data?
I wanna see that decision state machine.
(Score: 1) by pasky on Tuesday September 13 2016, @08:27PM
That may be fine when you are scheduling many threads on a single core and you want to optimize throughput, not latency.
(Score: 2) by opinionated_science on Tuesday September 13 2016, @08:21PM
an example would be nice to show where it might benefit. OpenMP is not too hard, but is still quite sensitive to deployment.
(Score: 0) by Anonymous Coward on Tuesday September 13 2016, @09:19PM
Thank God! I thought it was going to be another of those "see how great Rust is?" type of articles.
(Score: 0) by Anonymous Coward on Wednesday September 14 2016, @03:55PM
The Rust article was elided.
(Score: 2, Disagree) by JoeMerchant on Tuesday September 13 2016, @11:22PM
The snarky answer is that clearly either: a) the editor did not read the article, or b) they're mentally challenged when it comes to writing appropriate headlines.
I do see from the summary that the meat of the news is regarding sparse table accesses and how it relates to paged memory fetching (poorly, for efficiency).
What I fail to see is why a new language would be necessary or even desired in association with an advanced memory management technique - it would seem to be something better implemented in an API provided for an existing language, or three.
🌻🌻🌻 [google.com]
(Score: 5, Informative) by Post-Nihilist on Tuesday September 13 2016, @09:34PM
There is a full text link there:
http://dl.acm.org/citation.cfm?id=2967948 [acm.org]
I have not yet read it, but it looks more substantial than the press office communiqué
Be like us, be different, be a nihilist!!!
(Score: 2) by opinionated_science on Wednesday September 14 2016, @12:45AM
thanks. had a quick read and passed it on to colleagues in the graph area...
(Score: 3, Informative) by bradley13 on Wednesday September 14 2016, @06:26AM
Thanks for the link. Just the abstract already clarifies a lot. Here is the key sentence:
"In this paper, we introduce milk — a C/C++ language extension that allows programmers to annotate memorybound loops concisely."
So it's not only not a programming language, it's just something that allows the programmer to give the compiler hints. Hints that the compiler then uses to optimize memory access, especially in the case of parallelism. "Milk" is also the name of their prototype compiler that processes these hints.
Everyone is somebody else's weirdo.