Virtually every processor you see is based on the same basic (von Neumann) computing model: it's designed to access large chunks of sequential data and fill its caches as often as possible. That isn't the quickest way to accomplish every task, however, and the American military wants to explore an entirely different kind of chip. DARPA is spending $80 million to fund the development of the world's first graph analytic processor. The HIVE (Hierarchical Identify Verify Exploit) accesses random, 8-byte data points from the system's global memory, crunching each of those points individually. That's a much faster approach for handling big data, which frequently involves many relationships between datasets. It's also extremely scalable, so you can use as many HIVE chips as you need to accomplish your goals.
The agency isn't alone in its work: Intel, Qualcomm and Northrop Grumman are involved, as are researchers at Georgia Tech and Pacific Northwest National Laboratory.
Source: Engadget
(Score: 2) by ese002 on Tuesday June 13 2017, @11:16PM (2 children)
Not exactly. The point of caches is that they should *miss* as infrequently as possible. That isn't the same as *filling* as infrequently as possible. The difference is prefetching. If the cache can be filled in the background before there is a need, the processor need not experience a miss and can run at the speed of the cache. This works pretty well for sequential access. Modern memory systems have high bandwidth but also high latency. If access is sequential, then it is easy to predict which locations will be needed soon and have them in the caches before the CPU needs them. Sometimes the prediction will be wrong and those fetches will be wasted, but that is far better than waiting for the CPU to make its request and having it idle for dozens to hundreds of cycles while the line is fetched from DRAM.
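The "easy to predict" part for sequential access can be sketched as a simple stride predictor (the structure and names here are illustrative, not any real prefetcher design): if the stride between the last two accesses repeats, guess that the next access is one more stride ahead.

```c
/* Sketch of the stride prediction a simple hardware prefetcher performs.
   Illustrative only -- real prefetch engines are more elaborate. */
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINE 64

typedef struct {
    uint64_t last_addr;   /* address of the previous access */
    int64_t  last_stride; /* stride between the last two accesses */
    bool     valid;       /* have we seen at least one access? */
} stride_predictor;

/* On each access, guess the address to prefetch next: if the current
   stride matches the previous one, extrapolate it; otherwise fall back
   to the next cache line. A sequential scan always produces a constant
   stride, so the guess hits; pointer-chasing defeats it. */
uint64_t predict_next(stride_predictor *p, uint64_t addr) {
    int64_t stride = p->valid ? (int64_t)(addr - p->last_addr) : CACHE_LINE;
    uint64_t guess = addr + (stride == p->last_stride ? stride : CACHE_LINE);
    p->last_stride = stride;
    p->last_addr = addr;
    p->valid = true;
    return guess;
}
```

Feeding it a sequential scan (addresses 0, 64, 128, ...) makes every guess land one line ahead of the CPU, which is exactly the behavior the comment describes.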
Unfortunately, prefetch engines are generally pretty dumb and tend to fail when paired with more complex, "random"-looking patterns. Not only are the right locations not loaded in time, all the wrong locations being fetched actually interfere with fetching the right ones.
What it looks like the HIVE processor is doing is punting cache management to software. Having software manage caches is not as efficient as a hardware prefetch, but the software can know what its access pattern is and thus do a better job of loading the right locations at the right time.
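A conventional-CPU analogue of software-managed fetching is explicit prefetch hints. A minimal sketch using GCC/Clang's `__builtin_prefetch` (the function name, index-array gather, and `PREFETCH_DISTANCE` value are assumptions for illustration, not anything from HIVE): while gathering through an index array, the code prefetches the element several iterations ahead, because it, unlike the hardware, knows which index comes next.

```c
/* Sketch: software issuing its own prefetches for a "random" gather.
   Requires GCC or Clang for __builtin_prefetch; on other compilers the
   hint can simply be omitted with no change in results. */
#include <stdint.h>
#include <stddef.h>

#define PREFETCH_DISTANCE 8 /* tuning knob: how many iterations ahead */

int64_t gather_sum(const int64_t *data, const size_t *idx, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* Hint the cache to start loading the element we will need
           PREFETCH_DISTANCE iterations from now (read-only, low locality). */
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&data[idx[i + PREFETCH_DISTANCE]], 0, 1);
        sum += data[idx[i]];
    }
    return sum;
}
```

The prefetch is only a hint: correctness is unchanged, but on a large enough array the lines arrive by the time the loop reaches them instead of stalling on each miss.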
An alternative, higher-performing solution is to build a more complex prefetch engine. If you can somehow teach the prefetch engine your access pattern, it might be able to keep the caches properly filled without losing any CPU cycles. It is not clear whether this project intends to do anything like that.
(Score: 2) by FatPhil on Wednesday June 14 2017, @01:12PM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by FatPhil on Wednesday June 14 2017, @01:19PM
> Not exactly. The point of caches is they should *miss* as infrequently as possible.
Every miss leads to a fill, and every fill arises because of a miss. There is no distinction between a miss and a fill except the time delay between the two; they are just two ends of the same condition.