
posted by n1 on Tuesday June 13 2017, @01:18AM
from the red-queen dept.

Virtually every processor you see is based on the same basic (von Neumann) computing model: they're designed to access large chunks of sequential data and fill their caches as often as possible. This isn't the quickest way to accomplish every task, however, and the American military wants to explore an entirely different kind of chip. DARPA is spending $80 million to fund the development of the world's first graph analytic processor. The HIVE (Hierarchical Identify Verify Exploit) processor accesses random, 8-byte data points from the system's global memory, crunching each of those points individually. That's a much faster approach for handling big data, which frequently involves many relationships between data sets. It's also extremely scalable, so you can use as many HIVE chips as you need to accomplish your goals.
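
For a concrete picture, here is a short C sketch (purely illustrative; the CSR layout and function names are not from DARPA's design) contrasting the sequential, cache-friendly access pattern conventional CPUs are built for with the scattered 8-byte reads typical of graph analytics:

    #include <stdint.h>
    #include <stddef.h>

    /* Sequential pass: adjacent 8-byte loads; caches and hardware
     * prefetchers handle this pattern very well. */
    uint64_t sum_stream(const uint64_t *data, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += data[i];            /* trivially predictable address */
        return sum;
    }

    /* Graph pass over a CSR (compressed sparse row) adjacency structure:
     * each edge reads a neighbor's 8-byte value at an effectively random
     * address, so almost every load can miss the cache. row_ptr holds
     * num_vertices + 1 offsets into col_idx. */
    uint64_t sum_neighbors(const size_t *row_ptr, const uint32_t *col_idx,
                           const uint64_t *value, size_t num_vertices)
    {
        uint64_t sum = 0;
        for (size_t v = 0; v < num_vertices; v++)
            for (size_t e = row_ptr[v]; e < row_ptr[v + 1]; e++)
                sum += value[col_idx[e]];  /* scattered 8-byte load */
        return sum;
    }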

The agency isn't alone in its work: Intel, Qualcomm and Northrop Grumman are involved, as are researchers at Georgia Tech and Pacific Northwest National Laboratory.

Source: Engadget

Original Submission

  • (Score: 2) by ese002 (5306) on Tuesday June 13 2017, @11:16PM (#525144) (2 children)

    >> designed to access large chunks of sequential data and fill their caches as often as possible

    > I disagree. The whole point of caches is that they're filled as infrequently as possible, as access within the cache is fast, but replacing data in the cache is expensive.

    Not exactly. The point of caches is that they should *miss* as infrequently as possible. That isn't the same as *filling* as infrequently as possible. The difference is prefetching. If the cache can be filled in the background before there is a need, the processor never experiences a miss and can run at the speed of the cache. This works pretty well for sequential access. Modern memory systems have high bandwidth but also high latency. If access is sequential, it is easy to predict which locations will be needed soon and have them in the caches before the CPU needs them. Sometimes the prediction will be wrong and those fetches will be wasted, but that is far better than waiting for the CPU to make its request and having it sit idle for dozens to hundreds of cycles while the line is fetched from DRAM.
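
    To make that concrete, a minimal C sketch (illustrative only) that uses the GCC/Clang __builtin_prefetch intrinsic to mimic in software what a sequential hardware prefetcher does: request data a fixed distance ahead while working on the current element, so the line has arrived by the time it is needed.

        #include <stdint.h>
        #include <stddef.h>

        /* Elements to run ahead; tuned to cover DRAM latency. */
        #define PREFETCH_AHEAD 64

        uint64_t sum_with_prefetch(const uint64_t *data, size_t n)
        {
            uint64_t sum = 0;
            for (size_t i = 0; i < n; i++) {
                if (i + PREFETCH_AHEAD < n)    /* fetch in the background */
                    __builtin_prefetch(&data[i + PREFETCH_AHEAD], 0, 0);
                sum += data[i];                /* hopefully always a hit  */
            }
            return sum;
        }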

    Unfortunately, prefetch engines are generally pretty dumb and tend to fail on more complex, "random"-looking patterns. Not only are the right locations not loaded in time, but all the wrong locations being fetched actually interfere with fetching the right ones.
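
    The classic worst case is a dependent-load (pointer-chasing) pattern, sketched below in C purely for illustration: the address of the next load is unknown until the current load completes, so a stride-based prefetch engine has nothing to predict.

        #include <stdint.h>
        #include <stddef.h>

        struct node {
            struct node *next;   /* next address unknown until this load */
            uint64_t payload;
        };

        uint64_t sum_list(const struct node *head)
        {
            uint64_t sum = 0;
            for (const struct node *p = head; p != NULL; p = p->next)
                sum += p->payload;  /* each step stalls on the prior load */
            return sum;
        }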

    What it looks like the HIVE processor is doing is punting cache management to software. Having software manage the caches is not as efficient as a hardware prefetch, but the software can know what its access pattern is and thus do a better job of loading the right locations at the right time.
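
    A sketch of what "punting to software" can look like (a guess at the general technique, not anything from the HIVE design): a gather whose indices look random to hardware but are known to the program in advance, so the software issues its own prefetches a fixed distance ahead.

        #include <stdint.h>
        #include <stddef.h>

        #define GATHER_AHEAD 16  /* prefetch distance; workload-dependent */

        uint64_t gather_sum(const uint64_t *value, const uint32_t *idx,
                            size_t n)
        {
            uint64_t sum = 0;
            for (size_t i = 0; i < n; i++) {
                if (i + GATHER_AHEAD < n)   /* software knows the pattern */
                    __builtin_prefetch(&value[idx[i + GATHER_AHEAD]], 0, 0);
                sum += value[idx[i]];       /* random as far as the       */
            }                               /* hardware can tell          */
            return sum;
        }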

    An alternative and higher-performing solution is to build a more complex prefetch engine. If you could somehow teach the prefetch engine your access pattern, it might be able to keep the caches properly filled without losing any CPU cycles. It is not clear whether this project intends to do anything like that.

  • (Score: 2) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday June 14 2017, @01:12PM (#525403)
    That's not a cache. I was working on multi-core DSPs, where each core had its own local memory with immediate access, which could be block-filled and block-flushed quickly, long before caches were common on desktop machines. But that's not a cache. It's way cheaper than a cache, since it doesn't need to predict accesses or handle unpredictable access patterns; it's designed for stream/block processing. I don't think these two memory access philosophies should be conflated with each other.
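
    For anyone who hasn't met the scratchpad style of memory, a rough C sketch of the block-fill/process/block-flush pattern described above, with memcpy standing in for what would be DMA transfers into dedicated on-chip RAM on a real DSP:

        #include <string.h>
        #include <stdint.h>
        #include <stddef.h>

        #define BLOCK 1024
        static uint64_t scratch[BLOCK];  /* stands in for on-chip memory */

        void process_blocks(uint64_t *data, size_t n)
        {
            for (size_t off = 0; off < n; off += BLOCK) {
                size_t len = (n - off < BLOCK) ? n - off : BLOCK;
                memcpy(scratch, data + off,
                       len * sizeof *scratch);   /* block fill            */
                for (size_t i = 0; i < len; i++)
                    scratch[i] *= 2;             /* work hits local memory */
                memcpy(data + off, scratch,
                       len * sizeof *scratch);   /* block flush           */
            }
        }
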
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday June 14 2017, @01:19PM (#525405)
    >> The whole point of caches is that they're filled as infrequently as possible, ...

    > Not exactly. The point of caches is they should *miss* as infrequently as possible.

    Every miss leads to a fill, and every fill arises because of a miss. There is no distinction between a miss and a fill except the time delay between the two; they are just two ends of the same condition.
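
    For concreteness, a toy direct-mapped model of the two events being debated (purely illustrative; real caches are far more complex): here a demand miss always triggers a fill, while a prefetch triggers a fill without any miss, which is the distinction the parent drew.

        #include <stdint.h>
        #include <stdbool.h>

        #define SETS 256
        #define LINE 64

        static uint64_t tag[SETS];
        static bool     valid[SETS];
        static unsigned misses, fills;

        static bool present(uint64_t addr)
        {
            unsigned set = (addr / LINE) % SETS;
            return valid[set] && tag[set] == addr / LINE;
        }

        static void fill_line(uint64_t addr)  /* a fill, whatever caused it */
        {
            unsigned set = (addr / LINE) % SETS;
            tag[set] = addr / LINE;
            valid[set] = true;
            fills++;
        }

        void demand_access(uint64_t addr)     /* miss -> fill */
        {
            if (!present(addr)) {
                misses++;
                fill_line(addr);
            }
        }

        void prefetch(uint64_t addr)          /* fill without a miss */
        {
            if (!present(addr))
                fill_line(addr);
        }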