American Military Backs an Entirely New Kind of Processor

posted by n1 on Tuesday June 13 2017, @01:18AM
from the red-queen dept.

Virtually every processor you see is based on the same basic (von Neumann) computing model: they're designed to access large chunks of sequential data and fill their caches as often as possible. This isn't the quickest way to accomplish every task, however, and the American military wants to explore an entirely different kind of chip. DARPA is spending $80 million to fund the development of the world's first graph analytic processor. The HIVE (Hierarchical Identify Verify Exploit) processor accesses random 8-byte data points from the system's global memory, crunching each of those points individually. That's a much faster approach for handling big data, which frequently involves many relationships between data sets. It's also extremely scalable, so you can use as many HIVE chips as you need to accomplish your goals.
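
To make the contrast concrete, here is a minimal C sketch; the sizes, names, and workload are illustrative assumptions, not anything from DARPA's spec. A sequential scan is exactly what today's cache hierarchies are built for, while following graph edges degenerates into random 8-byte loads:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_NODES 1000000
    #define N_EDGES 4000000

    int main(void) {
        uint64_t *values = malloc(N_NODES * sizeof *values);
        uint32_t *edge_dst = malloc(N_EDGES * sizeof *edge_dst);
        if (!values || !edge_dst) return 1;

        for (size_t i = 0; i < N_NODES; i++) values[i] = i;

        /* Sequential workload: streams through memory in order, so the
           hardware prefetcher keeps the caches filled ahead of use. */
        uint64_t sum = 0;
        for (size_t i = 0; i < N_NODES; i++) sum += values[i];

        /* Graph workload: each edge endpoint is an effectively random
           8-byte read; caches and prefetchers get little traction. */
        uint64_t seed = 42;
        for (size_t e = 0; e < N_EDGES; e++) {
            seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
            edge_dst[e] = (uint32_t)((seed >> 33) % N_NODES);
        }
        uint64_t reach = 0;
        for (size_t e = 0; e < N_EDGES; e++) reach += values[edge_dst[e]];

        printf("%llu %llu\n", (unsigned long long)sum, (unsigned long long)reach);
        free(values);
        free(edge_dst);
        return 0;
    }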

The agency isn't alone in its work: Intel, Qualcomm and Northrop Grumman are involved, as are researchers at Georgia Tech and Pacific Northwest National Laboratory.

Source: Engadget

Original Submission

 
  • (Score: 2) by Runaway1956 on Tuesday June 13 2017, @01:53AM (8 children)

    by Runaway1956 (2926) Subscriber Badge on Tuesday June 13 2017, @01:53AM (#524758) Journal

    The idea of a bunch of new backdoors makes me want one of these CPUs - NOT!!

    • (Score: 0) by Anonymous Coward on Tuesday June 13 2017, @02:55AM

      by Anonymous Coward on Tuesday June 13 2017, @02:55AM (#524769)

      You're nominated to join the team as the designated whistleblower who checks for backdoors. Now take these coupons for discount hookers and get lost until your report is due. The young people have innovation to do.

    • (Score: 2) by Lagg on Tuesday June 13 2017, @03:01AM (4 children)

      by Lagg (105) on Tuesday June 13 2017, @03:01AM (#524771) Homepage Journal

      I cannot blame anyone whatsoever for whatever conspiracy theories they may have about surveillance at this point, so I'm not going to try to address that. But it is a shame that this is something we have to give a shit about in the first place. The concept itself seems pretty interesting, and it would have a logical implementation: just one big ol' fat multidimensional array.

      > This non-von-Neumann approach allows one big map that can be accessed by many processors at the same time, each using its own local scratch-pad memory while simultaneously performing scatter-and-gather operations across global memory.
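
      In plain C, the gather/compute/scatter idiom that quote describes might look something like the single-core sketch below; the scratch-pad size and the toy "compute" step are made up for illustration, not taken from the HIVE design:

          #include <stdint.h>
          #include <stddef.h>

          #define SCRATCH 256  /* elements in the per-core scratch-pad */

          /* Process n randomly indexed 8-byte elements of global memory,
             staging them through a small, fast local scratch-pad. */
          void gather_compute_scatter(uint64_t *global, const size_t *idx, size_t n) {
              uint64_t scratch[SCRATCH];
              for (size_t base = 0; base < n; base += SCRATCH) {
                  size_t chunk = (n - base < SCRATCH) ? n - base : SCRATCH;
                  /* gather: random 8-byte reads from global memory */
                  for (size_t i = 0; i < chunk; i++)
                      scratch[i] = global[idx[base + i]];
                  /* compute: work entirely out of the local memory */
                  for (size_t i = 0; i < chunk; i++)
                      scratch[i] = scratch[i] * 2 + 1;
                  /* scatter: write results back to the same random spots */
                  for (size_t i = 0; i < chunk; i++)
                      global[idx[base + i]] = scratch[i];
              }
          }

      The scalability claim falls out of the structure: many such loops could run in parallel, one per core, each with its own scratch array.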

      --
      http://lagg.me [lagg.me] 🗿
      • (Score: 0) by Anonymous Coward on Tuesday June 13 2017, @03:52AM (3 children)

        by Anonymous Coward on Tuesday June 13 2017, @03:52AM (#524781)

        What happened to publicly funded work being public IP? :)

        (I know, I know, that has never REALLY been true, but for paradigm-shifting publicly-funded technology, it really should be!)

        • (Score: 1) by anubi on Tuesday June 13 2017, @05:30AM (2 children)

          by anubi (2828) on Tuesday June 13 2017, @05:30AM (#524800) Journal

          Ummm... I believe that is exactly what this is: a "line item" crafted to justify a transfer of public funds to private entities, in a manner where the public funds it but the results are private.

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 2) by bzipitidoo on Tuesday June 13 2017, @02:10PM (1 child)

            by bzipitidoo (4388) on Tuesday June 13 2017, @02:10PM (#524914) Journal

            It would be better to run this project free of interference from the military boys, but it's much harder to get funding that way. So scare the crap out of Congress and the bureaucrats over a "CPU gap" with those Commies in China. Then work to stop the paranoid from pulling a "secrets man was not meant to know", the contemptuous anti-science types from declaring the project a failure and another example of govt waste, or the impatient "Rome was too built in a day" crowd from deciding progress isn't fast enough and killing and burying everything.

            • (Score: 2, Insightful) by anubi on Tuesday June 13 2017, @02:47PM

              by anubi (2828) on Tuesday June 13 2017, @02:47PM (#524934) Journal

              Trouble is, when Congress hires people, they hire hand-shakers and paper-signers, and pay them more money than the wildest imagination could conjure, while the technical people who actually do this kind of stuff are collecting unemployment benefits or early retirement.

              Passionate people don't seem to last long in the typical micromanaged military-industrial-complex environment, because we seem to be hired only for our paper. No one will listen to us. Geez, just how long have we been telling the suit crowd not to mix code and data? (Embedded executables in a document?) Tell the suit-guy your honest opinion and you get to the top of the layoff list, then get told how to "butter up the suit-guy" and tell him what he wants to hear to remain "on the team".

              It seems to come down to forced decisions between personal ethics and disobedience to authority.

              --
              "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 0) by Anonymous Coward on Tuesday June 13 2017, @04:09AM (1 child)

      by Anonymous Coward on Tuesday June 13 2017, @04:09AM (#524784)

      > The idea of a bunch of new backdoors

      They even blatantly admit it, right there in the name: HIVE (Hierarchical Identify Verify Exploit)

      • (Score: 0) by Anonymous Coward on Tuesday June 13 2017, @08:54AM

        by Anonymous Coward on Tuesday June 13 2017, @08:54AM (#524836)

        I guess an AI running on that architecture will then be a hive mind? :-)

  • (Score: 0) by Anonymous Coward on Tuesday June 13 2017, @03:28AM (4 children)

    by Anonymous Coward on Tuesday June 13 2017, @03:28AM (#524777)

    I'm trying to figure out how this will be used. The summary didn't help me very much, which called for extraordinary measures: I read the fine article. And I'm still confused.

    Can one of our resident experts provide a car analogy?

    • (Score: 4, Informative) by Anonymous Coward on Tuesday June 13 2017, @03:36AM (3 children)

      by Anonymous Coward on Tuesday June 13 2017, @03:36AM (#524779)

      Instead of one big autobahn where everyone queues up and then drives very fast in the same direction, you pave the entire area and everyone drives directly from one point to another. Individual trips are slower because you have to stop at yield and stop signs to avoid collisions, but overall everyone gets where they're going much quicker because you avoid the traffic jams at the autobahn on- and off-ramps.
      Also it scales better because you can just keep paving more area.

      • (Score: 1) by anubi on Tuesday June 13 2017, @05:27AM

        by anubi (2828) on Tuesday June 13 2017, @05:27AM (#524799) Journal

        Kinda like unregulated airspace?

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by inertnet on Tuesday June 13 2017, @08:51AM

        by inertnet (4071) on Tuesday June 13 2017, @08:51AM (#524834) Journal

        I guess this approach would consume much less power as well.

      • (Score: 0) by Anonymous Coward on Friday June 16 2017, @01:24PM

        by Anonymous Coward on Friday June 16 2017, @01:24PM (#526417)

        +4 Informative!!! Damn, I was going for funny.

  • (Score: 2, Interesting) by anubi on Tuesday June 13 2017, @05:26AM

    by anubi (2828) on Tuesday June 13 2017, @05:26AM (#524798) Journal

    Anyone know how this stacks up against the already well-known "Harvard" architecture, where code and data are in completely separate memory?

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 4, Insightful) by jdccdevel on Tuesday June 13 2017, @05:40AM (2 children)

    by jdccdevel (1329) on Tuesday June 13 2017, @05:40AM (#524801) Journal

    It sounds like they're talking about a more powerful, general-purpose version of GPGPU shaders (with their own cache and a bunch of scratch-pad RAM) that operate on a new kind of main memory highly optimized for random access.

    Given how fast GPGPUs are at solving some problems, and how well shaders scale, it doesn't surprise me at all that someone is looking into making a more powerful version.

    From the article it sounds like they're planning on using this for large data analysis, so it's almost certainly going to be paired with a standard CPU in some sort of specialized supercomputer cluster.

    On the lower end, it'll probably look like an evolved version of a GPGPU card in a standard computer.

    If it takes off, I wouldn't be surprised to find the AI people really, really interested in this processor for neural networks and computer vision type problems, amongst other things.
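
    If it does end up looking like a general-purpose shader array, the programming model might resemble today's data-parallel loops. A purely speculative C/OpenMP sketch (the function, names, and per-edge operation are hypothetical, not from the HIVE project):

        #include <stdint.h>
        #include <stddef.h>

        /* One "shader-like" worker per chunk of edges; each worker makes
           independent random reads into the shared value array. */
        void score_edges(const uint64_t *values, const uint32_t *src,
                         const uint32_t *dst, uint64_t *out, size_t n_edges) {
            #pragma omp parallel for schedule(static)
            for (long e = 0; e < (long)n_edges; e++)
                out[e] = values[src[e]] ^ values[dst[e]];  /* toy per-edge op */
        }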

    • (Score: 2) by c0lo on Tuesday June 13 2017, @06:50AM

      by c0lo (156) Subscriber Badge on Tuesday June 13 2017, @06:50AM (#524816) Journal

      > I wouldn't be surprised to find the AI people really, really interested in this processor for neural networks and computer vision

      > American Military Backs an Entirely New Kind of Processor

      Please wake me up when the military fronts those Neural Net CPUs [wikia.com].
      Better still... please don't.

      See also AI accelerator [wikipedia.org].

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by kaszz on Tuesday June 13 2017, @01:01PM

      by kaszz (4211) on Tuesday June 13 2017, @01:01PM (#524887) Journal

      It's a processor for using graph theory [wikipedia.org] to correlate data in big data?

      Any chance to do a free version of this?

  • (Score: 3, Informative) by FatPhil on Tuesday June 13 2017, @03:16PM (3 children)

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Tuesday June 13 2017, @03:16PM (#524948) Homepage
    > designed to access large chunks of sequential data and fill their caches as often as possible

    I disagree. The whole point of caches is that they're filled as infrequently as possible, as access within the cache is fast, but replacing data in the cache is expensive.

    "All programming is an exercise in caching" - Terje Mathijsen (who has forgotten more about programming than most of us have ever learnt)
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by ese002 on Tuesday June 13 2017, @11:16PM (2 children)

      by ese002 (5306) on Tuesday June 13 2017, @11:16PM (#525144)

      >> designed to access large chunks of sequential data and fill their caches as often as possible

      > I disagree. The whole point of caches is that they're filled as infrequently as possible, as access within the cache is fast, but replacing data in the cache is expensive.

      Not exactly. The point of caches is they should *miss* as infrequently as possible. That isn't the same as *filling* as infrequently as possible; the difference is prefetching. If the cache can be filled in the background before there is a need, the processor never experiences a miss and can run at the speed of the cache.

      This works pretty well for sequential access. Modern memory systems have high bandwidth but also high latency. If access is sequential, it is easy to predict which locations will be needed soon and have them in the caches before the CPU needs them. Sometimes the prediction will be wrong and those fetches will be wasted, but that is far better than waiting for the CPU to make its request and having it idle for dozens to hundreds of cycles while the line is fetched from DRAM.

      Unfortunately, prefetch engines are generally pretty dumb and tend to fail when paired with more complex, "random"-looking patterns. Not only are the right locations not loaded in time; all the wrong locations being fetched actually interfere with fetching the right ones.

      What it looks like the HIVE processor is doing is punting cache management to software. Having software manage caches is not as efficient as hardware prefetch, but the software can know what its access pattern is and thus do a better job of loading the right locations at the right time.

      An alternative, higher-performing solution is to build a more complex prefetch engine. If you could somehow teach the prefetch engine your access pattern, it might be able to keep the caches properly filled without losing any CPU cycles. It is not clear whether this project intends to do anything like that.
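
      On conventional hardware, the nearest analogue to "software knows its own access pattern" is explicit software prefetching. A rough C sketch of the idea; __builtin_prefetch is a real GCC/Clang builtin, but the prefetch distance of 8 is an illustrative guess, not a tuned value:

          #include <stdint.h>
          #include <stddef.h>

          #define AHEAD 8  /* how many iterations ahead to prefetch */

          /* Sum data[] through an arbitrary index array: the pattern looks
             random to hardware, but the code knows it in advance and can
             request each cache line before it is actually needed. */
          uint64_t sum_indirect(const uint64_t *data, const size_t *idx, size_t n) {
              uint64_t sum = 0;
              for (size_t i = 0; i < n; i++) {
                  if (i + AHEAD < n)
                      __builtin_prefetch(&data[idx[i + AHEAD]], 0, 1); /* read, low reuse */
                  sum += data[idx[i]];  /* the random 8-byte load itself */
              }
              return sum;
          }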

      • (Score: 2) by FatPhil on Wednesday June 14 2017, @01:12PM

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Wednesday June 14 2017, @01:12PM (#525403) Homepage
        That's not a cache. I was working on multi-core DSPs where each core had its own local memory with immediate access, which could be block-filled and block-flushed quickly, long before caches were common on desktop machines. But that's not a cache; it's way cheaper than a cache, since it doesn't need to predict access or handle unpredictable access patterns, and it's designed for stream/block processing. I don't think these two memory-access philosophies should be conflated with each other.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by FatPhil on Wednesday June 14 2017, @01:19PM

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Wednesday June 14 2017, @01:19PM (#525405) Homepage
        >> The whole point of caches is that they're filled as infrequently as possible, ...

        > Not exactly. The point of caches is they should *miss* as infrequently as possible.

        Every miss leads to a fill, and every fill arises because of a miss. There is no distinction between a miss and a fill except the time delay between the two; they are just two ends of the same condition.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves