
SoylentNews is people

posted by LaminatorX on Tuesday April 14 2015, @01:54PM   Printer-friendly
from the sound-of-one-hand-clapping dept.

Fudzilla have 'obtained' a slide showing details of a forthcoming APU from AMD based on their new "Zen" architecture.

The highest-end compute HSA part has up to 16 Zen x86 cores and supports 32 threads, or two threads per core. This is a simultaneous multithreading approach Intel has used for years, and it seems to be working just fine. This will be the first exciting processor from the house of AMD in the server / HSA market in years, and if AMD delivers it on time it could be a big break for the company.

Each Zen core gets 512 KB of L2 cache, and each cluster of four Zen cores shares 8MB of L3 cache. For a 16-core, 32-thread next-generation Zen-based x86 processor, the total amount of L2 cache comes to a whopping 8MB, backed by 32MB of L3 cache.
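The cache totals follow directly from the per-core and per-cluster figures on the slide; a quick sketch checks the arithmetic:

```python
# Cache totals for the rumored 16-core Zen APU, per the leaked slide.
CORES = 16
L2_PER_CORE_KB = 512          # 512 KB of private L2 per core
CORES_PER_CLUSTER = 4         # one L3 slice per four-core cluster
L3_PER_CLUSTER_MB = 8         # 8 MB of L3 shared within each cluster

total_l2_mb = CORES * L2_PER_CORE_KB / 1024
total_l3_mb = (CORES // CORES_PER_CLUSTER) * L3_PER_CLUSTER_MB

print(total_l2_mb)  # 8.0  -> the "whopping 8MB" of L2
print(total_l3_mb)  # 32   -> "backed by 32MB of L3 cache"
```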

This new APU also comes with the Greenland Graphics and Multimedia Engine, paired with HBM memory on the side. The specs we saw indicate up to 16GB of HBM at 512GB/s packed on the interposer. This is definitely a lot of memory for an APU GPU, and the engine also offers 1/2-rate double-precision compute, enhanced ECC and RAS, and HSA support.

The new APU sports quad-channel DDR4 support, with up to 256GB per channel at speeds of up to 3.2GHz (DDR4-3200). No information yet on which processor socket this APU will use, but it's safe to assume the DDR4 support alone will render it incompatible with all of AMD's current motherboards. Support is also included for secure boot and AMD's encryption co-processor.
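For scale, the theoretical peak bandwidth of that quad-channel DDR4-3200 interface can be worked out with the standard DDR arithmetic (assuming the usual 64-bit-wide channels; the slide does not state channel width):

```python
# Theoretical peak bandwidth of the rumored quad-channel DDR4-3200 interface.
CHANNELS = 4
TRANSFERS_PER_SEC = 3.2e9     # "3.2GHz" on the slide = 3200 MT/s
BYTES_PER_TRANSFER = 8        # assumed 64-bit channel = 8 bytes/transfer

peak_gb_s = CHANNELS * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
print(peak_gb_s)  # 102.4 -- about a fifth of the HBM's quoted 512GB/s
```

That gap between system DDR4 and on-interposer HBM is presumably why the GPU side gets its own memory pool.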

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by VortexCortex (4067) on Tuesday April 14 2015, @11:42PM (#170635)

    Will it maintain your motorcycle?

    If you enjoyed this subject/response juxtaposition you may enjoy this Infographic: Computing Motorcycle Analogy. [imgur.com]

    Both TFS and this image were located and recommended to me by my MIIR (Machine Intelligence for Indexing and Recommendation) experiment. This AI engine currently crunches 80% of its processing batches on two AMD A10-7850K APUs, both with Dual Graphics. I've been impressed and plan on upgrading to newer APU models, but your mileage may vary, as this application differs from most (a combination of highly concurrent and non-concurrent processes).

    Neural nets are embarrassingly parallel, so fairly large ones run great as GPU shaders; meanwhile, compute-heavy, memory-hard tasks (such as updating sets of hash tables based on their buckets' contents) typically depend more on single-thread clock speed than on parallelization. My needs fit this to a T. E.g.: a quad core at 3GHz per core (12GHz combined) is faster than 16 cores at 2GHz each (32GHz combined) in such a serial use case.

    The low-power idle and high CPU-to-GPU bandwidth of such a "dual-GPU" (APU + GPU card) setup yields one of the most economical bangs per buck I've found for my "bursty" high CPU <-> GPU applications. The bottleneck for this AI is its memory I/O. Versus budgeting with Intel / Nvidia components, I got about twice the compute for the money with AMD; for purely compute-bound projects I've found the opposite is true.

    The answer is thus: Yes, if you give it the proper hardware and train it as you would any other offspring.
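The quad-core-vs-16-core comparison in the comment above is essentially Amdahl's law. A minimal sketch, using a hypothetical 90%-serial workload split (the fraction is an assumption for illustration, not the commenter's figure):

```python
# Amdahl's-law sketch of the commenter's point: on a serial-heavy job,
# per-core clock speed beats aggregate core count.
def speedup(parallel_fraction, cores):
    """Amdahl's law: speedup relative to one core of the same machine."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

P = 0.10  # assumed: only 10% of the task parallelizes (hypothetical split)

# Effective throughput ~ per-core clock * Amdahl speedup over one core.
quad_3ghz = 3.0 * speedup(P, 4)      # 4 cores @ 3 GHz each
sixteen_2ghz = 2.0 * speedup(P, 16)  # 16 cores @ 2 GHz each

print(quad_3ghz > sixteen_2ghz)  # True: the faster quad wins this workload
```

With a mostly parallel workload (P near 1) the comparison flips, which matches the commenter's closing caveat about compute-bound projects.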

    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2