
posted by janrinok on Tuesday August 11 2015, @06:08PM   Printer-friendly
from the biggish-iron dept.

The Platform reports on IBM's updated POWER CPU roadmap. Next year's POWER8+ will add Nvidia's NVLink interconnect to boost bandwidth. POWER9 will move down to a 14nm process node around 2017, and POWER10 will move to 10nm around 2020. A 7nm POWER chip would likely appear around 2023 at the earliest:

The interesting thing about these roadmaps is that the Power8+ processor will come out next year and will have NVLink high-bandwidth interconnects just like the forthcoming "Pascal" GP100 Tesla coprocessor from Nvidia. With NVLink, Nvidia is putting up to four 20 GB/sec ports onto the GPU coprocessor to speed up data transfers between the high bandwidth memory on the GPU cards and to improve the performance of virtual addressing across those devices. With the addition of NVLink ports on the Power8+ processor, those creating hybrid systems will be able to implement virtual memory between the CPU and GPU in an NVLink cluster without having to resort to IBM's Coherent Accelerator Processor Interface (CAPI), which debuted with the Power8 chip last year and which offers similar coherence across a modified PCI-Express link.
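
To make the shared CPU–GPU virtual memory concrete for a programmer, here is a minimal CUDA sketch using managed memory (cudaMallocManaged, available since CUDA 6): a single pointer is valid on both the host and the device, so no explicit staging copies are needed. The API itself works over plain PCI-Express; coherent links like NVLink are there to make the page traffic behind that pointer much faster.

    // Minimal managed-memory sketch: one allocation, one virtual
    // address, touched by both the CPU and the GPU. Compile with nvcc.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *x, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= k;                     // GPU writes through the shared pointer
    }

    int main() {
        const int n = 1 << 20;
        float *x;
        cudaMallocManaged(&x, n * sizeof(float)); // visible to CPU and GPU alike
        for (int i = 0; i < n; i++) x[i] = 1.0f;  // CPU initializes the same pages
        scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
        cudaDeviceSynchronize();                  // wait before the CPU reads results
        printf("x[0] = %f\n", x[0]);              // prints 2.000000
        cudaFree(x);
        return 0;
    }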

[...] We think that IBM could be adding some form of high bandwidth memory to the Power9 chip package, particularly for variants aimed at HPC and hyperscale workloads that are not intended for multi-processor systems. But IBM has said nothing about its plans to adopt 3D stacked memory on its processors thus far, even though it has done plenty of fundamental research in this area. We also wonder if IBM will use the process shrink to lower the power consumption of the Power9 chips and perhaps even simplify the cores, now that it has officially designated GPUs and FPGAs as coprocessors for the Power line. (Why add vector units if you want to offload to GPUs and FPGAs?)
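
The offload argument in that parenthetical can be made concrete. Below is a hedged sketch (standard CUDA, nothing IBM-specific) of the same SAXPY loop written twice: once as the kind of loop a CPU vector unit such as POWER's VSX would chew through, and once as a GPU kernel where each SIMT thread takes one lane. If this is where a system's floating-point work goes, wide CPU vector units earn less of their die area.

    #include <cuda_runtime.h>

    // CPU version: a compiler would map this loop onto vector lanes.
    void saxpy_cpu(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; i++) y[i] = a * x[i] + y[i];
    }

    // GPU version: thousands of lightweight threads replace the lanes.
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float)); // managed memory keeps the demo short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();                  // y[i] is now 5.0f everywhere
        cudaFree(x); cudaFree(y);
        return 0;
    }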

What is new and interesting on the roadmap is confirmation that IBM is working on a Power10 processor, which is slated for around 2020 or so and which will be based on the 10 nanometer process under development at Globalfoundries. With the Power10, as with the Power7, Power8, and Power9 before it, IBM is changing the chip microarchitecture and the chip manufacturing process at the same time. The roadmap does not show a Power9+ or Power10+ kicker, but both could come to pass if the market demands some tweaks to the microarchitecture roughly halfway through the three-year gaps between Power generations.

IBM's POWER9 chips and Nvidia's Volta GPUs will be featured in Summit and Sierra, two upcoming U.S. Department of Energy supercomputers that will reach 100-300 petaflops.


Original Submission

 
  • (Score: 2) by bzipitidoo (4388) on Tuesday August 11 2015, @09:32PM (#221461) Journal

    In a server, what's a Graphics Processing Unit for? Not graphics. It's for massively parallel processing; the term is becoming a legacy of desktop computing. Would an end user even buy these machines? Probably not. There's no reason to invest big money in hardware designed for server and networking throughput when a single 100Mb Ethernet port is more than enough for a desktop, particularly one connected to the Internet via typical crappy residential US broadband. But if they did, could they get a graphics-intensive game up and running? I'd presume MS is still dominant enough that IBM wouldn't dare ship those servers without working Windows drivers, except that MS dropped support for PowerPC. There are a few graphics-intensive games for Linux, and there's CAD, solid modeling, and the like.

    So maybe those cards should be called Parallel Processing Units. Then one starts to wonder: what is the CPU for? Modern systems offload a great deal of work onto subsystems. The SCSI, IDE, SATA, or SAS hardware runs the drives; an audio processor probably handles sound, unless servers are deemed not to need it; and yet more specialized hardware handles the bus, doing DMA so that data moves faster than the CPU could manage if it had to copy memory itself, a few words at a time.

    The CPU gets demoted to more of a management role, using its multitasking and semaphore abilities to run the OS while the PPUs do the bulk of the application work. But then, given that the PPU is parallel, shouldn't it also have specialized multitasking hardware? Maybe the CPU isn't needed for much more than running a hypervisor. Or perhaps its strong point is executing single-threaded code, the stuff that can't easily be parallelized, if at all. With multiple cores, the CPU can do MIMD (multiple instruction, multiple data) while the PPUs can only do SIMD? If PPUs keep gaining capabilities, they might make the CPU obsolete and become the CPU themselves. It will be interesting to see where computers go next.
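
    To make the SIMD point concrete, here's a toy CUDA kernel (a generic sketch, not tied to any particular product): threads in a warp share one instruction stream, so when lanes disagree at a branch the warp executes both arms with inactive lanes masked off, whereas a multicore CPU runs genuinely independent instruction streams (MIMD).

        // Warp divergence in one picture: lanes that take different
        // branches serialize, because a warp is SIMD-like (SIMT) under
        // the hood. Kernel-only sketch; launch it like any CUDA kernel.
        __global__ void divergent(const int *flag, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            if (flag[i])
                out[i] = out[i] * 2.0f;  // arm A: flag==0 lanes sit masked
            else
                out[i] = out[i] + 1.0f;  // arm B: flag!=0 lanes sit masked
        }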

  • (Score: 2) by mhajicek (51) on Wednesday August 12 2015, @03:26AM (#221552)

    AFAIK, GPUs are limited in the kinds of algorithms they can run efficiently. What I could use is a multi-socket mobo that supports dissimilar CPUs, so I could have a fast one with a few cores and one or more slower ones with many cores each.

    --
    The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 2) by LoRdTAW (3755) on Wednesday August 12 2015, @01:33PM (#221672) Journal

      What I could use is a multi-socket mobo that supports dissimilar CPUs, so I could have a fast one with a few cores and one or more slower ones with many cores each.

      You just described a motherboard with a quad-core CPU and one or more graphics cards, each with one or more GPUs.

      GPU acceleration pretty much means running a small "kernel" on the GPU, which is fed data, processes it, and returns the results. The current issue is getting that data from main memory to the GPU memory and back again. APUs, HBM, NVLink, and other technologies are there to reduce the latency and bandwidth problems by getting the GPU closer to main memory.
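
      In code, that round trip is the classic copy-in / compute / copy-out pattern. A minimal CUDA sketch is below; the two cudaMemcpy calls are exactly the bus traffic those technologies try to shrink.

          #include <cstdio>
          #include <cstdlib>
          #include <cuda_runtime.h>

          __global__ void doubleAll(float *d, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) d[i] *= 2.0f;             // the small "kernel"
          }

          int main() {
              const int n = 1 << 20;
              const size_t bytes = n * sizeof(float);
              float *h = (float *)malloc(bytes);   // buffer in main memory
              for (int i = 0; i < n; i++) h[i] = 1.0f;

              float *d;
              cudaMalloc(&d, bytes);               // buffer in GPU memory
              cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);  // feed it data
              doubleAll<<<(n + 255) / 256, 256>>>(d, n);        // process it
              cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);  // return it
              printf("h[0] = %f\n", h[0]);         // prints 2.000000

              cudaFree(d);
              free(h);
              return 0;
          }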

      • (Score: 2) by mhajicek (51) on Wednesday August 12 2015, @04:22PM (#221744)

        If I were writing the software, sure, that could work. Since I'm "just" a user of the CAD/CAM software, I have no control over where the code executes, and it's currently not written to use CUDA or anything like that. If you can tell me a way to run a VM on the GPU, I'd be ever so grateful.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 2) by LoRdTAW (3755) on Wednesday August 12 2015, @05:24PM (#221771) Journal

          There's no way to run a VM on any current GPU (if that question was serious).

          I'd complain to your CAD vendor and demand CUDA, or better yet OpenCL, acceleration of your CAD package. Then again, they might already offer it, but only through a very costly upgrade or a rendering plugin.