posted by on Monday May 01 2017, @12:01PM
from the faster-is-better dept.

SK Hynix is almost ready to produce GDDR6 memory with higher than expected per-pin bandwidth:

In a surprising move, SK Hynix has announced its first memory chips based on the yet-unpublished GDDR6 standard. The new DRAM devices for video cards have a capacity of 8 Gb and run at a 16 Gbps per-pin data rate, which is significantly higher than both standard GDDR5 and Micron's unique GDDR5X format. SK Hynix plans to produce its GDDR6 ICs in volume by early 2018.

GDDR5 memory has been used for top-of-the-range video cards since summer 2008, nearly nine years now. Throughout its active lifespan, GDDR5 more than doubled its data rate, from 3.6 Gbps to 9 Gbps, while per-chip capacities increased 16 times, from 512 Mb to 8 Gb. In fact, numerous high-end graphics cards, such as NVIDIA's GeForce GTX 1060 and 1070, still rely on GDDR5 technology, which is not going anywhere even after the launch of Micron's GDDR5X with up to 12 Gbps per-pin data rates in 2016. As it appears, GDDR6 will be used for high-end graphics cards starting in 2018, just two years after the introduction of GDDR5X.
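For scale, a back-of-the-envelope sketch in C (the 32-bit per-chip interface and the 256-bit card bus are assumptions based on typical GDDR5 configurations; the announcement itself only gives the per-pin rate):

    /* Rough GDDR6 bandwidth arithmetic. Assumed: 32 data pins per
     * chip and a 256-bit card bus, as is typical for GDDR5 cards. */
    #include <stdio.h>

    int main(void) {
        double gbps_per_pin = 16.0;  /* announced per-pin data rate */
        int pins_per_chip   = 32;    /* assumption (GDDR5-style)    */
        int bus_width       = 256;   /* assumption (high-end card)  */

        /* divide by 8 to convert gigabits to gigabytes */
        double chip_gbs = gbps_per_pin * pins_per_chip / 8.0;
        double card_gbs = gbps_per_pin * bus_width / 8.0;

        printf("per chip: %.0f GB/s, per 256-bit card: %.0f GB/s\n",
               chip_gbs, card_gbs);  /* 64 GB/s and 512 GB/s */
        return 0;
    }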

Previously: Samsung Announces Mass Production of HBM2 DRAM
DDR5 Standard to be Finalized by JEDEC in 2018


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by fyngyrz on Monday May 01 2017, @01:07PM (14 children)

    by fyngyrz (6567) on Monday May 01 2017, @01:07PM (#502246) Journal

    My first reaction was "graphics cards? Why can't I have this stuff on the CPU?"

    CPUs have been faster than main memory for years. Without main memory that's as fast as the instruction / data cycle time, things are slower than they otherwise could be for that CPU. The problems, I suppose, would be related to getting the info in and out of the CPU at a speed to match the new RAM, and a bus to carry it... but this would certainly be useful if it could be done and commercialized affordably.

  • (Score: 2) by Runaway1956 on Monday May 01 2017, @02:20PM (11 children)

    by Runaway1956 (2926) Subscriber Badge on Monday May 01 2017, @02:20PM (#502269) Journal

    In my experience, memory has been "fast enough" for a long time now. Given sufficient memory, an ancient AMD K6 450 MHz CPU can actually run Windows XP fairly well. That is one gig of PC-100, in this case. With limited memory, a 1 GHz Athlon struggled to run basically the same installation.

    My computer at home, which is basically obsolete, has 24 gig of memory. The computer at work is quite modern and up to date, but it drags ass all day, every day. It only has 1 gig of memory, and runs Windows 7, with a buttload of background tasks. At home, I never wait for anything, except internet. At work, waiting for anything to load is painful. Microsoft needs to shitcan the paging system, and make it clear that computers require 8 gig of memory to run today. In the not-distant future, they're going to need 16 gig minimum, because everything just grows larger and more bloated all the time. Within a decade, the minimum may need to rise to 64 gig if you want to run a Microsoft OS. (I expect Linux to stay reasonably responsive with far less memory, just because Linux still runs well on old hardware right now.)

    Given the opportunity, yes, I'll take faster memory, faster CPU, faster everything. But, more than anything, I want a machine with ADEQUATE MEMORY! Ideally, your entire operating system, as well as most of your apps, can reside in memory and never make use of virtual memory. The slowest memory on the market today is more than fast enough to make almost any computer sing, if only enough memory is installed.
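    For what it's worth, Linux already lets a single process opt out of paging. A minimal C sketch using mlockall(), which pins all current and future pages in RAM (this is a Linux-specific illustration of the idea, not anything Microsoft offers; it needs CAP_IPC_LOCK or a generous RLIMIT_MEMLOCK):

        /* Sketch: pin a process's pages so the kernel never swaps them. */
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void) {
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");    /* usually EPERM or ENOMEM */
                return 1;
            }
            puts("all current and future pages are locked in RAM");
            /* ... the rest of the program now runs without paging ... */
            return 0;
        }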

    • (Score: 3, Interesting) by takyon on Monday May 01 2017, @02:57PM (3 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 01 2017, @02:57PM (#502283) Journal

      The computer at work is quite modern and up to date, but it drags ass all day, every day. It only has 1 gig of memory, and runs Windows 7, with a buttload of background tasks.

      How is a machine with only 1 GB of RAM considered "quite modern"? I would struggle to find anything under 4 GB on the market today, and what little there is would be landfill laptops and Chromebooks with an absolute minimum of 2 GB of RAM.

      You can probably find someone with a free 4 GB DDR3 module. If not, buy it yourself for $20, put it in the work computer, and take it away when you leave. Nobody will notice and you can put it in your museum later.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by VLM on Monday May 01 2017, @03:09PM

        by VLM (445) on Monday May 01 2017, @03:09PM (#502288)

        Aside from all this desktop stuff, in the server and virtualization host market, nobody ever said their memory bus was too fast.

        From an engineering standpoint it should be possible to make optimizations such that sequential reading is the default and fast, at the expense of totally random access. Remember row and column strobes for DRAM addressing in the '80s or so? You could extend that concept way beyond 2 dimensions (not physically, of course) such that sequential access would usually require 1, or sometimes 2, address-segment writes per cycle for a graphics display, but totally randomly smacking some random address would take like 8 address-segment loads. Imagine 128 parallel data lines, 8 bits of address, and a whole bunch of segment strobes to load up the address 8 bits at a time. I wonder if there's also some weird dual-porting stuff going on such that it wouldn't really be your first choice for a CPU.
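        A toy cost model of that scheme, sketched in C (all numbers hypothetical: a 64-bit address split into eight 8-bit segments, paying one "segment load" per segment that changed since the previous access):

            /* Toy cost model of multi-segment DRAM addressing. */
            #include <stdio.h>
            #include <stdlib.h>

            #define SEGS 8   /* hypothetical: 8 x 8-bit address segments */

            static int seg_loads(unsigned long long prev, unsigned long long next) {
                int loads = 0;
                for (int i = 0; i < SEGS; i++)
                    if (((prev >> (8 * i)) & 0xff) != ((next >> (8 * i)) & 0xff))
                        loads++;             /* this segment must be re-strobed */
                return loads;
            }

            int main(void) {
                unsigned long long addr = 0;
                long seq = 0, rnd = 0;
                srand(1);
                for (long n = 0; n < 1000000; n++) {   /* sequential walk */
                    seq += seg_loads(addr, addr + 1);
                    addr++;
                }
                addr = 0;
                for (long n = 0; n < 1000000; n++) {   /* random smacking */
                    unsigned long long next =
                        ((unsigned long long)rand() << 32) ^ (unsigned)rand();
                    rnd += seg_loads(addr, next);
                    addr = next;
                }
                printf("avg segment loads: sequential %.2f, random %.2f\n",
                       seq / 1e6, rnd / 1e6);   /* about 1 vs. about 8 */
                return 0;
            }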

      • (Score: 2) by Runaway1956 on Monday May 01 2017, @04:47PM

        by Runaway1956 (2926) Subscriber Badge on Monday May 01 2017, @04:47PM (#502347) Journal

        It belongs to contractors, who have spec'd the machines to their own needs/wants. Seriously, machines that are - ohhhh - I guess they are three years old now - with decent CPUs, smallish hard drives, no optical drives, and - only ONE gig of memory. Don't ask me why, or how. They were built cheap, and that's how they run. Our own IT people weren't smart enough to realize they were getting ripped off. The machines in the offices aren't as bad as our machines in the work spaces, but they are still pretty crappy.

      • (Score: 2) by JoeMerchant on Monday May 01 2017, @10:15PM

        by JoeMerchant (3937) on Monday May 01 2017, @10:15PM (#502537)

        This ^^^ - my father bought me a cast-off iMac from his University for $50, and the only thing really "wrong" with it was that it only had 1GB of RAM - and it was upgrade-capped to 2GB by Apple... We spent another $40 (years ago) to get 2GB of RAM for it and it became downright usable - for single users doing single tasks.

        --
        🌻🌻 [google.com]
    • (Score: 2) by LoRdTAW on Monday May 01 2017, @03:15PM

      by LoRdTAW (3755) on Monday May 01 2017, @03:15PM (#502292) Journal

      In my experience, memory has been "fast enough" for a long time now.

      For the most part, it certainly is. Basic desktop usage really doesn't push memory bandwidth so much as memory capacity (code bloat). We also have plenty of CPU. Even a modest dual-core i3 can handle most desktop use, including low-end gaming.

      Microsoft needs to shitcan the paging system, and make it clear that computers require 8 gig of memory to run today.

      It's not Microsoft's fault. They publish a minimum as well as an optimal configuration spec. The problem lies with the OEMs, who make tons of money upselling you a few GB of RAM.

      Dell loves pulling this shit by offering dozens of different models which are nothing more than the same PC with multiple fixed configurations. "Oh, you want more than 8GB RAM *AND* a Core i3 in your OptiPlex? Too bad. Buy the i5 model for another $150 AND pay another $100 for the 8GB model. Or buy that nearly $900 i3 that's about $400 over the base i3." What I wind up doing is buying the base i3 model with 4GB, looking up the part number of the OEM RAM module, and buying a matching 4GB module for $30-50. Or I buy a compatible 8GB kit from Crucial and use the extra 4GB OEM stick in another desktop to double that one to 8GB as well. You can save around $200+ per workstation doing this. Of course, that's fine for small shops like mine with fewer than 20 desktops. If you order by the tens or hundreds, it's less practical and you wind up going with the upsell to eliminate the labour.

      Dell (along with many others) was notorious for selling low-end home PCs with barely enough RAM to boot the damn OS, as was the case with my friend's P4 Dell in the mid 00's with 256MB RAM and XP Home. Ran like shit. Opening a web browser was a 30+ second ordeal as the disk churned and burned, shuffling stuff out of RAM into the page file. I had him buy a compatible 1GB kit and it was like a whole new computer. And 1GB should have been the XP minimum, not 256 like MS said.

    • (Score: 2) by kaszz on Monday May 01 2017, @05:01PM (4 children)

      by kaszz (4211) on Monday May 01 2017, @05:01PM (#502355) Journal

      The paging system is sometimes what makes the difference between being able to use a program at all and not. If you install enough primary memory it will not be a problem for you, so why take the option away from other people?

      However, if people did shitcan Microsoft software, a lot of memory problems would not occur. Besides, all these requirements for gigs of primary memory are an indicator of really bad programming, with some exceptions like CAD etc.

      • (Score: 2) by Runaway1956 on Monday May 01 2017, @05:54PM (3 children)

        by Runaway1956 (2926) Subscriber Badge on Monday May 01 2017, @05:54PM (#502412) Journal

        It almost seems like an arms race. Memory gets cheaper, and more plentiful, but shoddy programmers seem to be determined to squander all of that memory.

        • (Score: 2) by kaszz on Monday May 01 2017, @06:39PM (2 children)

          by kaszz (4211) on Monday May 01 2017, @06:39PM (#502443) Journal

          I have noticed the same trend too. But there are ways to avoid it: open source OS, open source applications.
          And I suspect C++ and other languages have a lot to do with this. On top of that, people who have no business designing software are being shepherded into "programming".

          The CPU trend will be interesting, though: clocks seem to top out around 4.5 GHz. So programmers will have to be smarter about that resource or see the competition run them over.

          • (Score: 0) by Anonymous Coward on Tuesday May 02 2017, @02:36AM (1 child)

            by Anonymous Coward on Tuesday May 02 2017, @02:36AM (#502623)

            And I suspect C++ and other languages have a lot to do with [bloated programs]. On top of that, people who have no business designing software are being shepherded into "programming".

            I'm not a programmer, so keeping that in mind: is there something wrong with C++ or the way it's being used? What are the better alternatives?

            • (Score: 2) by kaszz on Tuesday May 02 2017, @03:47AM

              by kaszz (4211) on Tuesday May 02 2017, @03:47AM (#502652) Journal

              When you write software in C++ (object-oriented), it will too often implicitly suck in a lot of stuff; the namespace can get overloaded, and some programmers like to allocate, but free() is less popular.

              Depending on task, use C.
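              To make the contrast concrete, a trivial C sketch (nothing more): every allocation and its matching release is explicit at the call site, and nothing gets pulled in behind your back.

                /* Trivial sketch: explicit, visible allocation in C. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <string.h>

                int main(void) {
                    char *buf = malloc(64);   /* explicit allocation */
                    if (buf == NULL)
                        return 1;
                    strcpy(buf, "no hidden allocations here");
                    puts(buf);
                    free(buf);                /* explicit, matching release */
                    return 0;
                }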

    • (Score: 2) by shortscreen on Monday May 01 2017, @08:22PM

      by shortscreen (2252) on Monday May 01 2017, @08:22PM (#502493) Journal

      Pentium 3s were terribly memory-speed limited. In some cases there was hardly any difference between a P3 at 800MHz or 1.4GHz if they both had PC-133 SDRAM. This was when Intel started building their shitty graphics into the motherboard chipset, so refreshing the display would eat up precious memory bandwidth. Changing the screen mode to a lower resolution and color depth to reduce bus contention would speed things up measurably. Check out these benchmark results from Super PI:

      Pentium 3 Coppermine 933MHz (discrete video card) - 2m17s
      Pentium 3 Tualatin 800MHz (i830 graphics) - 2m35s
      Pentium 3 Tualatin 1.33GHz (i830 graphics) - 2m20s
      1.33GHz with lower latency 3-2-2 RAM - 2m06s
      1.33GHz with screen mode set to 800x600 16-bit - 1m57s

      Athlon XPs were also somewhat limited. Although they used DDR, speeds eventually got up to 2.3GHz or so on a 200MHz bus, so the latency in CPU cycles got to be even worse than the Pentium 3's (but with a 64-byte line size instead of 32).

      Athlon 64s and Pentium Ms greatly reduced this bottleneck by lowering latency and improving cache hit rates, respectively.

      The weird thing is that the MOVSD instruction has never been optimized enough for a simple block copy to achieve anything close to theoretical memory throughput. It's always limited by the CPU instead (and read-before-write memory access patterns). I guess new CPUs have a fancier way to do block copies, although I have not tried it.
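      One way to see how close a block copy gets to theoretical throughput is a plain memcpy micro-benchmark, sketched below in C (buffer size and repetition count are arbitrary; on recent CPUs the libc memcpy may use the optimized rep movsb path rather than MOVSD):

          /* Sketch: measure effective block-copy bandwidth, to
           * compare against the platform's theoretical figure. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <time.h>

          #define SIZE (64 * 1024 * 1024)   /* 64 MiB, far larger than cache */
          #define REPS 20

          int main(void) {
              char *src = malloc(SIZE), *dst = malloc(SIZE);
              if (!src || !dst) return 1;
              memset(src, 1, SIZE);         /* touch pages so they're mapped */
              memset(dst, 0, SIZE);

              clock_t t0 = clock();
              for (int i = 0; i < REPS; i++)
                  memcpy(dst, src, SIZE);
              double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

              /* counts SIZE read + SIZE written per pass; the hardware may
               * add a read-for-ownership on dst, as the comment above notes */
              printf("~%.1f GB/s effective copy bandwidth\n",
                     2.0 * SIZE * REPS / secs / 1e9);
              free(src); free(dst);
              return 0;
          }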

  • (Score: 3, Interesting) by sjames on Monday May 01 2017, @03:21PM

    by sjames (2882) on Monday May 01 2017, @03:21PM (#502296) Journal

    You can, just as soon as the lumbering behemoths get around to producing CPUs that support it. It makes sense for Hynix to target GPUs first, as GPU makers are likely to adopt it faster.

    Beyond that, though, for a new memory tech: GPUs used for actual graphics tend to be far more tolerant of single-bit errors than CPUs. That gives a little time to tune the manufacturing and get the kinks out.

  • (Score: 2) by kaszz on Monday May 01 2017, @04:55PM

    by kaszz (4211) on Monday May 01 2017, @04:55PM (#502352) Journal

    The problem may be related to access patterns. If the CPU's access pattern is random enough, then a memory that can only deliver long sequential accesses will not improve anything. A pointer-chasing sketch makes this measurable, as shown below.
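    A C sketch (sizes arbitrary): walk the same array once in sequential order and once through one big random cycle; the random walk is bound by latency that extra sequential bandwidth cannot hide.

        /* Sequential walk vs. pointer-chase through a random cycle. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 24)          /* 16M entries = 128 MiB on LP64 */

        static volatile size_t sink; /* defeats dead-code elimination */

        static double walk(const size_t *next) {
            clock_t t0 = clock();
            size_t i = 0;
            for (size_t n = 0; n < N; n++)
                i = next[i];         /* each load depends on the last */
            sink = i;
            return (double)(clock() - t0) / CLOCKS_PER_SEC;
        }

        int main(void) {
            size_t *next = malloc((size_t)N * sizeof *next);
            if (!next) return 1;

            for (size_t i = 0; i < N; i++)   /* sequential ring: i -> i+1 */
                next[i] = (i + 1) % N;
            printf("sequential: %.2f s\n", walk(next));

            for (size_t i = 0; i < N; i++)   /* rebuild as identity */
                next[i] = i;
            srand(1);
            for (size_t i = N - 1; i > 0; i--) {  /* Sattolo: one big cycle */
                size_t j = (size_t)rand() % i;
                size_t t = next[i]; next[i] = next[j]; next[j] = t;
            }
            printf("random:     %.2f s\n", walk(next));
            free(next);
            return 0;
        }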