posted by Fnord666 on Wednesday June 26 2019, @09:59PM   Printer-friendly
from the marketing-dimension dept.

SK Hynix Starts Production of 128-Layer 4D NAND, 176-Layer Being Developed

SK Hynix has announced it has finished development of its 128-layer 1 terabit 3D TLC NAND flash. The new memory features the company's charge trap flash (CTF) design, along with the periphery under cells (PUC) architecture, announced some time ago, that the company calls '4D' NAND. The new 128-layer TLC NAND flash devices will ship to interested parties in the second half of this year, and SK Hynix intends to offer products based on the new chips in 2020.

[...] In the first half of next year SK Hynix promises to roll out its UFS 3.1 storage products based on the new 1 Tb devices. The company plans to offer 1 TB UFS 3.1 chips that will consume up to 20% less [power] when compared to similar products that use 512 Gb ICs.

[...] String stacking technology, as well as the multi-stacked design, will enable SK Hynix to keep increasing the number of layers. SK Hynix says that it is currently developing 176-layer 4D NAND flash, but does not disclose when it is expected to become available.

Previously: "String-Stacking" Being Developed to Enable 3D NAND With More Than 100 Layers
SK Hynix Developing 96 and 128-Layer TLC 3D NAND

Related: Expect 20-30% Cheaper NAND in Late 2018
Micron: 96-Layer 3D NAND Coming, 3D XPoint Sales Disappoint
Western Digital Samples 96-Layer 3D QLC NAND with 1.33 Tb Per Die
Samsung Shares Plans for 96-Layer TLC NAND, QLC NAND, and 2nd-Generation "Z-NAND"


Original Submission

 
  • (Score: 2) by JoeMerchant on Thursday June 27 2019, @02:45AM (7 children)

    by JoeMerchant (3937) on Thursday June 27 2019, @02:45AM (#860356)

    One use case is raw VR video

    Sure, for that barely perceptible quality improvement, let's balloon the storage size 10x.

    I'll get more excited about applications like this when the bandwidth in and out of the massive storage devices improves by a factor of 10x or more. It's already distressing to me how long it takes to mirror 1.5TB of data from one drive to another.

    Video stored at a data rate of ~1 GB per hour is pretty good already; any more resolution would be like jacking up still photos from 6 MP to 60 MP - you're not going to enjoy or even notice it unless you're zooming in on a subset of the whole field of view, and going from 95%-quality JPEG to full raw is more subtle still.
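
    For rough numbers, here's a back-of-envelope sketch in Python - the drive throughputs and the "10x raw VR" bitrate are assumptions of mine, not measured figures:

        # Rough arithmetic only; drive speeds are assumed, not measured.
        TB = 1e12   # decimal terabyte, in bytes
        GB = 1e9

        def copy_hours(size_bytes, mb_per_sec):
            """Hours to copy size_bytes at a sustained mb_per_sec."""
            return size_bytes / (mb_per_sec * 1e6) / 3600

        print(f"mirror 1.5 TB @ ~150 MB/s (spinning disk): {copy_hours(1.5 * TB, 150):.1f} h")
        print(f"mirror 1.5 TB @ ~500 MB/s (SATA SSD):      {copy_hours(1.5 * TB, 500):.1f} h")

        # Hours of footage per terabyte at ~1 GB/h vs. a 10x "raw VR" bitrate
        for gb_per_hour in (1, 10):
            print(f"{gb_per_hour:>2} GB/h -> {TB / (gb_per_hour * GB):,.0f} hours per TB")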

    --
    🌻🌻 [google.com]
  • (Score: 2) by takyon on Thursday June 27 2019, @03:06AM (6 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday June 27 2019, @03:06AM (#860366) Journal

    True. We're gonna need post-NAND universal memory to keep sustained write speeds up; QLC NAND kills that. Massively layered SLC NAND could be a good stopgap, but it seems like most consumer drives *must* use TLC or QLC.

    I give justification for absurd-resolution VR in the linked comment [soylentnews.org]. Sure, it's very large and not strictly necessary, but once the displays are available people will be able to judge it for themselves. Compression can be used for viewers, but the footage should probably be captured raw.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by JoeMerchant on Thursday June 27 2019, @02:26PM (5 children)

      by JoeMerchant (3937) on Thursday June 27 2019, @02:26PM (#860521)

      Maybe it's because my eyes are old, but when I run my 4K displays at 1080p, it's very rare that I can perceive any difference at all - unless I've got my face so close to the screen that I can't see the whole thing.
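
      That matches the angular math, at least under assumed numbers - a 27" 16:9 panel viewed from about 80 cm (both figures are my assumptions):

          import math

          # Assumed setup: 27" 16:9 panel (~59.8 cm wide) viewed from 80 cm.
          # ~60 px/deg is the commonly cited limit of normal visual acuity.
          width_cm, distance_cm = 59.8, 80.0
          fov_deg = 2 * math.degrees(math.atan((width_cm / 2) / distance_cm))

          for label, horiz_px in (("1080p", 1920), ("4K", 3840)):
              print(f"{label}: {horiz_px / fov_deg:.0f} px/deg across a {fov_deg:.0f} deg wide image")

      At that distance 1080p already sits near the ~60 px/deg acuity figure and 4K is well past it, so the difference being hard to spot is roughly what the geometry predicts.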

      --
      🌻🌻 [google.com]
      • (Score: 2) by takyon on Thursday June 27 2019, @02:49PM (4 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday June 27 2019, @02:49PM (#860536) Journal

        That's exactly my proposal for VR.

        You'll notice that my proposed 220 degrees horizontal, 150 degrees vertical field of view is more than what your two eyes can see if you're staring straight ahead. It takes into account peripheral vision from keeping your head locked but rotating your eyes in your sockets.

        It would be kind of like shoving your face right up to your display until you can't even see the sides. And while you may not be able to make out pixels even when doing that (I wouldn't know; I'm working with 768p, where it is extremely obvious), the jump to 16K should ensure that no aliasing or other artifacts are noticeable. The screen-door effect needs to be eliminated too, which could be done by increasing resolution and/or shrinking the gaps between pixels with a better display technology.
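
        Rough pixel-density math for that, treating "16K" as roughly 15360 x 8640 pixels spread over the full 220 x 150 degree field (the exact panel geometry is an assumption):

            # Assumed panel: ~15360 x 8640 px over a 220 x 150 degree field.
            # ~60 px/deg is the commonly cited acuity threshold.
            horiz_px, horiz_fov_deg = 15360, 220
            vert_px, vert_fov_deg = 8640, 150

            print(f"horizontal: {horiz_px / horiz_fov_deg:.0f} px/deg")
            print(f"vertical:   {vert_px / vert_fov_deg:.0f} px/deg")

        That lands right around the ~60 px/deg acuity threshold, which is roughly the point where individual pixels (and the aliasing they cause) should stop being resolvable.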

        On top of that, foveated rendering would help grease the wheels by only rendering a tiny portion of the screen in high resolution at any given millisecond, based on where your eyes are pointing. This applies to gaming and other dynamically rendered graphics; it could also be applied to precaptured or live video to lower the headset's internal bandwidth burden, but it isn't likely to lower the bandwidth needed to stream or livestream the video (you need the whole thing).
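
        As a toy estimate of how much that saves on the render side (the foveal window size and the peripheral downsampling factor below are illustrative assumptions, nothing more):

            # Assumed: a 20 x 20 degree window shaded at full density, the rest of
            # the 220 x 150 degree field at 1/4 linear resolution (1/16 density).
            total_area = 220 * 150
            foveal_area = 20 * 20
            peripheral_density = 1 / 16

            fraction = (foveal_area + (total_area - foveal_area) * peripheral_density) / total_area
            print(f"pixels actually shaded: {fraction:.1%} of full resolution "
                  f"(~{1 / fraction:.0f}x reduction)")

        Big savings for rendering, but as noted, a precaptured stream still has to carry every pixel because you can't know ahead of time where the viewer will look.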

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by JoeMerchant on Thursday June 27 2019, @03:16PM (1 child)

          by JoeMerchant (3937) on Thursday June 27 2019, @03:16PM (#860549)

          The Microsoft VR headsets I've tried suffer from a horribly small field of view - it's definitely more immersive if you can get peripheral vision covered as well, with at least "foveal" resolution anywhere your eyes can point.

          While eye-tracking and foveated rendering can save power, I think early software/systems development would go better if you just hooked the thing up to a sufficient power (and cooling) source and blasted full res across the whole 16K panel. Once you've got that proven, trimming cost, weight, and power can be an enhancement for investors to back on the road to commercialization.
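
          For a sense of what "blast full res" means in raw bandwidth, here's a quick sketch - per-eye resolution, refresh rate, and bit depth are all assumptions, not specs from anywhere:

              # Assumed: 15360 x 8640 per eye, two eyes, 90 Hz, 24 bits/pixel, uncompressed.
              px_per_eye = 15360 * 8640
              bytes_per_sec = px_per_eye * 2 * 90 * 3

              print(f"uncompressed feed: {bytes_per_sec / 1e9:.0f} GB/s "
                    f"({bytes_per_sec * 8 / 1e9:.0f} Gbit/s)")

          That kind of firehose is exactly why a tethered, well-cooled dev rig makes sense before anyone worries about trimming it down.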

          Assuming civilization doesn't fall in the meantime, 16K nanopitch displays are definitely coming, eventually. The group that has a killer app for them already developed and proven should profit nicely.

          --
          🌻🌻 [google.com]
        • (Score: 2) by JoeMerchant on Thursday June 27 2019, @03:31PM (1 child)

          by JoeMerchant (3937) on Thursday June 27 2019, @03:31PM (#860560)

          One other thought - I used to be unimpressed by the idea of direct retinal projection, but as my corneas continue to stiffen and medical science continues to waffle about effective solutions for that, the idea holds more and more appeal - and when you consider that the majority of the world's wealth is controlled by presbyopes...

          --
          🌻🌻 [google.com]
          • (Score: 2) by takyon on Thursday June 27 2019, @03:43PM

            by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday June 27 2019, @03:43PM (#860565) Journal

            My view is that retinal projection will be the technique of choice for augmented reality glasses.

            It would be interesting to see a combination of AR and VR using retinal projection: a flat device, possibly indistinguishable from ordinary glasses (though it should still have two front-facing cameras if possible), where you could manually add an opaque cover to switch from AR to VR mode.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]