
posted by FatPhil on Tuesday September 28 2021, @01:24AM
from the chip-crisis?-what-chip-crisis? dept.

Blazing fast PCIe 5.0 SSD prototype hits sequential read speeds of 14,000 MB/s:

Advancements in the storage segment are the unsung heroes in today's world of computing. While many users tend to focus on the speed of their CPU or GPU, or even the refresh rate of their display, increasingly quick solid state drives are in part responsible for the performance improvements of Sony's and Microsoft's next-gen consoles. But while the PlayStation 5 and Xbox Series X/S rely on PCIe 4.0 SSDs, a Japanese memory manufacturer is already finalizing the development of its blazing fast PCIe 5.0 storage solutions.

In a recent presentation, Kioxia revealed just how quick PCIe 5.0 SSDs can truly be. While the PCIe 5.0 interface, at 32 GT/s per lane (roughly 4 GB/s of usable bandwidth, or about 16 GB/s for a typical x4 drive), offers exactly twice the throughput of PCIe 4.0, the company's first prototype has apparently reached sequential read speeds as high as 14,000 MB/s. That is also twice as fast as Kioxia's current top-of-the-line PCIe 4.0 drive.
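
As a sanity check on those figures, here is some back-of-the-envelope arithmetic in Python (the x4 link width is an assumption; the article does not state how many lanes the prototype uses):

    # Rough check of the claimed 14,000 MB/s against PCIe 5.0 link bandwidth.
    # Assumes an x4 link, the usual width for NVMe SSDs (not stated in the article).

    raw_rate_gt_per_s = 32       # PCIe 5.0 signaling rate per lane, GT/s
    encoding = 128 / 130         # 128b/130b line encoding used since PCIe 3.0
    lanes = 4                    # assumed x4 NVMe link

    gb_per_lane = raw_rate_gt_per_s * encoding / 8   # ~3.94 GB/s per lane
    link_gb_per_s = gb_per_lane * lanes              # ~15.75 GB/s for x4

    print(f"theoretical x4 link: {link_gb_per_s:.2f} GB/s")
    print(f"the 14 GB/s prototype uses ~{14 / link_gb_per_s:.0%} of that")

So the prototype would be saturating close to 90% of an x4 PCIe 5.0 link, which is consistent with the "twice as fast as PCIe 4.0" framing.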

Even though these read speeds certainly seem impressive, the write speeds of Kioxia's PCIe 5.0 SSD are similarly spectacular. The official benchmark says the drive can reach sequential write speeds of 7,000 MB/s, a 67% improvement over its predecessor. Overall, these speeds seem to be absolute overkill for most use cases, which is why these drives are intended for professional server settings. Nevertheless, the rapid advancements in storage speeds certainly deserve more attention than the often incremental upgrades in the CPU and GPU sector.

Of course, there's more to storage than speed; there's reliability, for example. Would any gamers want to prove the "professional server setting" assumption wrong?


Original Submission

 
  • (Score: 2) by bzipitidoo on Tuesday September 28 2021, @03:33AM (4 children)

    by bzipitidoo (4388) on Tuesday September 28 2021, @03:33AM (#1182095) Journal

    I thought SSDs made read order almost entirely irrelevant to read speed. That's why they are so much faster than HDDs: no seek time.

  • (Score: 3, Informative) by krishnoid on Tuesday September 28 2021, @04:48AM (1 child)

    by krishnoid (1156) on Tuesday September 28 2021, @04:48AM (#1182102)

    They do, but as a result, tagged command queueing [wikipedia.org] has somehow stayed valuable:

    • With hard disk drives, which are really slow, having a queue of requests means the block requests can be satisfied in one or two passes, by reordering them to follow the path of a single drive head sweep while the platter rotates (see the sketch after this list)
    • With solid state drives, which are really fast, having a queue of requests means multiple block requests can be satisfied immediately, provided the memory/controller is fast enough, since it's all random access anyway
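
    To make the HDD bullet concrete, here is a toy sketch in Python; the block numbers and the distance-based seek cost are made up purely for illustration:

        # Toy illustration of why a request queue helps a spinning disk:
        # serving queued block requests in one sweep of the head (elevator
        # order) costs far less total head movement than arrival order.

        def total_seek_distance(requests, start=0):
            """Sum of head movements when requests are served in order."""
            pos, total = start, 0
            for block in requests:
                total += abs(block - pos)
                pos = block
            return total

        arrival_order = [90, 10, 70, 30, 80, 20]
        sweep_order = sorted(arrival_order)   # one pass across the platter

        print(total_seek_distance(arrival_order))  # 380: head thrashing
        print(total_seek_distance(sweep_order))    # 90: a single sweep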

    I think there's some fundamental wisdom about the "queueing" concept here, but I don't know exactly how you'd word it.

    • (Score: 1, Interesting) by Anonymous Coward on Wednesday September 29 2021, @07:16AM

      by Anonymous Coward on Wednesday September 29 2021, @07:16AM (#1182652)

      Almost all disks now use NCQ, not TCQ. NCQ is important because SSDs are so fast that, instead of the host waiting on the drive, the drive often has to wait for the host. That leads directly into the major improvement SSDs gain from NVMe: multiple queues, each much longer than anything HDDs had. The drive doesn't have to sit and wait for the host as much, and more operations can run concurrently and in parallel, because requests can be laid out to match the internal architecture of both the SSD and the host system it sits in.
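
      A rough sketch of the multiple-queue idea, using Python threads as stand-ins; real NVMe queue pairs are rings in host memory drained by the controller, so this is only a cartoon of the concurrency:

          # Toy model of NVMe-style multiple submission queues: each "core"
          # owns its own queue, so commands are handed over without
          # cross-core locking, and the device drains all queues concurrently.
          import queue
          import threading

          NUM_QUEUES = 4       # drives often expose one queue pair per CPU core
          QUEUE_DEPTH = 1024   # NVMe allows up to 65,536 entries per queue

          sqs = [queue.Queue(maxsize=QUEUE_DEPTH) for _ in range(NUM_QUEUES)]

          def device_worker(qid):
              """Stand-in for the SSD controller draining one queue."""
              while True:
                  cmd = sqs[qid].get()
                  if cmd is None:   # shutdown sentinel
                      break
                  # flash lookup + DMA would happen here, in parallel with
                  # the other queues, since NAND is random access anyway

          workers = [threading.Thread(target=device_worker, args=(i,))
                     for i in range(NUM_QUEUES)]
          for w in workers:
              w.start()

          # Each submitting "core" pushes to its own queue; no shared lock.
          for lba in range(10_000):
              sqs[lba % NUM_QUEUES].put(("read", lba))

          for sq in sqs:
              sq.put(None)
          for w in workers:
              w.join()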

  • (Score: 2) by fraxinus-tree on Wednesday September 29 2021, @09:44AM (1 child)

    by fraxinus-tree (5590) on Wednesday September 29 2021, @09:44AM (#1182684)

    Sorry to say it, but the popular substandard wear-leveling layers have made the sequential/random distinction relevant again. And what is worse, the write order matters as well (when reading). It is high time to trash the block-device concept for good and expose the NAND directly to a reasonable filesystem driver. What we do now is patch the wrong tools for the task (the block layer and the block-oriented fs layer) with things like "trim" and "nvme".
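
    To see why write order matters when reading back, here is a toy page-mapped FTL sketch; the geometry and the append-only allocation policy are invented for illustration:

        # Toy page-mapped FTL: logical page -> physical (block, page).
        # Writes land wherever the write pointer happens to be, so the order
        # you WROTE data decides where it physically lives; a logically
        # sequential read of randomly-written data is physically scattered.
        import random

        PAGES_PER_BLOCK = 4

        class ToyFTL:
            def __init__(self):
                self.mapping = {}    # logical page -> (flash block, page)
                self.next_phys = 0   # append-only write pointer

            def write(self, logical_page):
                self.mapping[logical_page] = divmod(self.next_phys,
                                                    PAGES_PER_BLOCK)
                self.next_phys += 1

        ftl = ToyFTL()
        pages = list(range(8))
        random.shuffle(pages)        # random write order
        for lp in pages:
            ftl.write(lp)

        # A "sequential" read of pages 0..7 now hops between flash blocks:
        for lp in range(8):
            print(f"logical {lp} -> flash {ftl.mapping[lp]}")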

    • (Score: 1, Interesting) by Anonymous Coward on Thursday September 30 2021, @10:08PM

      by Anonymous Coward on Thursday September 30 2021, @10:08PM (#1183211)

      Sadly, it is a bit more complicated than that. Back when SSDs came out, they did expose the flash directly, without an FTL. But that caused problems: drives would lie, different types of drives required different treatment, and many admins treated them like HDDs anyway. The industry overcorrected with overcomplex FTLs that ended up hiding too much. Now I think they are finding a better middle ground, as ZNS and its kernel-side zoned filesystems become more common, along with the understanding that the device and the kernel can cooperate to their mutual benefit without stepping on each other's feet.
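
      For anyone unfamiliar with ZNS, a minimal sketch of the zone model it exposes; the zone size and method names here are invented, the real interface being the NVMe Zoned Namespace command set:

          # Toy model of a zoned device: zones are written sequentially at a
          # write pointer and reclaimed only by whole-zone reset, so host
          # and device share one honest view of the flash layout instead of
          # hiding it behind an opaque FTL.

          class Zone:
              def __init__(self, size_blocks=16):
                  self.size = size_blocks
                  self.write_pointer = 0   # next writable block in this zone

              def append(self, nblocks):
                  """Host may only write at the write pointer, never mid-zone."""
                  if self.write_pointer + nblocks > self.size:
                      raise IOError("zone full: open or reset another zone")
                  start = self.write_pointer
                  self.write_pointer += nblocks
                  return start             # device reports where data landed

              def reset(self):
                  """Whole-zone erase, the only way to reclaim space."""
                  self.write_pointer = 0

          zone = Zone()
          print(zone.append(4))   # 0
          print(zone.append(4))   # 4
          zone.reset()            # explicit, host-visible erase
          print(zone.append(4))   # 0 again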