Silicon Motion Launches PCIe 4.0 NVMe SSD Controllers
Silicon Motion has announced the official launch of their first generation of PCIe 4.0-capable NVMe SSD controllers. These controllers have been on the roadmap for quite a while and have been previewed at trade shows, but the first models are now shipping. The high-end SM2264 and mainstream SM2267/SM2267XT controllers will enable consumer SSDs that move beyond the performance limits of the PCIe 3.0 x4 interface that has been the standard for almost all previous consumer NVMe SSDs.
The high-end SM2264 is the successor to Silicon Motion's SM2262(EN) controllers, and it brings the most significant changes, adding up to roughly a doubling of performance. The SM2264 still uses 8 NAND channels, but now supports double the per-channel speed: up to 1600 MT/s. The controller includes four ARM Cortex-R8 cores, compared to two on SMI's previous client/consumer NVMe controllers. As with most SSD controllers aiming for the high-end PCIe 4.0 product segment, the SM2264 is fabbed on a smaller node: TSMC's 12nm FinFET process, which allows for substantially better power efficiency than the 28nm planar process used by the preceding generation of SSD controllers. The SM2264 also supports some enterprise-oriented features like SR-IOV virtualization, though we probably won't see that enabled on consumer SSD products, and it includes the latest generation of Silicon Motion's NANDXtend ECC system, which moves from a 2KB to a 4KB codeword size for LDPC error correction.
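For a rough sense of why 1600 MT/s channels matter, here is a back-of-the-envelope comparison (my own arithmetic, not from the announcement) of aggregate NAND interface bandwidth against the PCIe 3.0 x4 and 4.0 x4 link limits. It assumes a 4-lane link with 128b/130b encoding and an 8-bit bus per NAND channel; real drives lose further bandwidth to protocol and controller overhead.

```python
# Back-of-the-envelope bandwidth comparison for the figures quoted above.
# Assumptions (illustrative, not from the article): 4 PCIe lanes,
# 128b/130b encoding, and an 8-bit bus per NAND channel (1 byte/transfer).

def pcie_gbps(gt_per_s, lanes=4, encoding=128/130):
    """Raw PCIe link bandwidth in GB/s (GT/s are bits per lane)."""
    return gt_per_s * lanes * encoding / 8

def nand_gbps(mt_per_s, channels=8, bus_bytes=1):
    """Aggregate NAND interface bandwidth in GB/s."""
    return mt_per_s * channels * bus_bytes / 1000

pcie3 = pcie_gbps(8)     # PCIe 3.0: 8 GT/s per lane
pcie4 = pcie_gbps(16)    # PCIe 4.0: 16 GT/s per lane
nand  = nand_gbps(1600)  # SM2264: 8 channels at 1600 MT/s

print(f"PCIe 3.0 x4: {pcie3:.2f} GB/s")           # ~3.94
print(f"PCIe 4.0 x4: {pcie4:.2f} GB/s")           # ~7.88
print(f"NAND (8 x 1600 MT/s): {nand:.1f} GB/s")   # 12.8
```

The takeaway: eight 1600 MT/s channels give the controller more raw NAND bandwidth than even a PCIe 4.0 x4 link can carry, so the host interface, not the flash, remains the ceiling.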
Also at Guru3D.
Related: Silicon Motion Controller to Enable High Speed, Low Cost Portable USB SSDs
(Score: 3, Informative) by takyon on Saturday October 24 2020, @12:39AM
Having large amounts of L4 cache near the CPU, or even microns away from the CPU, is going to be one of the major ways to improve performance going forward. It will be unavoidable. But you could still have a large pool of DRAM in DIMMs further away from the CPU. DRAM on die should not be shortening the life of the CPU to a noticeable extent.
Apple is already pretty much there. They are expanding their ARM SoCs towards their desktop/workstation product lines. They may try to pair discrete GPUs with some of their ARM SoCs for a while, beats me.
Chiplets on interposers are countering yield problems. Small chiplets get great yields and reduce costs, and newer interposer/interconnect technologies minimize the performance impact. AMD can make a tiny 8-core chiplet (which will shrink further or gain cores at the "5nm" or "3nm" node), which is a lot for most users in the first place, and you generally won't notice the added latency of a second 8-core chiplet. Nvidia, Intel, and AMD are all moving towards some kind of chiplet/multi-chip module design for GPUs.
The "slightly spoiled fruit" of binning/disabled cores is of no consequence to most people. If you get a product with 1-2 6-core chiplets with 2 cores disabled on each, it's going to work just fine.
Non-removable batteries are a nuisance. But the device can continue to have a life as long as you can commit to keeping it plugged in. For example, mount an iPad with a dead battery on a wall near an outlet, always plugged in. Or put it on a kickstand somewhere and use it like an Echo Show.