Qualcomm announces the Snapdragon 8cx, an 'extreme' processor for Windows laptops
The "X" stands for "extreme." That's what Qualcomm's marketing department wants you to think about the new eight-core Snapdragon 8cx.
It's a brand-new processor for always-connected Windows laptops and 2-in-1 convertible PCs, and by Qualcomm's standards it is a little extreme. Physically, it's the largest processor the company has ever made, with the most powerful CPU and GPU Qualcomm has yet devised. Qualcomm says it will be the first 7nm chip for a PC platform, beating a struggling Intel to the punch, and the biggest performance leap ever for a Snapdragon. The company promises "amazing battery life" and up to 2Gbps cellular connectivity.
The TDP is 7 watts, and the chip supports up to 16GB of LPDDR4x RAM.
Previously, a "Snapdragon 1000" for laptops was said to be in the works, but with a 12-watt TDP.
See also: Firefox running on a Qualcomm 8cx-powered PC feels surprisingly decent
Previously: First ARM Snapdragon-Based Windows 10 S Systems Announced
Snapdragon 845 Announced
ARM Aims to Match Intel 15-Watt Laptop CPU Performance
Intel Reportedly "Petitioned Microsoft Heavily" to Use x86 Instead of ARM Chips in Surface Go
(Score: 4, Interesting) by pTamok on Saturday December 08 2018, @11:54AM (4 children)
Nothing in the publicity to say whether it supports ECC RAM.
Now that filesystems that checksum both data and metadata are generally available (ZFS is the best known), protecting the data while it is in memory looks like a good idea too if you want to minimise bit errors.
Some say that running servers without ECC is a bad idea. The same really ought to apply to personal computing devices.
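End-to-end integrity needs both pieces: ZFS's checksums catch on-disk corruption, while ECC catches in-memory corruption. The checksums are exercised by a periodic scrub; a minimal sketch, assuming a pool named `tank` (the pool name is illustrative):

```shell
# Walk every block in the pool and verify it against its checksum;
# on redundant vdevs (mirror/raidz), ZFS repairs what it finds.
zpool scrub tank

# Check scrub progress and per-device checksum-error counts.
zpool status -v tank
```

Note that the scrub only validates what reaches the disk; a bit flip in non-ECC RAM before the checksum is computed is invisible to it.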
(Score: 0) by Anonymous Coward on Saturday December 08 2018, @01:40PM
ZFS is way over-hyped and largely a waste of time and resources.
(Score: 1, Informative) by Anonymous Coward on Saturday December 08 2018, @09:22PM
Had this exact problem. The system had non-ECC memory and was running Linux, and thanks to Linux's file caching the whole file was in memory. The person checksummed the in-memory file twice, thinking it was actually being read off the disk (because until recently, who had enough memory to hold a whole large file?), and it checked out. After a reboot they decided to recheck it... different value. The initial two checksums had been wrong thanks to a bit error in the cached read-only copy of the file. If that copy had been written back, the error could have corrupted the on-disk copy of the file.
The point being: as unlikely as it seems, the more memory you have, the more chances for a random bit flip to damage something important, even if you don't notice it now.
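The pitfall above is easy to reproduce in principle: checksumming the same path twice proves nothing if the whole file fits in the page cache, because both reads can be served from the same (possibly bit-flipped) in-memory copy. A minimal sketch (the helper name is mine, not from the comment):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Running sha256_of(path) twice in a row does NOT verify the on-disk
# data: both passes may read the same cached copy. To force the next
# read to actually hit the disk on Linux, drop the page cache first:
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
# (or reboot, as the commenter did), then checksum again.
```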
(Score: 2) by darkfeline on Tuesday December 11 2018, @09:48AM (1 child)
Maybe; it's a question of scale. A datacenter running a million servers, each with terabytes of RAM, may encounter bit errors daily. An average consumer laptop, not so much.
Even if a bit flip happens, it can't do that much damage. In rough order of likelihood:
1. Nothing happens.
2. A video file is slightly corrupted (unnoticeable).
3. Some process crashes (unnoticeable; the process gets restarted; worst case, the OS crashes).
4. Some user document gets corrupted (restore from backup; many people use Dropbox et al. now).
5. Some important file gets corrupted, the file is still well formed, and the error goes unnoticed and causes problems for the user later.
By 5, we're looking at getting struck by lightning levels of improbability.
Join the SDF Public Access UNIX System today!
(Score: 1) by pTamok on Tuesday December 11 2018, @11:06AM
https://wiki.lspace.org/mediawiki/Million-to-one_chance [lspace.org]
But also:
https://www.zdnet.com/article/dram-error-rates-nightmare-on-dimm-street/ [zdnet.com]
Which links to:
DRAM Errors in the Wild: A Large-Scale Field Study (SIGMETRICS/Performance’09, June 15–19, 2009, Seattle, WA, USA.) [toronto.edu]
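The study's headline numbers make the "consumer laptop, not so much" claim worth checking. A back-of-envelope sketch, assuming the paper's reported correctable-error rate of roughly 25,000–70,000 FIT per Mbit (FIT = errors per 10^9 device-hours); the laptop size is an illustrative assumption, and real errors cluster heavily on a minority of DIMMs rather than spreading evenly:

```python
# Lower end of the range reported in "DRAM Errors in the Wild".
FIT_PER_MBIT = 25_000

RAM_GB = 16                      # e.g. a maxed-out 8cx machine
mbits = RAM_GB * 8 * 1024        # GB -> megabits
errors_per_hour = mbits * FIT_PER_MBIT / 1e9

print(f"{errors_per_hour:.2f} correctable errors/hour")
print(f"~{errors_per_hour * 24:.0f} per day")
```

Even at the bottom of the range this works out to several correctable errors per hour for a single 16GB machine, which is why the study (and the ZDNet write-up) argued the rates were far higher than conventional wisdom assumed.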