Arthur T Knackerbracket has found the following story:
Cray has revealed that its Q2 profits have quite literally gone up in smoke.
The company this week announced second-quarter revenue of US$100.2m, down from $186.2m in the corresponding quarter of 2015. That dip turned last year's $5.8m profit into a $13.1m loss.
Things aren't going to be much easier in Q3, due to “a very recent electrical smoke event caused by a failed manufacturing facility power component that will delay the Company's ability to deliver on some customer contracts in 2016, including an impact on anticipated third quarter revenue.”
On the company's earnings call CEO Peter Ungaro said the smoke “damaged five relatively smaller customer systems that were being tested and prep[ped] for shipping, and for which we expected to achieve acceptances before the end of the year including some in the third quarter.”
“Some of these systems were key pieces of larger customer solutions,” he added. “And as a result, their impact to our overall revenue outlook was more significant than just the value of the revenue t[ied] to those systems themselves. This event just happened and we're still evaluating the full extent of the impact, as well as our recovery plan. But I want to note that the majority of the loss is expected to be covered by insurance.”
(Score: 3, Informative) by stormwyrm on Saturday August 06 2016, @04:28AM
For some classes of problems nothing but a bespoke supercomputer will do. If you're doing something like, say, trying to find solutions to a lattice quantum chromodynamics problem, or a massively high-resolution finite element method problem, or trying to use the general number field sieve to factor a 1024-bit RSA key [soylentnews.org] (cough, cough, NSA, cough), a rack of COTS systems simply does not have enough bandwidth and has too much latency to make the problem feasible. For instance, the matrix for the linear reduction of a 1024-bit number using the general number field sieve requires a few terabytes of very low-latency memory. Where can you find a COTS system that has anything like that? Sure, if you had a thousand nodes with several gigabytes each, they might in aggregate have a few terabytes of RAM, but the latency of accessing it, even over 10GigE, would still be too high to make the problem practical. Supercomputing isn't just about TFLOPS: it is every bit as much about very low-latency I/O.
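To see why latency, not aggregate capacity, is the killer here, a quick back-of-the-envelope sketch helps. The latency figures below are rough, commonly cited ballpark assumptions (around 100 ns for a random local-DRAM access, tens of microseconds for a kernel-stack round trip over 10GigE), and the access count is purely illustrative, not a real GNFS workload:

```python
# Rough comparison: random accesses to a huge matrix held in local
# low-latency memory vs. scattered across commodity cluster nodes.
# All numbers are illustrative assumptions, not benchmarks.

DRAM_LATENCY_S = 100e-9   # ~100 ns per random access to local DRAM (assumed)
NET_LATENCY_S = 50e-6     # ~50 us round trip over 10GigE (assumed)

ACCESSES = 1e12           # hypothetical number of random accesses (illustrative)

local_time_s = ACCESSES * DRAM_LATENCY_S
remote_time_s = ACCESSES * NET_LATENCY_S

print(f"local DRAM:  {local_time_s / 3600:.1f} hours")        # ~27.8 hours
print(f"10GigE RPC:  {remote_time_s / 86400:.1f} days")       # ~578.7 days
print(f"slowdown:    {NET_LATENCY_S / DRAM_LATENCY_S:.0f}x")  # 500x
```

With these assumed numbers, the same access pattern goes from about a day to well over a year: a ~500x slowdown from latency alone, before counting bandwidth contention, which is why pooling RAM across a thousand commodity nodes doesn't substitute for a machine with terabytes of genuinely low-latency memory.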
Numquam ponenda est pluralitas sine necessitate.