In 2011 AMD released the Bulldozer architecture, with a somewhat untraditional implementation of "multicore" technology. Now, 4 years later, the company is being sued for false advertising, fraud and other "criminal activities". From TFA:
In claiming that its new Bulldozer CPU had "8-cores," which means it can perform eight calculations simultaneously, AMD allegedly tricked consumers into buying its Bulldozer processors by overstating the number of cores contained in the chips. Dickey alleges the Bulldozer chips functionally have only four cores—not eight, as advertised.
(Score: 3, Interesting) by edIII on Saturday November 07 2015, @03:59AM
Wow. Is that an interesting idea or what? Plenty of stuff is licensed that way. If he can prove in a court of law that it only has 4 cores, then anybody with per-core licenses just got double what they needed, or has a slam-dunk lawsuit against AMD for the difference.
It seems logical to me that you would only pay for each literal core, not the extra virtual core from hyperthreading (or similar). If AMD promised a literal 8 cores, but then did some funny 'unusual implementation' where there weren't literally 8 processing cores, then this gentleman has them by the short and curlies, so to speak.
Ironic that in an argument that is sure to be about technicalities, we have nothing technical yet to argue over. I'd love to know what they mean specifically. From Wikipedia [wikipedia.org] it doesn't look like 8 cores to me.
If I'm reading the part in bold correctly, it does indeed sound like there aren't really 8 cores.
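Incidentally, the physical-versus-logical distinction the licensing argument turns on is one the OS already tries to report. A minimal sketch of reading it on Linux (the /proc/cpuinfo excerpt below is a hypothetical example; how the kernel actually counted Bulldozer modules has varied by kernel version):

```python
# Hypothetical /proc/cpuinfo excerpt: 8 hardware threads ("siblings")
# sharing 4 physical cores ("cpu cores"). Real values vary by kernel.
SAMPLE_CPUINFO = """\
processor : 0
siblings : 8
cpu cores : 4

processor : 1
siblings : 8
cpu cores : 4
"""

def parse_topology(cpuinfo):
    """Return the logical (siblings) and physical (cpu cores) counts."""
    fields = {}
    for line in cpuinfo.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return {
        "logical": int(fields["siblings"]),
        "physical": int(fields["cpu cores"]),
    }

print(parse_topology(SAMPLE_CPUINFO))  # {'logical': 8, 'physical': 4}
```

On a real box you'd read the text from /proc/cpuinfo itself; the point is just that "cores" already has two different answers depending on which field you believe.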
Technically, lunchtime is at any moment. It's just a wave function.
(Score: 2) by frojack on Saturday November 07 2015, @04:36AM
In terms of hardware complexity and functionality, this module is equal to a dual-core processor in its integer power, and to a single-core processor in its floating-point power: for each two integer cores, there is one floating-point core. The floating-point cores are similar to a single core processor that has the SMT ability, which can create a dual-thread processor but with the power of one (each thread shares the resources of the module with the other thread) in terms of floating point performance.
So the upshot of that is: if these processors had not been used for gaming and complex numerical calculation, and had been reserved for the server market, there's a good chance no one would ever have noticed this floating-point limitation.
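To put numbers on that two-integer-cores-per-FP-core description (a trivial sketch; the FX-8150 is just one Bulldozer part used as an example):

```python
# Per the module description quoted above: each Bulldozer module
# pairs 2 integer cores with 1 shared floating-point unit.
INT_CORES_PER_MODULE = 2
FP_UNITS_PER_MODULE = 1

advertised_cores = 8  # e.g. an FX-8150 as marketed
modules = advertised_cores // INT_CORES_PER_MODULE
fp_units = modules * FP_UNITS_PER_MODULE
print(modules, fp_units)  # 4 modules, 4 floating-point units
```

So an "8-core" chip under that description has 8 integer cores but only 4 FP units, which is the whole dispute in miniature.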
Most of the work done in server situations is integer math (well, most of it is just byte-slinging hither and yon). Encryption may be some of the most taxing work in the server market.
But I have no idea how those processors were marketed.
No, you are mistaken. I've always had this sig.
(Score: 2) by Pino P on Saturday November 07 2015, @03:06PM
Most of the work done in server situations is integer math, (well, most of it is just byte slinging hither and yon).
Unless the server is, say, transcoding uploaded video to fifteen different formats for streaming to viewers. But perhaps a lot of that can be written in OpenCL and run on the integrated GPGPU. Does a Xeon even have a GPGPU?
(Score: 2) by frojack on Sunday November 08 2015, @04:49AM
Yes, but most streaming stuff isn't transcoded from one format to the other every time someone requests a stream.
You do it once and save the file, then chuck whatever format they ask for down the socket as fast as the requester can consume it.
Admittedly, you still have a transcoding task just to arrive at a copy in each format. And maybe these processors handle that just fine, and maybe they don't, I dunno.
No, you are mistaken. I've always had this sig.
(Score: 2) by Pino P on Sunday November 08 2015, @06:48PM
most streaming stuff isn't transcoded from one format to the other every time someone requests a stream.
If someone is sending a live stream that has few simultaneous viewers, the server might end up serving the transcoded stream at each detail level to one viewer or at most a handful. Even apart from live streaming, I'm told some adaptive streaming platforms do a real-time transcode for a few seconds rather than waiting for the next keyframe to switch detail levels when the Internet connection's throughput changes or when the user fast-forwards or rewinds.
Admittedly, you still have a transcoding task just to arrive at a copy for each format.
Even apart from live streaming, uploaders on big video sharing sites such as Dailymotion and YouTube initiate so many transcoding tasks that I shudder to think of how many must be running at once.
(Score: 5, Informative) by Hairyfeet on Sunday November 08 2015, @01:21AM
You are reading it wrong because you are ignoring this part, bold for highlight: "Two symmetrical 128-bit FMAC (fused multiply–add capability) floating-point pipelines per module that can be unified into one large 256-bit-wide unit if one of the integer cores dispatches AVX instruction and two symmetrical x87/MMX/SSE capable FPPs for backward compatibility with SSE2 non-optimized software."
So each core still has an FPU; it simply has a weaker 128-bit FPU that can be combined into a single 256-bit FPU if AVX instructions are required. Their engineers have spoken at length about why they did this: they believed multicore processing was the future (which it is) and would be upon us as quickly as 64-bit computing was (which it wasn't), and so bet on having more cores versus higher performance per core. If you are like me and run plenty of multicore-aware tasks like transcoding or effects layering? This kicks ass, because having high single-core performance would be slower than having multiple cores working on the task, while someone who used nothing but single-process programs would be better off going for higher per-core performance over more cores.
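The "more cores beats faster cores" case only holds when the work actually splits across cores. A minimal sketch of that pattern, assuming an embarrassingly parallel workload (the function names and toy workload here are illustrative, not AMD's):

```python
from multiprocessing import Pool

def crunch(chunk):
    """Stand-in for per-chunk work, e.g. encoding one video segment."""
    return sum(x * x for x in chunk)

def split(data, n):
    """Divide data into n roughly equal chunks."""
    size = max(1, -(-len(data) // n))  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def crunch_parallel(data, workers=4):
    """Fan the chunks out across worker processes and combine results."""
    with Pool(processes=workers) as pool:
        return sum(pool.map(crunch, split(data, workers)))

if __name__ == "__main__":
    data = list(range(10_000))
    # Same answer either way; the parallel version just uses more cores.
    assert crunch_parallel(data) == crunch(data)
```

For work shaped like this, throughput scales with how many chunks are in flight at once, not with single-thread speed, which is exactly the bet described above.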
So it isn't a "half core", it is simply a different approach to the same task.
ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
(Score: 3, Informative) by edIII on Sunday November 08 2015, @02:05AM
Thanks for the explanation
Technically, lunchtime is at any moment. It's just a wave function.