Today, we've decided to revisit some of the worst CPUs ever built. To make it on to this list, a CPU needed to be fundamentally broken, as opposed to simply being poorly positioned or slower than expected. The annals of history are already stuffed with mediocre products that didn't quite meet expectations but weren't truly bad.
Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: despite being an enormous marketing failure for Intel and a huge expense, the actual bug was tiny. It impacted virtually no one outside scientific computing, and in technical terms the scale of the problem was always negligible. The incident is remembered today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
We also include a few dishonourable mentions. These chips may not be the worst of the worst, but they ran into serious problems or failed to address key market segments. With that, here's our list of the worst CPUs ever made.
- Intel Itanium
- Intel Pentium 4 (Prescott)
- AMD Bulldozer
- Cyrix 6x86
- Cyrix MediaGX
- Texas Instruments TMS9900
Which CPUs make up your list of Worst CPUs Ever Made?
(Score: 5, Interesting) by isj on Thursday April 23 2020, @04:04PM (9 children)
They all taught us some lessons, and some of them drove compiler development.
The Itanium had some interesting features:
Intel underestimated the complexity of implementing such a beast, and also underestimated the time required to make compilers produce good code. But the idea of predicated bundles of instructions is pretty neat. It also drove the invention of a better calling convention (ever heard of the Itanuim ABI?). If the shift to 64-bit had been done more gradually then we might have been stuck with a sub-optimal calling convention now.
(Score: 5, Interesting) by KilroySmith on Thursday April 23 2020, @04:46PM (3 children)
The concept of the Itanium was great; still is. I never used one, so I don't know about the quality of the hardware implementation.
The Itanium's downfall was the piss-poor state of software engineering. Nobody could write a compiler that demonstrated the Itanium running faster than a contemporary x86, so potential customers couldn't be bothered with the new architecture.
In a modern x86, a significant amount of die space is devoted to extracting parallelism at run time: out-of-order scheduling, register renaming, and superscalar dispatch. The intention of the Itanium was to solve those problems at compile time, freeing up silicon area for other functions: perhaps more cores, or more cache, or more features (AVX-512, anyone?). It seems like a simple problem, since the hardware designers do a reasonable job of it in current x86 architectures, but for "reasons" software was never able to deliver the same level of parallelism.
(Score: 2) by isj on Thursday April 23 2020, @05:03PM
I've used Itaniums at my work. They performed fine for our workloads.
Yes, modern x86 CPUs use a lot of die space untangling the mess of register re-use and OoO execution. The "belt" architecture from Mill Computing tries to avoid that - it is an interesting architecture. I hope they succeed so we have more fun architectures to play with instead of micro-improvements of x86.
(Score: 4, Insightful) by sjames on Thursday April 23 2020, @08:02PM
Part of that was Intel not wanting to share enough information for anyone but Intel to make a decent compiler. I say only part, because Intel wasn't able to make its own compiler produce fast code for Itanic either.
Adding insult to injury, Itanic was eye-wateringly expensive (about $10,000 each IIRC) but didn't have the performance to back it up.
I wouldn't blame it all on the software guys. Unlike the hardware, the compiler didn't have the advantage of seeing which branches had actually been taken by the real code running on real data. To get that information, the compiler would need an emulator and a representative input dataset.
(Score: 2) by epitaxial on Friday April 24 2020, @02:05AM
I always wanted an Itanium box but they are still expensive on eBay. The cheapest would be the HP rx2660 or similar. Run the latest version of OpenVMS!
(Score: 2) by Bot on Thursday April 23 2020, @07:44PM (2 children)
>the Itanuim ABI?
>Itanuim
I guess that such an ABI has problems with maintaining endianness :D
Account abandoned.
(Score: 3, Interesting) by isj on Thursday April 23 2020, @07:47PM (1 child)
Close enough.
BTW: the Itanium was bi-endian: big-endian when running HP-UX, little-endian when running Linux.
(Score: 2) by Bot on Friday April 24 2020, @03:52PM
Yeh I know one motherboard who went to bed with a Bi curious. Horrible experience.
Account abandoned.
(Score: 3, Informative) by TheRaven on Monday April 27 2020, @08:57AM (1 child)
When people who aren't historians talk about the Itanium ABI, they typically mean the Itanium C++ ABI. This had absolutely nothing to do with calling conventions. On Itanium, you could not implement setjmp and longjmp the way that most architectures do it. As a result, HP defined a standard for stack unwinding using DWARF unwind tables and a layered set of APIs that provided low-level abstractions and language-specific parts. This was essential for C++ exceptions to work on Itanium. As a side effect, they also specified a load of other bits of C++ (vtable layouts, class layouts) and an interface for all of the dynamic behaviours. This specification has outlasted Itanium and remains the de-facto standard ABI for C++ everywhere except Windows (and AArch32, which has a few [documented] tweaks to the Itanium ABI). There are three widely used implementations of the run-time library (libsupc++, libcxxrt, libc++abi) and they can all support multiple compilers and standard-library implementations as a result of this clean layering in the design.
Most of this has absolutely nothing to do with Itanium though; it was just the first time anyone had properly specified a C++ ABI, and so it was the one that everyone adopted. Until that point, every C++ compiler invented its own ABI and didn't document it publicly, so interoperability was very hard. This ABI was documented and GCC supported it, so any other compiler that implemented it could avoid the effort of designing its own and could guarantee C++ interop with at least one other compiler. Once a few compilers supported it (GCC, XLC, ICC, and so on) the incentives were strongly aligned with supporting the standard.
sudo mod me up
(Score: 2) by isj on Monday April 27 2020, @04:19PM
I agree that it could have been a CPU other than Itanium that caused a common calling convention (plus other stuff) to be specified. The SPARC CPU strangely didn't drive it, even though it has register windows, so longjmp/setjmp would have had to be different there too.
Just so people know the mess of calling conventions on x86, here is a list of calling conventions I have seen on x86