Today, we've decided to revisit some of the worst CPUs ever built. To make it onto this list, a CPU needed to be fundamentally broken, rather than merely poorly positioned or slower than expected. The annals of history are already stuffed with mediocre products that didn't quite meet expectations but weren't truly bad.
Note: Plenty of people will bring up the Pentium FDIV bug here, but we didn't include it for a simple reason: despite being an enormous marketing failure for Intel and a huge expense, the actual bug was tiny. It affected virtually no one outside scientific computing, and in technical terms the scale and scope of the problem were never estimated to amount to much. The incident is remembered today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
We also include a few dishonourable mentions. These chips may not be the worst of the worst, but they ran into serious problems or failed to address key market segments. With that, here's our list of the worst CPUs ever made.
- Intel Itanium
- Intel Pentium 4 (Prescott)
- AMD Bulldozer
- Cyrix 6×86
- Cyrix MediaGX
- Texas Instruments TMS9900
Which CPUs make up your list of Worst CPUs Ever Made?
(Score: 2) by isj on Thursday April 23 2020, @04:52PM (2 children)
Please elaborate.
The 8086/8088/80186/80188 had 16-bit segment:offset addressing, and "all" programs assumed that segments overlapped at 16-byte intervals. There was no way out of that without either breaking compatibility or virtualizing memory. The 80286 introduced protected mode, where segment registers were treated as selectors: memory could be moved around, and a program could address a whopping 16MB. I don't think virtualizing the whole thing was feasible at the time (1982) due to complexity.
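The change described above can be sketched in a few lines. This is an illustrative model only: the descriptor table contents and base addresses below are hypothetical, and real 286 descriptors also carry limits and access rights that are omitted here.

```python
# Sketch of 286 protected-mode translation: the segment register holds a
# selector (an index into a descriptor table), not a number to shift.
# The table contents here are hypothetical example data.

descriptor_table = {
    # selector index -> segment base address chosen by the OS
    1: 0x010000,
    2: 0x200000,  # anywhere in the 16 MB (2^24) space
}

def protected_address(selector: int, offset: int) -> int:
    """Linear address = descriptor base + offset (limit checks omitted)."""
    # The low 3 bits of a selector are flag bits; the index is bits 3-15.
    return descriptor_table[selector >> 3] + offset

# Because the base comes from a table, the OS can move segments around --
# but code that did real-mode segment arithmetic breaks, as described above.
assert protected_address(1 << 3, 0x0010) == 0x010010
```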
(Score: 1, Informative) by Anonymous Coward on Thursday April 23 2020, @08:36PM (1 child)
>Please elaborate.
At the time, most code needing more than 64k did address arithmetic assuming a segment register's value was multiplied by 16 and added to the offset. That gave 1MB, minus what IBM reserved, leaving 640k.
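That real-mode arithmetic can be sketched directly; this is a minimal illustration of the translation the comment describes, not code from any particular program:

```python
# Real-mode 8086 address translation: linear = segment * 16 + offset.

def linear_address(segment: int, offset: int) -> int:
    """Compute the 20-bit physical address an 8086 forms in real mode."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MB

# Different segment:offset pairs can name the same byte, which is why
# segments overlap at 16-byte intervals:
assert linear_address(0x1234, 0x0010) == linear_address(0x1235, 0x0000)

# The full 20-bit space is 1 MB; IBM reserved the upper region,
# leaving the famous 640 KB for DOS programs.
assert linear_address(0xFFFF, 0x000F) == 0xFFFFF  # top of the 1 MB space
```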
Intel decided this was evil and that they 'owned' the architectural definition of the segment registers. They made the 286 with a 'beautiful' architecture that existing compiled code could not use. About the only folks happy about this were Motorola, with the 68k.
I remember when Intel came by to try to sell their 286 in the early '80s. As soon as they described the MMU, we had a talk about compatibility, and they acted as if they had never considered it a necessity. They were so proud of their beautiful segment architecture. They had spent extra gates making something that the PC market could not use. There were plenty of examples of MMU solutions in the minicomputer world, and they just blew it. They learned the importance of compatibility and seemed to be more careful with the 386 and even to now.
The PC-AT and clones were about all that used the part. There were Unix variants that used the MMU, but getting back to DOS mode required resetting the chip. (I think the keyboard controller might have been involved.) Really a sad story, driven by architectural arrogance that caused them to ignore how their parts were being used.
Worst ever is a really high bar. Again, I nominate the 286. There are plenty of examples of processor designs that turned out badly, but this one is special because it started out OK with a big market share, and then went bad on purpose and almost lost that share. A really special case.
(Score: 2) by TheRaven on Monday April 27 2020, @09:04AM
sudo mod me up