
SoylentNews is people

posted by martyb on Thursday April 23 2020, @12:33PM   Printer-friendly
from the Sorry-about-that-boss! dept.

Worst CPUs:

Today, we've decided to revisit some of the worst CPUs ever built. To make it on to this list, a CPU needed to be fundamentally broken, as opposed to simply being poorly positioned or slower than expected. The annals of history are already stuffed with mediocre products that didn't quite meet expectations but weren't truly bad.

Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: despite being an enormous marketing failure for Intel and a huge expense, the actual bug was tiny. It impacted virtually no one outside scientific computing, and the technical scale and scope of the problem were never estimated to amount to much. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium micro-architecture.

We also include a few dishonourable mentions. These chips may not be the worst of the worst, but they ran into serious problems or failed to address key market segments. With that, here's our list of the worst CPUs ever made.

  1. Intel Itanium
  2. Intel Pentium 4 (Prescott)
  3. AMD Bulldozer
  4. Cyrix 6×86
  5. Cyrix MediaGX
  6. Texas Instruments TMS9900

Which CPUs make up your list of Worst CPUs Ever Made?


Original Submission

  • (Score: 2) by isj on Thursday April 23 2020, @04:52PM (2 children)

    Please elaborate.

    The 8086/8088/80186/80188 had 16-bit segment:offset addressing, and "all" programs assumed that segments overlapped at 16-byte intervals. There was no way out of that without either breaking compatibility or virtualizing memory. The 80286 introduced protected mode, where segment values were treated as selectors, memory could be moved around, and a program could address a whopping 16MB. I don't think virtualizing the whole thing was feasible at the time (1982) due to complexity.
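    The overlapping-segment arithmetic described above can be sketched in a few lines (a minimal Python illustration; the function name is mine, not anything from the original architecture manuals):

    ```python
    # Real-mode 8086 address translation: each 16-bit segment value names a
    # 16-byte-aligned "paragraph", so adjacent segment values overlap almost
    # entirely and many segment:offset pairs alias the same physical byte.

    def real_mode_physical(segment: int, offset: int) -> int:
        """Physical address = segment * 16 + offset, on a 20-bit (1 MiB) bus."""
        return ((segment << 4) + offset) & 0xFFFFF

    # Two different segment:offset pairs name the same byte:
    assert real_mode_physical(0x1234, 0x0010) == real_mode_physical(0x1235, 0x0000)

    # The scheme tops out at 1 MiB (0x00000-0xFFFFF); the 286's protected
    # mode instead gave each selector a 24-bit base, reaching 16 MiB.
    assert real_mode_physical(0xFFFF, 0x000F) == 0xFFFFF
    ```

    Code that computed addresses by doing this multiply-by-16 itself is exactly the code that broke once segment registers became opaque selectors.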

  • (Score: 1, Informative) by Anonymous Coward on Thursday April 23 2020, @08:36PM (1 child)


    >Please elaborate.

    At the time, most code needing more than 64KB did arithmetic assuming the segment registers were 16x the offset registers. That gave 1MB; minus what IBM reserved, that left 640KB.

    Intel decided that this was evil and that they 'owned' the architectural definition of the segment registers. They made the 286 with a 'beautiful' architecture which existing compiled code could not use. About the only folks happy about this were Motorola with the 68k.

    I remember when Intel came by to try to sell their 286 in the early '80s. As soon as they described the MMU, we had a talk about compatibility, and they acted as if they had never considered it a necessity. They were all proud of their beautiful architecture for segments. They had spent extra gates making something that the PC market could not use. There were plenty of examples of MMU solutions in the minicomputer world, and they just blew it. They learned the importance of compatibility and seemed to be more careful in the 386 and even to now.

    The PC/AT and clones were about all that used the part. There were Unix variants that used the MMU, but getting back to DOS mode required resetting the chip. (I think the keyboard controller might have been involved.) Really a sad story, driven by architectural arrogance causing them to ignore how their parts were being used.

    Worst ever is a really high bar. Again, I nominate the 286. There are plenty of examples of processor designs that turned out bad, but this one is special because it started out ok and had a big market share and then went bad on purpose and almost lost the share. A really special case.

    • (Score: 2) by TheRaven on Monday April 27 2020, @09:04AM

      The 286 was designed for compatibility. You could set up both the GDT and LDT to provide a contiguous 1MiB window into your 16MiB address space (each descriptor entry providing a 64KiB segment that was 16 bytes offset from the previous one). DOS .COM binaries were restricted to a single segment value and so could be relocated anywhere in the 16MiB address space (and protected). It wasn't the compatibility mode that killed the 286, it was largely a combination of two factors. The first was that the only way of returning from protected mode was to do a reset via the keyboard controller (which was *very* slow). The second was that for non-compatible uses the segment table was very clunky. The 386 cleaned this up by introducing VM86 mode, where you got a virtual 8086 machine, distinct from the 32-bit address space model. If anything, this was worse backwards compatibility because the 8086-compatible and 386-enhanced modes were completely separate worlds.
      --
      sudo mod me up
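TheRaven's descriptor-table point can be made concrete with a small sketch (Python, names mine; this ignores the selector's RPL/TI bits and other real 286 details, and simply models each descriptor as a (base, limit) pair):

```python
# A 286-style descriptor table whose entries recreate the real-mode
# overlap: entry i gets base = reloc_base + i * 16 and a 64 KiB limit,
# so a segment value still "means" the window it did on an 8086, but
# the whole image can be relocated anywhere in the 16 MiB (24-bit)
# physical space.

def make_descriptors(reloc_base: int, count: int):
    """Each descriptor is modeled as (base, limit); bases are 24-bit."""
    return [((reloc_base + i * 16) & 0xFFFFFF, 0xFFFF) for i in range(count)]

# Relocate a DOS-era image to physical 0x200000; 0x2000 (8192) is the
# maximum number of descriptors one table can hold on the 286.
table = make_descriptors(0x200000, 0x2000)

# Segment 0x1234 still sits 0x12340 bytes into the image:
assert table[0x1234][0] == 0x200000 + 0x1234 * 16

# The real-mode aliasing is preserved: 0x1234:0x0010 and 0x1235:0x0000
# resolve to the same physical byte.
assert table[0x1234][0] + 0x10 == table[0x1235][0]
```

Under this scheme a .COM binary, which never computes with its segment value, runs relocated and protected; code that did its own segment*16 arithmetic is precisely what could not be saved.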