
posted by martyb on Thursday April 23 2020, @12:33PM   Printer-friendly
from the Sorry-about-that-boss! dept.

Worst CPUs:

Today, we've decided to revisit some of the worst CPUs ever built. To make it on to this list, a CPU needed to be fundamentally broken, as opposed to simply being poorly positioned or slower than expected. The annals of history are already stuffed with mediocre products that didn't quite meet expectations but weren't truly bad.

Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a huge expense, the actual bug was tiny. It impacted no one who wasn't already doing scientific computing and the scale and scope of the problem in technical terms was never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium micro-architecture.

We also include a few dishonourable mentions. These chips may not be the worst of the worst, but they ran into serious problems or failed to address key market segments. With that, here's our list of the worst CPUs ever made.

  1. Intel Itanium
  2. Intel Pentium 4 (Prescott)
  3. AMD Bulldozer
  4. Cyrix 6×86
  5. Cyrix MediaGX
  6. Texas Instruments TMS9900

Which CPUs make up your list of Worst CPUs Ever Made?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by isj on Thursday April 23 2020, @04:04PM (9 children)

    by isj (5249) on Thursday April 23 2020, @04:04PM (#986064) Homepage

    They all taught us some lessons, and some of them drove compiler development.

    The Itanium had some interesting features:

    • instructions were bundled in groups of three and predicated (per-instruction bits select execution or cancellation)
    • 128 registers
    • register windows, explicitly controlled
    • branch registers which could be loaded ahead of time, giving the CPU hints about where it should prefetch code from

    Intel underestimated the complexity of implementing such a beast, and also underestimated the time required to make compilers produce good code. But the idea of predicated bundles of instructions is pretty neat. It also drove the invention of a better calling convention (ever heard about the Itanuim ABI?). If the shift to 64-bit had been done more gradually, then we might have been stuck with a sub-optimal calling convention now.

    Starting Score:    1  point
    Moderation   +3  
       Insightful=1, Interesting=1, Informative=1, Total=3
    Extra 'Interesting' Modifier   0  
    Karma-Bonus Modifier   +1  

    Total Score:   5  
  • (Score: 5, Interesting) by KilroySmith on Thursday April 23 2020, @04:46PM (3 children)

    by KilroySmith (2113) on Thursday April 23 2020, @04:46PM (#986099)

    The concept of the Itanium was great; still is. I never used one, so I don't know about the quality of the hardware implementation.

    The Itanium's downfall is the piss-poor state of software engineering. They couldn't write a compiler that demonstrated the Itanium working faster than a contemporary x86, so potential customers couldn't be bothered with the new architecture.

    In a modern x86, a significant amount of die space is devoted to solving parallelism and superscalar problems. The intention of the Itanium was to solve those problems at compile time, freeing up silicon area for other functions - perhaps more cores, or more cache, or more features (AVX-512, anyone?). It seems like a simple problem - after all, the hardware designers do a reasonable job of it in current x86 architectures - but for "reasons" software was never able to deliver the same level of parallelism.

    • (Score: 2) by isj on Thursday April 23 2020, @05:03PM

      by isj (5249) on Thursday April 23 2020, @05:03PM (#986110) Homepage

      I've used Itaniums at my work. They performed fine for our workloads.

      Yes, modern x86 CPUs use a lot of die space untangling the mess of register re-use and OoO execution. The "belt" architecture from Mill Computing tries to avoid that - it is an interesting architecture. I hope they succeed so we have more fun architectures to play with instead of micro-improvements of x86.

    • (Score: 4, Insightful) by sjames on Thursday April 23 2020, @08:02PM

      by sjames (2882) on Thursday April 23 2020, @08:02PM (#986186) Journal

      Part of that was Intel not wanting to share enough information for anyone but Intel to make a decent compiler. I say only part because Intel wasn't able to make their compiler produce fast code for Itanic either.

      Adding insult to injury, Itanic was eye-wateringly expensive (about $10,000 each IIRC) but didn't have the performance to back it up.

      I wouldn't blame it all on the software guys: unlike the hardware, the compiler didn't have the advantage of seeing which branches had already been taken by the actual code with the actual data. To get that information, the compiler would need an emulator and a representative input dataset.

    • (Score: 2) by epitaxial on Friday April 24 2020, @02:05AM

      by epitaxial (3165) on Friday April 24 2020, @02:05AM (#986349)

      I always wanted an Itanium box but they are still expensive on eBay. The cheapest would be the HP rx2660 or similar. Run the latest version of OpenVMS!

  • (Score: 2) by Bot on Thursday April 23 2020, @07:44PM (2 children)

    by Bot (3902) on Thursday April 23 2020, @07:44PM (#986179) Journal

    >the Itanuim ABI?

    >Itanuim

    I guess that such an ABI has problems with maintaining endianness :D

    --
    Account abandoned.
    • (Score: 3, Interesting) by isj on Thursday April 23 2020, @07:47PM (1 child)

      by isj (5249) on Thursday April 23 2020, @07:47PM (#986181) Homepage

      Close enough.

      BTW: the Itanium was bi-endian. big-endian when running HP-UX and little-endian when running Linux.

      • (Score: 2) by Bot on Friday April 24 2020, @03:52PM

        by Bot (3902) on Friday April 24 2020, @03:52PM (#986516) Journal

        Yeh I know one motherboard who went to bed with a Bi curious. Horrible experience.

        --
        Account abandoned.
  • (Score: 3, Informative) by TheRaven on Monday April 27 2020, @08:57AM (1 child)

    by TheRaven (270) on Monday April 27 2020, @08:57AM (#987483) Journal

    It also drove the invention of a better calling convention (ever heard about the Itanuim ABI?).

    When people who aren't historians talk about the Itanium ABI, they typically mean the Itanium C++ ABI. This had absolutely nothing to do with calling conventions. On Itanium, you could not implement setjmp and longjmp the way that most architectures do it. As a result, HP defined a standard for stack unwinding using DWARF unwind tables and a layered set of APIs that provided low-level abstractions and language-specific parts. This was essential for C++ exceptions to work on Itanium. As a side effect, they also specified a load of other bits of C++ (vtable layouts, class layouts) and an interface for all of the dynamic behaviours. This specification has outlasted Itanium and remains the de-facto standard ABI for C++ everywhere except Windows (and AArch32, which has a few [documented] tweaks to the Itanium ABI). There are three widely used implementations of the run-time library (libsupc++, libcxxrt, libc++abi) and they can all support multiple compilers and standard-library implementations as a result of this clean layering in the design.

    Most of this has absolutely nothing to do with Itanium though, it was just the first time anyone had properly specified a C++ ABI and so it was the one that everyone adopted. Until that point, every C++ compiler invented its own ABI and didn't document it publicly, so interoperability was very hard. This ABI was documented and GCC supported it, so any other compiler that implemented it could avoid the effort of designing its own and could guarantee C++ interop with at least one other compiler. Once a few compilers supported it (GCC, XLC, ICC, and so on) the incentives were strongly aligned with supporting the standard.

    --
    sudo mod me up
    • (Score: 2) by isj on Monday April 27 2020, @04:19PM

      by isj (5249) on Monday April 27 2020, @04:19PM (#987554) Homepage

      I agree that it could have been a CPU other than Itanium that caused a common calling convention (+ other stuff) to be implemented. The SPARC CPU strangely didn't drive it, even though it has register windows, so longjmp/setjmp would have to be different there too.

      Just so people know the mess of calling conventions on x86, here is a list of the ones I have seen:

      • cdecl (arguments pushed last-to-first, caller clears stack)
      • pascal (arguments pushed first-to-last, callee clears stack)
      • stdcall (like pascal unless the function has variable number of arguments)
      • syscall (like cdecl but argument size/count is passed in AL)
      • fastcall (first two arguments are passed in registers)
      • fastthis ('this' pointer is passed in registers)
      • optlink (first 3 arguments passed in registers, rest last-to-first as cdecl, except for floating point, etc.)