
posted by martyb on Thursday April 23 2020, @12:33PM
from the Sorry-about-that-boss! dept.

Worst CPUs:

Today, we've decided to revisit some of the worst CPUs ever built. To make it on to this list, a CPU needed to be fundamentally broken, as opposed to simply being poorly positioned or slower than expected. The annals of history are already stuffed with mediocre products that didn't quite meet expectations but weren't truly bad.

Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a huge expense, the actual bug was tiny. It impacted no one who wasn't already doing scientific computing and the scale and scope of the problem in technical terms was never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium micro-architecture.
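
For the curious, the widely circulated test case at the time was 4195835 / 3145727. A quick check (a modern C++ sketch, not from the article) looks like this; a correct FPU prints essentially 0, while flawed Pentiums famously returned 256:

    // The classic FDIV spot-check: x - (x / y) * y should be (near-)zero for
    // these values; contemporary reports had flawed Pentiums returning 256.
    #include <cstdio>

    int main() {
        double x = 4195835.0, y = 3145727.0;
        std::printf("%.10g\n", x - (x / y) * y);   // ~0 on a correct FPU, 256 on a flawed one
        return 0;
    }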

We also include a few dishonourable mentions. These chips may not be the worst of the worst, but they ran into serious problems or failed to address key market segments. With that, here's our list of the worst CPUs ever made.

  1. Intel Itanium
  2. Intel Pentium 4 (Prescott)
  3. AMD Bulldozer
  4. Cyrix 6×86
  5. Cyrix MediaGX
  6. Texas Instruments TMS9900

Which CPUs make up your list of Worst CPUs Ever Made?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by lentilla on Thursday April 23 2020, @01:02PM (10 children)

    by lentilla (1770) on Thursday April 23 2020, @01:02PM (#985996)

    I'll vote for the Intel Pentium anyway - for exactly the reason mentioned in the summary - the FDIV bug [wikipedia.org] was handled in a monumentally terrible way. It was the Boeing 737 MAX of its day.

    Remember this is a company stuffed to the gills with some of the smartest engineers that could be found. The exact kind of people who understand that very little is perfect, and that issues can be corrected or worked around once known. This is the epitome of anti-ethical behaviour for an engineering firm.

    • (Score: -1, Flamebait) by Anonymous Coward on Thursday April 23 2020, @01:58PM (2 children)

      by Anonymous Coward on Thursday April 23 2020, @01:58PM (#986009)

      If someone were to mention that Intel is jewish and hence unworthy for humans, that message would be voted down.

      • (Score: 3, Insightful) by Anonymous Coward on Thursday April 23 2020, @02:04PM

        by Anonymous Coward on Thursday April 23 2020, @02:04PM (#986010)

        You neonazis are lame.

      • (Score: 1) by khallow on Thursday April 23 2020, @04:35PM

        by khallow (3766) Subscriber Badge on Thursday April 23 2020, @04:35PM (#986092) Journal
        Indeed. Why do you have a problem with that?
    • (Score: 4, Interesting) by epitaxial on Thursday April 23 2020, @03:46PM (2 children)

      by epitaxial (3165) on Thursday April 23 2020, @03:46PM (#986046)

      The error itself is pretty interesting. It's a typo in a lookup table in the microcode. So did someone have to transcribe the table by hand?

    • (Score: 2) by sjames on Thursday April 23 2020, @08:17PM

      by sjames (2882) on Thursday April 23 2020, @08:17PM (#986189) Journal

      I am Pentium of Borg, you will be approximated...

    • (Score: 2) by FatPhil on Thursday April 23 2020, @10:01PM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Thursday April 23 2020, @10:01PM (#986224) Homepage
      Not that terrible, and definitely not monumental: they brought in arguably the best x86 programmer in the world to fix the problem, which he swiftly did.

      OK, they tried to pretend it didn't exist until Nicely made it public, but when they realised that some action was necessary (who cares about spreadsheets? nobody's gonna see a 1-in-a-billion bug, and the result's not going to be far out anyway!) they did act quickly. Not just a workaround, but replacements too. And who wanted replacements? Almost nobody, honestly, even those wronged thought it wasn't worth the effort to correct.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 1, Informative) by Anonymous Coward on Thursday April 23 2020, @10:32PM

      by Anonymous Coward on Thursday April 23 2020, @10:32PM (#986236)

      Didn't that also suffer from the F00F bug? That could halt the entire computer with a single unprivileged instruction, requiring a hard reboot. Better hope you patched your multi-user systems, those with compilers, or those running untrusted code.
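
      For anyone curious what "a single unprivileged instruction" means here, the bug is named after its byte sequence. A minimal sketch (GCC/Clang inline-asm syntax assumed; do NOT run this on an unpatched original Pentium, where it hangs the machine - on anything modern it just raises SIGILL and kills the process):

        // The famous "F00F" bytes: LOCK CMPXCHG8B with a register operand,
        // an invalid encoding that locked up unpatched original Pentiums.
        // Modern CPUs/OSes simply deliver an illegal-instruction fault.
        int main() {
            __asm__ volatile (".byte 0xF0, 0x0F, 0xC7, 0xC8");
            return 0;
        }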

    • (Score: 2) by toddestan on Friday April 24 2020, @12:22AM

      by toddestan (4982) on Friday April 24 2020, @12:22AM (#986295)

      I'm surprised the original Socket 4 Pentium didn't make the list. Not only because of the FDIV issue which affected a lot of these early chips, but also because they ran hot. They were 5V chips throughout, and the current consumption was high. This led to a quick redesign, and in less than a year Socket 4 was abandoned for Socket 5, with one of the big changes being that the chips now ran at 3.3V with half the current consumption. The only chips ever released on Socket 4 were the Pentium 60 and the (somewhat rare) Pentium 66, leaving Socket 4 yet another short-lived Intel socket and a dead end in terms of upgradeability.

      A similar short-lived socket was the original socket for the Pentium 4, Socket 423. Launched with the original Pentium 4 1.4 GHz and saddled with Rambus memory, it was quickly abandoned for Socket 478, which could use SDRAM and later DDR memory. A 2.0 GHz Pentium 4 was the fastest CPU ever available for it. Intel later released a (rare) 1.3 GHz Pentium 4 for Socket 423 if you're really looking for a turkey. At least the Prescott used either Socket 478 or LGA775, both of which are relatively long-lived Intel sockets. LGA775 Pentium 4s can often be swapped for LGA775 Core 2 processors for a significant upgrade.

  • (Score: 5, Informative) by SomeGuy on Thursday April 23 2020, @01:04PM (5 children)

    by SomeGuy (5632) on Thursday April 23 2020, @01:04PM (#985997)

    They mention the TMS9900 as a "bad CPU" without mentioning the TI-99/4a, which probably had one of the worst architectures? Imagine running your video card over a parallel port - one that also doubles as your machine's main memory. Your BIOS/OS is all written in a slow interpreted language. And all your actual software is implemented in an interpreter that runs on top of the BIOS/OS interpreter. Want to expand it? Well, now you have a long line of bulky sidecars; if you bump them it is game over, and the smallest, most fragile one must be connected first in the chain!

    The thing is, all of these CPUs "worked" and did what they were designed to do. If these CPUs were so horrible, they would not have sold at all. They gripe about the 64k limit of the 9900, but when that was new, 64k was a LOT, and it even had an actual 16-bit data bus (as mentioned, the TI-99/4a crippled that - it was actually intended to use a different 8-bit CPU, which relates to why everything is interpreted). IBM actually didn't use the 8086 in the PC; they went with the 8088, which also had an 8-bit data bus. If they really wanted to dig up some bad CPU architectures, I'm sure there are plenty of long-forgotten, mostly unused, ~1970s implementations. Of course, those would be much more technical than saying "it sucked because it was slow, and did not meet marketing hype".

    • (Score: 5, Informative) by TrentDavey on Thursday April 23 2020, @04:40PM (1 child)

      by TrentDavey (1526) on Thursday April 23 2020, @04:40PM (#986095)

      This. When I worked at Trent University (late '80s) our electronics course lab dealt with hands-on computer interfacing and architecture via the Timex Sinclair. In a bench row of set-ups, if one person caused even a minor electrical spike/drop-out by something as innocuous as turning their power supply on, the others would lose their stored programs - stored as BASIC command tokens in a REM statement, no less. But I think that was an OK CPU. A Zilog something or other?
      Good times, good times indeed.

      • (Score: 0) by Anonymous Coward on Friday April 24 2020, @07:22AM

        by Anonymous Coward on Friday April 24 2020, @07:22AM (#986413)

        Zilog Z-80, IIRC. I learned how to program on one of those dinosaurs. Good times.

    • (Score: 5, Insightful) by driverless on Thursday April 23 2020, @04:44PM (2 children)

      by driverless (4770) on Thursday April 23 2020, @04:44PM (#986098)

      In fact, of their entire list I'd say only the 9900 was a really bad CPU, mostly because it was such a weird architecture and a royal pain to program. All of the rest were just "didn't really live up to expectations". If you want PITA CPUs, what about the i860, which was a giant minefield of exceptions and odd special cases; you needed to memorise a thousand-page manual just to use it effectively. Or the 1802, although that was an early-70s CPU so perhaps excusable, and at least with SOS it was rad-hard. And then there was the PIC, laughably marketed as a "RISC CPU" because calling it a crippled piece of shit would have impacted sales.

      But by far the worst CPU I've ever encountered, dredged up from the depths of hell to torment developers everywhere, is the MSP 430. If you've never had to work with one of those then count yourself lucky. The fact that this list doesn't even mention the 430 indicates it's mostly just a list of complaining about Intel and AMD.

      • (Score: 0) by Anonymous Coward on Friday April 24 2020, @07:32AM (1 child)

        by Anonymous Coward on Friday April 24 2020, @07:32AM (#986416)

        The PIC series SOCs are actual RISC processors as RISC was originally intended. I always thought they were fun to work with for small projects. In fairness, that was all done in assembler. Coding for them in C would be a pain due to the ultra-minimalist design.

        • (Score: 2) by driverless on Friday April 24 2020, @11:21PM

          by driverless (4770) on Friday April 24 2020, @11:21PM (#986742)

          But they were never designed using the RISC philosophy; they predate Patterson and Sequin's work by several years and any mainstream acceptance of RISC by at least a decade. They were no more RISC than the 6502, from about the same time, was; the PIC was a bare-bones, minimalist CPU that had the name "RISC" glued onto it for marketing purposes years later.

  • (Score: 3, Insightful) by drussell on Thursday April 23 2020, @01:05PM (7 children)

    by drussell (2678) on Thursday April 23 2020, @01:05PM (#985998) Journal

    The TMS9900 is actually a good processor.

    I guess they just think it's a "worst" CPU because it didn't see more widespread use?

    • (Score: 5, Informative) by RS3 on Thursday April 23 2020, @03:18PM (6 children)

      by RS3 (6367) on Thursday April 23 2020, @03:18PM (#986034)

      I know this is against the rules, but TFA is a good quick read. :)

      FTFA, the TMS9900 was a failure due to its 16-bit address bus. IBM chose the 8088 because of its 20-bit address bus. That is a big deal. RAM banking could be done, but that would have caused many programmers angst and lots of errors. Yeah, forget that.

      Also, there were no 16-bit peripherals. You could use 8-bit ones, but then you had an I/O bottleneck, and extra glue logic, so why not just use an 8-bit CPU.

      TI has always been an awesome company IMHO. TMS9900 was the first 16-bit microprocessor, so it's sad to see they didn't expand the address bus by the time IBM was working on the PC. Kudos to TI for the TMS320 series.

      I like that TMS9900 kept most registers in RAM. Greatly reduces context switching time. Not sure why some didn't like it.

      Here's a great article by one of the top TI people in the program:

      https://spectrum.ieee.org/tech-history/heroic-failures/the-inside-story-of-texas-instruments-biggest-blunder-the-tms9900-microprocessor [ieee.org]

      • (Score: 2, Funny) by shrewdsheep on Thursday April 23 2020, @03:53PM (1 child)

        by shrewdsheep (5215) on Thursday April 23 2020, @03:53PM (#986053)

        I know this is against the rules, but TFA is a good quick read. :)

        Please be considerate and keep your social distance like everyone else!

        • (Score: 4, Touché) by RS3 on Thursday April 23 2020, @04:16PM

          by RS3 (6367) on Thursday April 23 2020, @04:16PM (#986072)

          I was wearing protection!

      • (Score: 2) by maxwell demon on Thursday April 23 2020, @07:10PM (3 children)

        by maxwell demon (1608) on Thursday April 23 2020, @07:10PM (#986164) Journal

        Not sure why some didn't like it.

        My guess would be that the shorter context switching time was more than offset by longer time to do calculations. As far as I remember, main memory accesses have always been slower than CPU register accesses. Thus I would be surprised if the TMS9900 memory "registers" were not causing it to be much slower than chips with on-chip registers.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by RS3 on Friday April 24 2020, @12:27AM (2 children)

          by RS3 (6367) on Friday April 24 2020, @12:27AM (#986299)

          Absolutely agree. After posting, I did some more reading, looked at some pinouts, saw the 3 (???) clock phase inputs, and remembered how horribly slow memory access was in those days. To be fair, it's still many clock cycles, but we have DDR4, "quad pumped" bus, etc. I wonder if the TI engineers were hoping for faster RAM or something?

          Either way, it was a tradeoff between faster context switches versus faster register operations. We all know who won out!

          • (Score: 1, Insightful) by Anonymous Coward on Friday April 24 2020, @07:35AM (1 child)

            by Anonymous Coward on Friday April 24 2020, @07:35AM (#986418)

            More likely that their fab couldn't get the required transistor density to fit everything, so something had to give. CPUs that fit on a single chip were still a pretty new thing in those days.

            • (Score: 2) by RS3 on Friday April 24 2020, @03:08PM

              by RS3 (6367) on Friday April 24 2020, @03:08PM (#986496)

              Great point. And, especially in those days, chip yields (usable ICs) were low, and the bigger the die, statistically the fewer good ones you'll get. And they didn't have the PGA and 4-sided PLCC and BGA packages that they developed for the LSI in the 80s. So making bigger chips would have cost much more and nobody would have bought.

              Which brings up the memory that there were some multi-package microprocessors, and I forget which ones did that, but it was 2 or 3 chips, and they were not popular.

  • (Score: 3, Informative) by Anonymous Coward on Thursday April 23 2020, @01:15PM (10 children)

    by Anonymous Coward on Thursday April 23 2020, @01:15PM (#986000)

    That processor was great. A lot cheaper than Intel's processors and very fast integer performance. Sure the FPU was garbage and therefore performed poorly in Quake, but used in the appropriate application (eg. business desktops) it was excellent. I'd put the Pentium Pro above it on the list.

    • (Score: 2) by RS3 on Thursday April 23 2020, @03:21PM (3 children)

      by RS3 (6367) on Thursday April 23 2020, @03:21PM (#986036)

      I liked that they were very low power. But the Intel P3 was pretty low power too. Besides what you mentioned, they didn't generally work in socket 370 motherboards- you had to have an MB that accepted them. But otherwise I had no problems with them.

      • (Score: 3, Interesting) by shortscreen on Thursday April 23 2020, @06:00PM (2 children)

        by shortscreen (2252) on Thursday April 23 2020, @06:00PM (#986140) Journal

        6x86 was a socket 5/7 processor, not 370. The only real problem with it was that even after multiple die shrinks the clock speed didn't increase much. They managed to get from the original 150MHz up to 300MHz and that was it. AMD and Intel hit 600MHz and beyond and left Cyrix in the dust. The Joshua core that Cyrix had been working on at the time of the VIA buyout was also looking like a big, expensive die with lackluster frequencies.

        • (Score: 2) by RS3 on Friday April 24 2020, @12:53AM (1 child)

          by RS3 (6367) on Friday April 24 2020, @12:53AM (#986311)

          Thanks for the correction and info. Sorry, I guess I mixed the Cyrix memories into 1. I was remembering the most recent Cyrix III / Via C3s that were socket 370 and I had a couple of them, and remember buying special socket 370 MBs to run them. Ran very well, quite cool. Probably didn't need a fan.

          I still have at least 1 C3 1 GHz cpu. Other one must be in an MB somewhere in a box...

          I just dug into my CPU history museum box and found a 6x86MX, an MII, and 2 labeled IBM, because, IIRC, IBM was one of their fabs.

          • (Score: 1, Interesting) by Anonymous Coward on Friday April 24 2020, @06:24PM

            by Anonymous Coward on Friday April 24 2020, @06:24PM (#986624)

            And the actual Cyrix stuff got sold to someone else, because the AMRISC/RDC 3286 (there were a couple of different names for it depending on the mode) was a Cyrix 486SLC-133, which may or may not have been related to the 6x86 cores - which, I will note, had speculative execution 2+ years before Intel put out the Pentium Pro. In fact the biggest issue with the 6x86 was because they were microcoded speculative-execution chips. A sequence of 2-3 instructions could actually put the chip in an infinite loop because it would mask interrupts. This sequence was possible in userspace, so it could hang the machine if a workaround wasn't implemented. Apparently a page fault was enough, so using a 2-page interrupt table was a method of mitigation.

            The bigger issue with the 6x86 was the lack of a timestamp counter and a few other items needed for full Pentium compatibility, and the fact that the FPU in it was weaker than the Pentium FPU, because they had focused the transistor budget on integer processing; at the time it seemed like general-purpose CPUs focused on integer use were where it was at. Unfortunately the Pentium and Doom proved them wrong, followed shortly thereafter by the Pentium MMX, leading to the merger of the PPro and the P55C's MMX extensions into the Celeron/P2/Xeon lines. And the rest, as they say, is history.

    • (Score: 3, Insightful) by Revek on Thursday April 23 2020, @11:01PM (2 children)

      by Revek (5022) on Thursday April 23 2020, @11:01PM (#986253)

      They were trash, but not because they were bad processors. The main reason is that they blue-screened in Windows often, since Windows didn't have any optimization for the processor. It was already an Intel world as far as Microsoft was concerned.

      --
      This page was generated by a Swarm of Roaming Elephants
      • (Score: 3, Interesting) by Anonymous Coward on Friday April 24 2020, @07:41AM (1 child)

        by Anonymous Coward on Friday April 24 2020, @07:41AM (#986419)

        IIRC, it was worse than that. Around that time, if Windows detected a Cyrix chip it would turn off the cache to slow the machine to a crawl and then randomly throw fake blue-screens. That was one of the many charges brought against Microsoft during the big antitrust suit against them. They pulled a similar stunt with the Windows 3.x series if it was run on DR-DOS.

        • (Score: 2) by RS3 on Saturday April 25 2020, @01:37AM

          by RS3 (6367) on Saturday April 25 2020, @01:37AM (#986796)

          "fake blue-screens" - I actually laughed out loud.

          They did that with Novell client software. And cleverly the MS Novell-detecting-blue-screening code was encrypted. It's amazing the criminal activity that some people get away with.

          Question: how can you tell a fake one from a real one?

          As my dad would have said, I think it's a dubious distinction.

    • (Score: 2) by toddestan on Thursday April 23 2020, @11:56PM

      by toddestan (4982) on Thursday April 23 2020, @11:56PM (#986281)

      I had one of those. It was rated a 200+, so supposedly equivalent to, if not faster than, the (somewhat rare) Pentium 200. It actually ran at 150 MHz. For general desktop type stuff, it was fine. It was also okay with the internet as it was back in this CPU's day. However, the FPU was terrible. It could barely play an MP3 file. Well, you could play an MP3, but forget about doing much else with the PC while it was doing it. I also used a K6 233 quite a bit, and the K6 utterly destroyed the 6x86. The K6 could play an MP3 and pretty much still be idling. Ditto for games.

      It also wasn't very stable, though for the time it wasn't bad, considering I was running Windows 95 and most of the crashes weren't likely the hardware's fault. But nevertheless a big part of the problem was that it was a 150 MHz chip; unlike the Intel chips, which got there with a 60 MHz bus and a 2.5x multiplier, the 6x86 got it with a 75 MHz bus and a 2x multiplier. While this undoubtedly helped it with its 200+ rating, it hurt its stability quite a bit. I had a Socket 7 motherboard with an Intel VX chipset, which was only rated for 66 MHz (the fastest bus speed Intel ever supported on Socket 7), which meant the chipset was overclocked. The PCI bus ran at half the bus speed, or 37.5 MHz, whereas most any PCI card was expecting 33 MHz max, so all the PCI cards were being overclocked. And the EDO memory was almost certainly being run at a higher speed than it was supposed to. The 6x86 166+ ran at 133 MHz (66 MHz and 2x, same as the Pentium 133), so I would guess that chip might be more stable.

      The 6x86 might fare better in the later Super Socket 7 motherboards, which supported up to a 100 MHz bus speed and were backwards compatible with some Socket 7 processors, though those came out after the original 6x86.

      Despite its problems, I still don't know if I would call it bad. It was cheap, and it worked well enough for the time.

    • (Score: 3, Interesting) by epitaxial on Friday April 24 2020, @01:58AM

      by epitaxial (3165) on Friday April 24 2020, @01:58AM (#986344)

      The FPU wasn't even that bad in the benchmarks. https://liam-on-linux.livejournal.com/49259.html [livejournal.com]

      Quake used very cleverly optimised x86 code that interleaved FPU and integer instructions, as John Carmack had worked out that apart from instruction loading, which used the same registers, FPU and integer operations used different parts of the Pentium core and could effectively be overlapped. This nearly doubled the speed of FPU-intensive parts of the game's code.
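
      A toy sketch of that overlap trick (not Quake's actual code, just the idea of software-pipelining the divide): start the expensive floating-point divide for the next span, then do integer-only pixel work for the current span while the Pentium's FPU would still be grinding through that divide:

        // Conceptual only: the FDIV started for span s+1 overlaps the
        // integer-only inner loop that fills span s.
        #include <cstdint>
        #include <cstdio>

        int main() {
            const int SPANS = 4, SPAN_LEN = 16;
            float w[SPANS] = {2.0f, 3.0f, 5.0f, 7.0f};    // per-span perspective divisors
            uint8_t dst[SPANS * SPAN_LEN] = {};

            float inv_w_next = 1.0f / w[0];               // first divide up front
            for (int s = 0; s < SPANS; ++s) {
                float inv_w = inv_w_next;                 // result of the divide started earlier
                if (s + 1 < SPANS)
                    inv_w_next = 1.0f / w[s + 1];         // kick off the next FDIV now
                uint32_t step = (uint32_t)(inv_w * 65536.0f);   // fixed-point texture step
                for (int i = 0; i < SPAN_LEN; ++i)              // integer-only pixel loop
                    dst[s * SPAN_LEN + i] = (uint8_t)((i * step) >> 16);
            }
            std::printf("%u\n", dst[SPANS * SPAN_LEN - 1]);     // keep the result observable
            return 0;
        }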

       

    • (Score: 2) by TheRaven on Friday April 24 2020, @03:13PM

      by TheRaven (270) on Friday April 24 2020, @03:13PM (#986502) Journal
      I had a 6x86 and was very happy with it. The P-rating thing was really bad expectation management though. The P166+ that I had ran at 133MHz and outperformed a P133 at pretty much any workload I tried. It was cheaper than a P133. If Cyrix had sold it as a 133MHz part, it would have been great. I actually ran mine overclocked to 166MHz. I eventually replaced it with a K6-2.
      --
      sudo mod me up
  • (Score: 2) by engblom on Thursday April 23 2020, @01:33PM (12 children)

    by engblom (556) on Thursday April 23 2020, @01:33PM (#986004)

    The first generation of Ryzen Zen was really terrible. When running Linux they crashed more or less daily, making them totally worthless. This was fixed in Zen+ and newer.

    • (Score: 2) by bzipitidoo on Thursday April 23 2020, @02:06PM (7 children)

      by bzipitidoo (4388) on Thursday April 23 2020, @02:06PM (#986011) Journal

      I never had any luck with the AMD K6 and other AMD CPUs of that era. Just not stable. Seemed the chips themselves were okay, but I kept running into motherboards that would run a Pentium fine, but not an AMD K5. They were supposed to work with either, but they wouldn't work long with an AMD chip before hanging, while being rock steady with an Intel CPU.

      • (Score: 5, Interesting) by RS3 on Thursday April 23 2020, @03:26PM (2 children)

        by RS3 (6367) on Thursday April 23 2020, @03:26PM (#986038)

        I also had some stability problems with them, but I blamed Windoze. Turned out to be MB capacitors! Changed them when they finally started oozing and the cap plague became widely known. Rock-solid after that.

        Interestingly, that MB ran Linux fine, which is why I blamed Windows. (dual-boot machine of course). What was Windows doing that caused instability? I can't imagine MS would mess with RAM or other MB timings. Does anyone have any ideas on that?

        • (Score: 1, Insightful) by Anonymous Coward on Thursday April 23 2020, @09:00PM (1 child)

          by Anonymous Coward on Thursday April 23 2020, @09:00PM (#986207)

          Usage patterns, man. With bad caps, doing work under load and under low load can give different outcomes. So maybe peripherals were brought up in a different order, or maybe bootup stressed the bridges differently, or maybe some init or usage pattern was actually identical for both but had a stochastic failure chance, at which point "error handling in software"...

          • (Score: 2) by RS3 on Saturday April 25 2020, @01:42AM

            by RS3 (6367) on Saturday April 25 2020, @01:42AM (#986798)

            Yeah, maybe. But running Linux I ran X-windows, KDE, kstars, played videos including YouTube. Never a glitch.

            But Windows would freeze, reboot, blue-screen...

            Again, after new caps, Windows never flinched. No sw changes either. Just caps. I have to believe Windows messes with hardware stuff.

            Maybe Linux was loading a better CPU microcode?

      • (Score: 3, Interesting) by Anonymous Coward on Thursday April 23 2020, @04:28PM (3 children)

        by Anonymous Coward on Thursday April 23 2020, @04:28PM (#986083)

        I had a K6-2 and it was rock stable.
        It could not overclock like the Pentium could, and maybe that is why several people had problems trying to push it too hard, especially with weaker MBs.

        Whatever the CPU, most problems at that time were bad power supplies/MBs (i.e., stable-power problems) and bad Windows drivers (very common).

        • (Score: 2) by turgid on Thursday April 23 2020, @04:44PM

          by turgid (4318) Subscriber Badge on Thursday April 23 2020, @04:44PM (#986097) Journal

          I had two K6-2s and a K6-III and they were all rock solid. I did have trouble with one but that was bad RAM, not the CPU.

        • (Score: 2) by Booga1 on Thursday April 23 2020, @05:03PM

          by Booga1 (6333) on Thursday April 23 2020, @05:03PM (#986108)

          I also had a K6-2 and had no issues whatsoever. I did spring for a decent motherboard since I needed extra ports for Firewire and such.

        • (Score: 2) by TheRaven on Friday April 24 2020, @03:22PM

          by TheRaven (270) on Friday April 24 2020, @03:22PM (#986504) Journal

          I had a K6-2 and it crashed a lot. It turned out there was a hairline crack on the motherboard. When I replaced the motherboard it was very stable.

          These chips also didn't do any thermal throttling and a lot of them had poorly fitted heat sinks (especially bad thermal paste), which caused them to overheat.

          --
          sudo mod me up
    • (Score: 1) by zion-fueled on Thursday April 23 2020, @02:36PM (1 child)

      by zion-fueled (8646) on Thursday April 23 2020, @02:36PM (#986013)

      Was it? Because mine is running great. Then again, I got it this year, long after Linux support matured. Vega must be terrible too, because I read about its Linux trouble recently.

      • (Score: 0) by Anonymous Coward on Thursday April 23 2020, @07:53PM

        by Anonymous Coward on Thursday April 23 2020, @07:53PM (#986184)

        Did you read recently that Gilead Sciences (your home company and pride) has had its coronavirus drug fail in its first trial?

    • (Score: 2) by Freeman on Thursday April 23 2020, @02:53PM

      by Freeman (732) on Thursday April 23 2020, @02:53PM (#986021) Journal

      Ryzen Zen and/or my motherboard MSI B350 Tomahawk was buggy as all get out when I first got it. I picked up a Ryzen 7 1700 right at release date and I had all kinds of trouble with blue screens of death and other issues in general. After a few BIOS updates, everything settled down and it's been solid for years.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 1, Interesting) by Anonymous Coward on Friday April 24 2020, @06:41PM

      by Anonymous Coward on Friday April 24 2020, @06:41PM (#986631)

      A masking error on the machine learning instruction scheduler. It was functionally flawed.

      The solution for motherboards that supported it was to disable that part, which netted a 10-20 percent performance loss (though many motherboards didn't include the option), or to send your CPU back to AMD, who would exchange it for one of the fixed models from after the mask error was detected. No Zen+ had that error, but neither did any later-revision Zen core. Only the original first or second masking runs had the flaw, which initially only showed up with GCC 8.3, which was how it was discovered. Apparently Windows and the Microsoft compiler had a different optimizing pattern that didn't trigger it at the time, as did earlier GCC releases.

  • (Score: 5, Informative) by bzipitidoo on Thursday April 23 2020, @01:37PM (6 children)

    by bzipitidoo (4388) on Thursday April 23 2020, @01:37PM (#986006) Journal

    I'd rate Intel's 386SX and 486SX as pretty bad. They were lobotomized versions of the 386DX and 486DX, for purposes of filling a perceived low-end market niche.

    With the 486DX, Intel finally got rid of another of their marketing ploys, the splitting out of the floating point math in order to sell that separately for a whole lot more money: the 8087 to go with the 8086, the 80287 to go with the 80286, etc. Only, almost no one bought them. You could get a little emulator software package, good enough to fool AutoCAD, which refused to run if there was no math coprocessor. Yeah, emulating an 8087 on an 8086 was about 1/50th the speed, but it saved you over $500. I remember there was a one-week period in which the price of the 80387 collapsed. It started the week at $600 (with the Cyrix 387 going for $500) and ended the week at $200, and they still weren't worth buying. Everyone knew the 486 was coming soon. Then what does Intel do? Try one more time to extort people for floating point math in hardware, by removing it from the 486DX and calling that a 486SX. You could add that back in by buying an 80387.

    The 386SX was a different cut. No 386 had floating point math. They all needed a 387 for that. Instead, one of the selling points of the 386 was full 32bit -- in the 386DX. The 386SX backtracked on that. Still had all the 32bit functionality, but the bus was only 16bit. The SX alternated between sending out the low and the high half of a word in order to squeeze 32bit addresses and memory fetches onto that 16bit bus. That's what the 386SX was, a 386DX running at half the speed whenever any use of the bus was needed.

    • (Score: 3, Interesting) by RS3 on Thursday April 23 2020, @04:22PM (2 children)

      by RS3 (6367) on Thursday April 23 2020, @04:22PM (#986079)

      My first '386 board came with a 287. It surprised me at the time- I thought it had to be a 387 to go with a 386, but evidently 287 is bus compatible.

      Q: was the 386SX pretty much bus compatible with 286 glue logic? IE, could you pretty much drop a 386SX into a 286 MB design?

      • (Score: 0) by Anonymous Coward on Friday April 24 2020, @06:53PM (1 child)

        by Anonymous Coward on Friday April 24 2020, @06:53PM (#986640)

        The entire point of the 386SX, unlike the later 486SX, was that it was bus-compatible with the 286, allowing the clone box manufacturers to reuse existing 286 chipset designs with the 386 processor.

        I know this because I have a 1989 Gateway 2000 386SX/16 sitting above me in storage. The motherboard got swapped out over the years, but as originally sold it had a 250W industrial power supply with a giant mechanical switch out the side. A full-height (10.5"?) 60-megabyte MFM hard disk with a paper bad-sector list in a pouch atop it, 4 megabytes of RAM on either 2x2MB or 4x1MB SIMMs (it had either 4 or 8 SIMM sockets, I forget which), a full-length MFM/serial/parallel I/O card, and an, I think, half-length card (it only reached to the end of the motherboard, not the full-length guides on the inner front face of the case). The video card was an ATI VGA/EGA combo card, maybe a Wonder? It had no audio onboard except the old-style PC speaker, which could do some really low-fidelity mono-channel modulated sound but usually was only used for different tones of beep. No sound card. I eventually got a Thunderboard (8-bit Sound Blaster clone), which netted me a game port, and later some secondhand ethernet cards for it. Managed to run Windows 3.11 and later WFW 3.11 on it. Played Space Quest III and then IV (barely - got a 486 around that time that most of that game was played on), all kinds of shareware games, my first internal 2400 baud modem, and a variety of other fun stuff after.

        The soldered on-board battery eventually died on it and after jumpering it to get the system to boot I think the RTC blew up. Either that or when I tried to swap the memory back I used the wrong speed simms. One of these days I will examine it and see if I can get it booting again. For the record that 250W power supply still runs good and hard, but I have had more compact cases to use in the years since. That original was about the footprint of a 26-32" TV and took up most of a wide desk or table. But it was heavy enough to handle a large monitor set atop it :)

        • (Score: 2) by RS3 on Saturday April 25 2020, @02:07AM

          by RS3 (6367) on Saturday April 25 2020, @02:07AM (#986802)

          Is the soldered-on battery one of those 3-cell shrink-wrapped ones? If so, it's likely a NiCad and might be leaking. If so, it's usually fairly easily cleaned. I use warm water, dish detergent, and an old toothbrush. Dry it well, of course. There may be a connector for an external battery. The RTC might be fairly easily replaced.

    • (Score: 2, Informative) by Anonymous Coward on Thursday April 23 2020, @04:25PM (1 child)

      by Anonymous Coward on Thursday April 23 2020, @04:25PM (#986080)

      by removing it from the 486DX and calling that a 486SX. You could add that back in by buying an 80387.

      Not quite... you could not use the 80387 with the 486SX... you had to buy an "80487SX" which was literally just a full-featured 486DX chip packaged with an extra pin to prevent it from fitting in the same sockets.

      When you added the 487SX, the original 486SX was simply disabled and all processing was done on the "co-processor".

      To be fair to Intel they almost certainly had a run of bad FPUs and this seems like a reasonably clever way to market the broken chips instead of scrapping them (just need to design the "upgradable" motherboard -- back in the early 90s this was probably not a huge ask because motherboards were fairly simple designs at the time). Obviously this proved to be fairly successful because they respun the 486SX several times (the later tapeouts would have the FPU removed instead of broken).

      • (Score: 2) by toddestan on Friday April 24 2020, @12:07AM

        by toddestan (4982) on Friday April 24 2020, @12:07AM (#986286)

        There was even a 486SX2, which was exactly what it sounded like - a 486DX2 with the FPU removed. I'm not sure if they were actually 486DX2s with a bad/disabled FPU, or if they never had an FPU from the start. They were pretty rare, and I only remember seeing them in low-end OEM systems like Packard Bell, possibly suggesting that they were actually 486DX2s with bad FPUs, and their rarity coming from Intel getting better yields later in the 486's run. As far as I'm aware there was never a 486SX4.

    • (Score: 2) by TheRaven on Friday April 24 2020, @03:38PM

      by TheRaven (270) on Friday April 24 2020, @03:38PM (#986508) Journal

      They were lobotomized versions of the 386DX and 486DX, for purposes of filling a perceived low-end market niche.

      The 386SX was lobotomised so that it could work with cheaper motherboards. It had a 16-bit external bus, significantly reducing the cost of motherboards and a bunch of other on-board peripherals.

      The 486SX was intended to help with yields. Intel's 486 yields were quite low. Any that had broken FPUs could be sold as 486SX chips, any with working FPUs but broken 486 cores could be sold as 487s. As the yields increased, 486SX and 487 parts ended up just having the other parts disabled. The later 487s were actually 486DXs that took over from the original 486SX entirely.

      --
      sudo mod me up
  • (Score: 2, Interesting) by agr on Thursday April 23 2020, @01:38PM (4 children)

    by agr (7134) on Thursday April 23 2020, @01:38PM (#986008)

    The CPU used a plugboard for some of its instructions.

    • (Score: 2) by epitaxial on Thursday April 23 2020, @03:49PM (3 children)

      by epitaxial (3165) on Thursday April 23 2020, @03:49PM (#986049)

      Well, it was 1956, in all fairness. Vacuum tubes were the current technology. The germanium transistor had only been invented 9 years previous.

      • (Score: 2) by RS3 on Thursday April 23 2020, @04:29PM

        by RS3 (6367) on Thursday April 23 2020, @04:29PM (#986085)

        Was that when they first used the epitaxial process? ;-)

      • (Score: 1) by agr on Thursday April 23 2020, @05:57PM (1 child)

        by agr (7134) on Thursday April 23 2020, @05:57PM (#986139)

        There were other vacuum tube machines of the same era, such as the IBM 650, that had a decent design.

        • (Score: 2) by RS3 on Friday April 24 2020, @01:01AM

          by RS3 (6367) on Friday April 24 2020, @01:01AM (#986315)

          Kept the room nice and warm too.

  • (Score: 0) by Anonymous Coward on Thursday April 23 2020, @03:52PM (3 children)

    by Anonymous Coward on Thursday April 23 2020, @03:52PM (#986052)

    I remember when Celerons hit 300MHz and the PII was unaffordable, some small computer companies released PCs with a 300MHz Pentium I. I had a few of these in my hands; they had a copperish laminate as a heat spreader. These CPUs required a large heatsink, specific higher voltages, and a mainboard that allowed bumping the FSB.
    What was the problem? They were totally unstable. It was impossible to perform scientific computations which stressed the CPU and RAM for more than a few hours, because the machine just froze and a hard reset was required. And it was not much faster than the 233MHz unit.
    I don't see this CPU in Intel's listings, so I think it had to be something else branded as Intel. Did anyone else have this CPU?

    • (Score: 0) by Anonymous Coward on Friday April 24 2020, @01:14AM (2 children)

      by Anonymous Coward on Friday April 24 2020, @01:14AM (#986323)

      I'd forgotten about them. The Celerons were one of the dumbest ideas ever and only existed so that Intel could extract as much money as possible from their overpriced chips. Truly horrible.

      • (Score: 2) by epitaxial on Friday April 24 2020, @02:03AM

        by epitaxial (3165) on Friday April 24 2020, @02:03AM (#986345)

        Are you high or something? Celerons were cheap and could be overclocked easily without even tweaking voltages. I had an Abit motherboard that allowed dual Celerons when Intel claimed you couldn't do that. They were rated for 333MHz, but you bumped up the FSB to 100MHz and they ran solid at 500MHz. I had that box running for years.

      • (Score: 4, Interesting) by TheRaven on Friday April 24 2020, @03:57PM

        by TheRaven (270) on Friday April 24 2020, @03:57PM (#986520) Journal
        The PII Celerons were pretty good. They had smaller L2 caches than the PIIs, but the caches ran at the same speed as the CPU, whereas the PII caches were half the speed. For some things, this was actually faster.

        For extra fun, the 300MHz PII Celerons ran with a 66MHz FSB and a fixed clock multiplier. With some modest improvements to the cooling, they ran at 450MHz with a 100MHz FSB. The BP6 [wikipedia.org] motherboard let you run two in an SMP configuration. A dual-processor 450MHz PII Celeron (overclocked from 300MHz) was £300 for the board and processors, which was incredibly cheap (cheaper than a single PII, for just the CPU).

        --
        sudo mod me up
  • (Score: 5, Interesting) by isj on Thursday April 23 2020, @04:04PM (9 children)

    by isj (5249) on Thursday April 23 2020, @04:04PM (#986064) Homepage

    They all taught us some lessons, and some of them drove compiler development.

    The Itanium had some interesting features:

    • instructions were bundled in threes and predicated (bits to select execution or cancellation)
    • 128 registers
    • register windows, explicitly controlled
    • branch registers which could be loaded ahead of time, giving the CPU hints about where it should prefetch code from

    Intel underestimated the complexity of implementing such a beast, and also underestimated the time required to make compilers produce good code. But the idea of predicated bundles of instructions is pretty neat. It also drove the invention of a better calling convention (ever heard about the Itanuim ABI?). If the shift to 64-bit had been done more gradually then we might have been stuck with a sub-optimal calling convention now.
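
    For readers who haven't met predication: a rough illustration (plain C++, my sketch, not IA-64 code) of the if-conversion it enables - compute both candidate results and let a predicate pick one, instead of branching:

      // On IA-64 the compiler would guard each instruction with a predicate
      // register; here a plain conditional select stands in for that idea.
      #include <cstdio>

      int clamp_branchy(int x, int lo) {
          if (x < lo)            // a branch the hardware has to predict
              x = lo;
          return x;
      }

      int clamp_predicated(int x, int lo) {
          bool p = (x < lo);     // the "predicate"
          return p ? lo : x;     // both values already available; p selects
      }

      int main() {
          std::printf("%d %d\n", clamp_branchy(-3, 0), clamp_predicated(-3, 0));   // 0 0
          return 0;
      }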

    • (Score: 5, Interesting) by KilroySmith on Thursday April 23 2020, @04:46PM (3 children)

      by KilroySmith (2113) on Thursday April 23 2020, @04:46PM (#986099)

      The concept of the Itanium was great; still is. I never used one, so I don't know about the quality of the hardware implementation.

      The Itanium's downfall is the piss-poor state of software engineering. They couldn't write a compiler that demonstrated the Itanium working faster than a contemporary x86, so potential customers couldn't be bothered with the new architecture.

      In a modern x86, a significant amount of die space is devoted to solving parallelism and superscalar problems. The intention of the Itanium was to solve those problems at compile time, freeing up silicon area for other functions - perhaps more cores, or more cache, or more features (AVX-512, anyone?). It seems like a simple problem - after all, the hardware designers do a reasonable job of it in current x86 architectures - but for "reasons" software was never able to deliver the same level of parallelism.

      • (Score: 2) by isj on Thursday April 23 2020, @05:03PM

        by isj (5249) on Thursday April 23 2020, @05:03PM (#986110) Homepage

        I've used Itaniums at my work. They performed fine for our workloads.

        Yes, modern x86 CPUs use a lot of die space untangling the mess of register re-use and OoO execution. The "belt" architecture from Mill Computing tries to avoid that - it is an interesting architecture. I hope they succeed so we have more fun architectures to play with instead of micro-improvements of x86.

      • (Score: 4, Insightful) by sjames on Thursday April 23 2020, @08:02PM

        by sjames (2882) on Thursday April 23 2020, @08:02PM (#986186) Journal

        Part of that was Intel not wanting to share enough information for anyone but Intel to make a decent compiler. I say only part because Intel wasn't able to make their compiler produce fast code for Itanic either.

        Adding insult to injury, Itanic was eye-wateringly expensive (about $10,000 each IIRC) but didn't have the performance to back it up.

      I wouldn't blame it all on the software guys; unlike the hardware, the compiler didn't have the advantage of seeing which branches had already been taken by the actual code with the actual data. To get that information, the compiler would need an emulator and a representative input dataset.

      • (Score: 2) by epitaxial on Friday April 24 2020, @02:05AM

        by epitaxial (3165) on Friday April 24 2020, @02:05AM (#986349)

        I always wanted an Itanium box but they are still expensive on eBay. The cheapest would be the HP rx2660 or similar. Run the latest version of OpenVMS!

    • (Score: 2) by Bot on Thursday April 23 2020, @07:44PM (2 children)

      by Bot (3902) on Thursday April 23 2020, @07:44PM (#986179) Journal

      >the Itanuim ABI?

      >Itanuim

      I guess that such an ABI has problems with maintaining endianness :D

      --
      Account abandoned.
      • (Score: 3, Interesting) by isj on Thursday April 23 2020, @07:47PM (1 child)

        by isj (5249) on Thursday April 23 2020, @07:47PM (#986181) Homepage

        Close enough.

        BTW: the Itanium was bi-endian. big-endian when running HP-UX and little-endian when running Linux.

        • (Score: 2) by Bot on Friday April 24 2020, @03:52PM

          by Bot (3902) on Friday April 24 2020, @03:52PM (#986516) Journal

          Yeh I know one motherboard who went to bed with a Bi curious. Horrible experience.

          --
          Account abandoned.
    • (Score: 3, Informative) by TheRaven on Monday April 27 2020, @08:57AM (1 child)

      by TheRaven (270) on Monday April 27 2020, @08:57AM (#987483) Journal

      It also drove the invention of a better calling convention (ever heard about the Itanuim ABI?).

      When people who aren't historians talk about the Itanium ABI, they typically mean the Itanium C++ ABI. This had absolutely nothing to do with calling conventions. On Itanium, you could not implement setjmp and longjmp the way that most architectures do it. As a result, HP defined a standard for stack unwinding using DWARF unwind tables and a layered set of APIs that provided low-level abstractions and language-specific parts. This was essential for C++ exceptions to work on Itanium. As a side effect, they also specified a load of other bits of C++ (vtable layouts, class layouts) and an interface for all of the dynamic behaviours. This specification has outlasted Itanium and remains the de-facto standard ABI for C++ everywhere except Windows (and AArch32, which has a few [documented] tweaks to the Itanium ABI). There are three widely used implementations of the run-time library (libsupc++, libcxxrt, libc++abi) and they can all support multiple compilers and standard-library implementations as a result of this clean layering in the design.

      Most of this has absolutely nothing to do with Itanium, though; it was just the first time anyone had properly specified a C++ ABI, and so it was the one that everyone adopted. Until that point, every C++ compiler invented its own ABI and didn't document it publicly, so interoperability was very hard. This ABI was documented and GCC supported it, so any other compiler that implemented it could avoid the effort of designing its own and could guarantee C++ interop with at least one other compiler. Once a few compilers supported it (GCC, XLC, ICC, and so on) the incentives were strongly aligned with supporting the standard.
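
      A small way to see that ABI surfacing in everyday tooling (sketch assuming GCC or Clang with libstdc++/libc++abi): typeid() hands back the ABI's mangled encoding, and the ABI's own __cxa_demangle entry point decodes it again:

        // The Itanium C++ ABI specifies both the mangled-name encoding that
        // typeid().name() returns on GCC/Clang and the __cxa_demangle()
        // routine in the ABI runtime that turns it back into source form.
        #include <cxxabi.h>
        #include <cstdio>
        #include <cstdlib>
        #include <typeinfo>
        #include <vector>

        int main() {
            const char *mangled = typeid(std::vector<int>).name();   // e.g. "St6vectorIiSaIiEE"
            int status = 0;
            char *human = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
            std::printf("%s -> %s\n", mangled, status == 0 ? human : "(demangle failed)");
            std::free(human);
            return 0;
        }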

      --
      sudo mod me up
      • (Score: 2) by isj on Monday April 27 2020, @04:19PM

        by isj (5249) on Monday April 27 2020, @04:19PM (#987554) Homepage

        I agree that it could have been another CPU than the Itanium that caused a common calling convention (+ other stuff) to be implemented. The SPARC CPU strangely didn't drive it. It has register windows, so longjmp/setjmp would have to be different there too.

        Just so people know the mess of calling conventions on x86, here is a list of the ones I have seen (a rough sketch of two of them in source form follows the list):

        • cdecl (arguments pushed last-to-first, caller clears stack)
        • pascal (arguments pushed first-to-last, callee clears stack)
        • stdcall (like pascal unless the function has variable number of arguments)
        • syscall (like cdecl but argument size/count is passed in AL)
        • fastcall (first two arguments are passed in registers)
        • fastthis ('this' pointer is passed in registers)
        • optlink (first 3 arguments passed in registers, rest last-to-first as cdecl, except for floating point, etc.)
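
        As promised above, a hedged sketch of two of these in source form, using the Win32-style keywords (on non-Windows targets the macros below simply vanish and the platform's single C convention applies regardless):

          // With cdecl the caller pops the arguments off the stack; with
          // stdcall the callee does - which is why mixing them up at a call
          // site corrupts the stack.
          #if defined(_MSC_VER) || defined(__MINGW32__)
          #  define CDECL   __cdecl
          #  define STDCALL __stdcall
          #else
          #  define CDECL
          #  define STDCALL
          #endif

          int CDECL   add_cdecl(int a, int b)   { return a + b; }   // caller cleans up
          int STDCALL add_stdcall(int a, int b) { return a + b; }   // callee cleans up

          int main() {
              return add_cdecl(2, 3) + add_stdcall(4, 5);   // 14
          }
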
  • (Score: 0) by Anonymous Coward on Thursday April 23 2020, @04:31PM (13 children)

    by Anonymous Coward on Thursday April 23 2020, @04:31PM (#986088)

    Intel had the 8086/8088 PC market, but the market wanted more memory space.

    So they built a new 80286 CPU with an MMU to provide more memory, but unfortunately in a manner incompatible with existing PC s/w.

    The 386 had a whole different, but compatible, MMU.

    • (Score: 2) by isj on Thursday April 23 2020, @04:52PM (2 children)

      by isj (5249) on Thursday April 23 2020, @04:52PM (#986102) Homepage

      Please elaborate.

      The 8086/8088/80186/80188 had 16-bit segment:offset addressing, and "all" programs assumed that the segments overlapped at 16-byte offsets. There was no way out of that without either breaking compatibility or virtualizing memory. The 80286 introduced protected mode, where the segments were treated as selectors and memory could be moved around, and a program could address a whopping 16MB. I don't think virtualizing the whole thing was feasible at that time (1982) due to complexity.

      • (Score: 1, Informative) by Anonymous Coward on Thursday April 23 2020, @08:36PM (1 child)

        by Anonymous Coward on Thursday April 23 2020, @08:36PM (#986200)

        >Please elaborate.

        At the time, most code needing more than 64k did arithmetic assuming the segment regs were worth 16x the offset regs (physical address = segment x 16 + offset). That gave 1M - minus what IBM took, leaving 640k.
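
        A tiny sketch of that real-mode arithmetic (the numbers are the usual PC examples, not from the comment): many different segment:offset pairs land on the same physical byte, which is exactly what that code relied on.

          // Real-mode 8086 addressing: physical = segment * 16 + offset,
          // giving a 1 MB space in which segment:offset pairs alias freely.
          #include <cstdint>
          #include <cstdio>

          static uint32_t phys(uint16_t seg, uint16_t off) {
              return (uint32_t)seg * 16u + off;
          }

          int main() {
              // Two different pairs, same physical address 0xB8000 (the PC's
              // colour text-mode video buffer).
              std::printf("%05X %05X\n", phys(0xB800, 0x0000), phys(0xB000, 0x8000));
              return 0;
          }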

        Intel decided that this was evil and that they 'owned' the architectural definition of the segment registers. They made the 286 with a 'beautiful' architecture which existing compiled code could not use. About the only folks happy about this were Motorola, with the 68k.

        I remember when Intel came by to try to sell their 286 in the early 80's. As soon as they described the MMU, we had a talk about compatibility and they acted as if they had never considered it as a necessity. They were all proud of their beautiful architecture for segments. They had spent extra gates making something that the PC market could not use. There were plenty of examples of MMU solutions in the minicomputer world, and they just blew it. They learned the importance of compatibility and seemed to be more careful in the 386 and even to now.

        The PC-AT and clones were about all that used the part. There were Unix variants that used the MMU, but getting back to DOS mode required a reset of the chip. (I think the keyboard controller might have been involved.) Really a sad story, driven by architectural arrogance causing them to ignore how their parts were being used.

        Worst ever is a really high bar. Again, I nominate the 286. There are plenty of examples of processor designs that turned out bad, but this one is special because it started out ok and had a big market share and then went bad on purpose and almost lost the share. A really special case.

        • (Score: 2) by TheRaven on Monday April 27 2020, @09:04AM

          by TheRaven (270) on Monday April 27 2020, @09:04AM (#987484) Journal
          The 286 was designed for compatibility. You could set up both the GDT and LDT to provide a contiguous 1MiB window into your 16MiB address space (each descriptor entry providing a 64KiB segment that was 16 bytes offset from the previous one). DOS .COM binaries were restricted to a single segment value and so could be relocated anywhere in the 16MiB address space (and protected). It wasn't the compatibility mode that killed the 286, it was largely a combination of two factors. The first was that the only way of returning from protected mode was to do a reset via the keyboard controller (which was *very* slow). The second was that for non-compatible uses the segment table was very clunky. The 386 cleaned this up by introducing VM86 mode, where you got a virtual 8086 machine, distinct from the 32-bit address space model. If anything, this was worse backwards compatibility because the 8086-compatible and 386-enhanced modes were completely separate worlds.
          --
          sudo mod me up
    • (Score: 3, Interesting) by Acabatag on Thursday April 23 2020, @05:08PM

      by Acabatag (2885) on Thursday April 23 2020, @05:08PM (#986112)

      There were proprietary Unix boxes that made proper use of the 80286.

      But the PC clone market barreled forward just using it as a bigger and faster 8088.

      I am disappointed in the dominance of x86 processors in the list in the article. What a boring subset of cpus.

      No mention of things like the Intersil/Harris 6100, a 12-bit processor that implemented the PDP-8 instruction set, and was done all in static CMOS so it could be clocked down to zero hertz for debugging if you wished.

      PC clones are dull.

    • (Score: 2) by RS3 on Thursday April 23 2020, @05:54PM (8 children)

      by RS3 (6367) on Thursday April 23 2020, @05:54PM (#986138)

      > So they built a new 80286 cpu with an mmu to provide more memory but unfortunately in a manner incompatible with existing PC s/w.

      I'm not following you. By "existing PC s/w" you mean stuff that ran in "real mode", pretty much on DOS, right? Because there was no MMU in the 86/88, therefore there was no MMU s/w in DOS or the apps.

      The 286 booted up in real mode, and as far as I know, and having used both for years, I never had any problems running any DOS software.

      What I remember of the 286 was that its MMU was clumsy, and frankly the IBM AT hardware didn't properly handle Intel's design concept. For instance, to go from protected mode back to real mode, the 286 needed its reset pin toggled. But that's just doing some internal CPU stuff and an unconditional jump to the reset vector (address). Main RAM and the rest of the system didn't need to be hard reset (power-up wiped). But IBM hard-wired it to be a "wipe everything, full system reboot". Admittedly Intel's concept was clunky, but it could have been made to work.

      IIRC, NMI was something similar - not handled by IBM the way Intel had intended. IBM used it for a RAM parity error flag, but Intel had some other kinds of things in mind. IIRC...

      And the 286 didn't handle nested exceptions well. But I didn't really get into that level of programming much. I had 1 system running Xenix 286 and it ran well.

      386 came out soon after and it was a whole new level of awesomeness, and we're still pretty much on that architecture, extended of course.

      • (Score: 5, Informative) by bzipitidoo on Thursday April 23 2020, @07:10PM (7 children)

        by bzipitidoo (4388) on Thursday April 23 2020, @07:10PM (#986163) Journal

        An appendix in an old computer architecture textbook I studied rips the x86 for being a horrible architecture. The x86 part is a fairly sane and common load and execute, execute and store design. Still too many instructions though. Not enough registers, and too much overly specific register functionality. For instance, integer multiplication and division operations all have to go through one register, AL (or AX or EAX, of which AL is the low 8 bits). It's a huge bottleneck for multiplication intensive work, having to shuffle everything in and out of just one register. Then, if you thought the later FDIV bug was bad, how about the embarrassment of the integer DIV operation being so slow that it was often faster to do division with several shifts and add/subtract operations?
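
        As an aside, here is what "division with several shifts and add/subtract operations" can look like for a fixed divisor - a sketch of the well-known shift-and-add divide-by-10 trick (mine, not from the textbook being cited):

          // Unsigned divide by 10 using only shifts, adds and a small fix-up,
          // the kind of sequence that could beat a slow DIV for a constant
          // divisor. 32-bit unsigned inputs only.
          #include <cstdint>
          #include <cstdio>

          static uint32_t divu10(uint32_t n) {
              uint32_t q = (n >> 1) + (n >> 2);   // successive refinements toward n * 0.8
              q += q >> 4;
              q += q >> 8;
              q += q >> 16;
              q >>= 3;                            // now q is n/10, possibly one too small
              uint32_t r = n - q * 10;            // remainder of the estimate (0..15)
              return q + (r > 9 ? 1 : 0);         // fix the off-by-one
          }

          int main() {
              std::printf("%u %u\n", divu10(12345u), divu10(4294967295u));   // 1234 429496729
              return 0;
          }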

        The instruction set has a lot of 1970s and 1960s cruft. Like, there are several instructions for working with packed decimal arithmetic. They thought it might be important that computers be able to handle base-10 arithmetic natively at the lowest levels. It's not. The 6502 also has some packed decimal arithmetic. Another very common type of instruction is general stack and call-stack specific stuff such as PUSH, POP, CALL, and RETURN. They're just not an efficient way to do subroutine calls. Lose the stack, and the CPU can run faster if it doesn't have to update a stack pointer a dozen times in succession. Like, if there are 2 PUSH instructions in a row, you'd really rather adjust the stack pointer once, instead of twice. But you can't.

        More x86-specific shortcut instructions that weren't good ideas were the string manipulation ones: CMPSB, MOVSB, SCASB, etc. Near worthless for string search. Lots of overhead in pointer updates that may be useless.

        The x87 math coprocessor was a real mess. They went with a stack architecture. You'd think, for the sake of consistency at least, they'd stick with a load and store design.

        Finally, the support for a multitasking, multiuser operating system was lacking. The 8088/86 didn't have anything at all, of course. As you say, the 286 had to be reset. The 386 was better, but still lacked an easy means of implementing semaphores: no atomic compare-and-exchange (CMPXCHG) kind of instruction. It could still be done, but it was a pain to implement, and slow. It's part of why the Linux kernel maintainers eventually dropped support for the 386. The 486 was the first of the x86 series with all the essential ingredients. Intel ought to have implemented such an instruction much earlier, certainly in the 386, if not the 286. But they weren't listening to OS designers.
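
        To illustrate, here's a minimal sketch (NASM syntax, made-up labels) of the kind of spin lock that could be built anyway, since on the 386 and later an XCHG against memory is automatically locked; CMPXCHG on the 486 and later is what you want for the fancier compare-and-swap style primitives:

            bits 32

            section .data
            lock_var:   dd 0            ; 0 = free, 1 = held

            section .text
            acquire:
                mov   eax, 1
            .spin:
                xchg  eax, [lock_var]   ; atomic swap; the old value lands in EAX
                test  eax, eax
                jnz   .spin             ; old value was 1, so someone else holds it
                ret

            release:
                mov   dword [lock_var], 0
                ret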

        Even today, virtualization is an issue. We should not need VirtualPC software.

        • (Score: 3, Interesting) by maxwell demon on Thursday April 23 2020, @07:36PM (3 children)

          by maxwell demon (1608) on Thursday April 23 2020, @07:36PM (#986174) Journal

          The reason for the BCD instructions in the 8086/8088 (as well as some other special instructions like LAHF/SAHF) is the goal of a straightforward mechanical translation of 8080 code into 8086 code. Therefore it was essential that all 8080 functionality was either directly available on the 8086 (such as with BCD arithmetic) or there were fixed instruction sequences to achieve it (the 8080 always pushed the 8-bit accumulator together with the flags; this could be emulated on the 8086 with the LAHF/PUSH AX and POP AX/SAHF sequences). Also AFAIK the segmented architecture was for this reason: that way you could use 16-bit addresses in your programs (just as on the 8080), despite being able to access more memory in total.
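
          As a concrete example of such a fixed sequence, here is a small sketch in NASM-style 16-bit notation (not taken from any actual translator):

              bits 16

              ; 8080:  PUSH PSW      ; pushes the accumulator together with the flags
                  lahf               ; 8086: copy the flags into AH
                  push ax            ; push AH:AL, i.e. flags plus accumulator

              ; 8080:  POP PSW       ; restores the accumulator and the flags
                  pop  ax
                  sahf               ; copy AH back into the flags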

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by RS3 on Friday April 24 2020, @01:09AM (2 children)

            by RS3 (6367) on Friday April 24 2020, @01:09AM (#986319)

            I was thinking the BCD stuff was also for very simple systems that had 7-segment displays. Maybe? No?

            • (Score: 2) by maxwell demon on Friday April 24 2020, @06:57AM (1 child)

              by maxwell demon (1608) on Friday April 24 2020, @06:57AM (#986412) Journal

              In the 8080, that may well be. In the 8086, the question wouldn't arise, since the compatibility restraints meant the support would have to be there anyway.

              Whether Intel expected the 8086 (or the 8088) to be used in such systems, I have no idea. I would have expected embedded systems of that time to still go with 8-bit CPUs.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by RS3 on Friday April 24 2020, @03:19PM

                by RS3 (6367) on Friday April 24 2020, @03:19PM (#986503)

                Yes, I have to agree, and the 8085 filled that market nicely. You probably know that on the 86/88, the address and data lines were multiplexed onto the same pins, so you needed several glue-logic chips (latches to demultiplex them) just to do anything with the micro. Maybe that was common then? Too lazy to look up other chip pinouts... Look how good we have it now with RAM and ROM and FLASH inside the micro.

        • (Score: 2) by RS3 on Friday April 24 2020, @03:40PM

          by RS3 (6367) on Friday April 24 2020, @03:40PM (#986511)

          Obviously all great points. Please remember, Intel had to sell these new chips to the microprocessor market. A CPU has to satisfy a lot of needs, including programmers. The programmers of the day did a lot of annoying BCD stuff, so having the instructions built in was super attractive. Same with string stuff. Even if it didn't get used a lot, it looked good being in there.

          Absolutely agree re: x87 interface. Really don't understand what the thinking was there. Speculating: it allows a variable-length parameter list. Maybe that was it?

          Re: push, pop - they're not just changing SP; they actually copy registers to RAM (or RAM to registers), and it takes more instructions and clock cycles to do that manually. Even more so with calls and returns.

          I'd have to research it, but iirc, especially in protected mode, many simple RAM operations take a huge number of clock cycles, and stack stuff is far fewer, so maybe that's part of the thinking there.

          There were surely better CPUs, but IBM had many other considerations, including business competition. Most of us wish they had gone with Motorola. For many maybe even conflicting reasons, IBM went with Intel. Gotta wonder what the world would be like if they had gone with something else, RISC, or ???

        • (Score: 2) by TheRaven on Monday April 27 2020, @09:15AM (1 child)

          by TheRaven (270) on Monday April 27 2020, @09:15AM (#987486) Journal

          Another very common type of instruction is general stack and call stack specific stuff such as PUSH, POP, CALL, and RETURN. They're just not an efficient way to do subroutine calls. Lose the stack, the CPU can run faster if it doesn't have to update a stack pointer a dozen times in succession. Like, if there are 2 PUSH instructions in a row, you'd really rather add 2 to the stack pointer once, instead of incrementing it twice. But you can't.

          That's *very* microarchitecture specific. Remember that the 8086 was not even really a pipelined processor. Up to a simple in-order pipeline, you can do each of those back-to-back and so the dense encoding for push and pop is a win. There was a period when they were slow because, on those pipelines, they were each read-modify-write instructions on the stack pointer, and that resulted in pipeline stalls. That period ended about 5-10 years ago, and modern x86 processors lazily update the stack pointer (in some microarchitectures, push and pop don't actually store at all in the common case; they just write to a register bank that is only flushed if you explicitly modify the stack pointer or if you write to some locked cache lines holding the top of the stack). Now, push and pop are more efficient than doing stores and then modifying %rsp again.

          In terms of encoding density, they're one or two bytes (one byte for the classic registers, two with a REX prefix), which is about the smallest you can make the store of a register. With a general-purpose store you need at least the stack pointer as the base register (you may special-case the stack pointer, but now you've defined a new set of opcodes and burned a lot of opcode space), the offset, and the register that you're storing, so getting that under two bytes is basically impossible. The only architecture that really manages it is AArch32, which has a 32-bit store-multiple instruction that takes a base register (4 bits), the increment direction (up or down, one bit), whether it's pre- or post-increment (one bit), and a 16-bit register list, and stores up to 16 registers in a single instruction. This is incredibly hard to implement efficiently on a modern microarchitecture.

          --
          sudo mod me up
          • (Score: 2) by bzipitidoo on Monday April 27 2020, @03:06PM

            by bzipitidoo (4388) on Monday April 27 2020, @03:06PM (#987543) Journal

            Lazy stack pointer updates? What changed to make that feasible? One reason for updating the stack pointer every time was in case an interrupt occurred. Interrupts complicate a lazy update policy, but at the current levels of billions of transistors, I guess it's no big deal any more.

            I've read that brag before, that x86 is an efficient instruction encoding that uses a low amount of space for instructions. No, it's not. Data compression can shrink an x86 binary significantly. The move from 32-bit to 64-bit made matters worse and drove the code density down. It's one reason why many Windows tablets with 64-bit hardware often come with 32-bit Windows 10. The difference in size between the 2 versions is about 2 GB, and that matters on a device that has only 32 GB of storage space. And you speak of burning up opcode space? The x86 architecture does that in spades. It's carrying around a lot of baggage. No doubt they would like to dump the decimal arithmetic instructions and reclaim that opcode space. It was so badly done, too. The 6502 has a flag: if it's set, an add instruction will treat the data value as a packed decimal number; if it's not set, it will treat the data value as a 2s complement binary number. Same opcode for both cases. The x86, in contrast, wastes opcodes on instructions to "adjust" the value after performing some math. There's an adjustment instruction for each of the four basic arithmetic operations.
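
            To make the contrast concrete, a tiny sketch (NASM-style 16-bit code, values picked arbitrarily):

                bits 16

                mov  al, 0x38      ; packed BCD "38"
                add  al, 0x45      ; a plain binary add gives 0x7D
                daa                ; decimal adjust: AL becomes 0x83, i.e. BCD "83"

            On a 6502 you would set the decimal flag (SED) once and the add (ADC) would produce the BCD result directly, with no separate adjust opcode.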

            And PUSH and POP are still bad ideas. Why? They're special-case MOV instructions, with some pointer manipulation that is often a waste of effort. So, why not just use a MOV instruction? The idea of "passing parameters on the stack" is awful if done with those instructions. PUSH a bunch of parameters, CALL the subroutine. Now the subroutine has to POP the return address that the CALL instruction just put on the stack, and save that somewhere, so it can then POP all the parameters. The same problem comes up with passing a return value on the stack: POP the return address, PUSH the return value, PUSH the return address back on top of the stack, then do a RETURN. To avoid that, the subroutine is instead designed to just work directly with the values below the top of the stack, so that the return address need not be disturbed, and that can't be done with the POP and PUSH instructions. A program that does a lot of recursion can exhaust the stack space. The amount of stack space implicit in that system is just too inflexible. You could be wasting a lot of memory on stack space that is never used if it's too much, or be forced to do some awkward stack-space extension if it's not enough.
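
            Concretely, that "work with the values below the top of the stack" approach is the usual BP frame; here's a minimal NASM-style 16-bit sketch with made-up names:

                bits 16

                caller:
                    mov  ax, 7
                    push ax               ; second argument
                    mov  ax, 3
                    push ax               ; first argument
                    call add_two
                    add  sp, 4            ; drop both arguments with one adjustment
                    ret                   ; the sum is in AX

                add_two:
                    push bp
                    mov  bp, sp
                    mov  ax, [bp+4]       ; first argument (above saved BP and return address)
                    add  ax, [bp+6]       ; second argument
                    pop  bp
                    ret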

  • (Score: 3, Insightful) by looorg on Thursday April 23 2020, @04:47PM

    by looorg (578) on Thursday April 23 2020, @04:47PM (#986100)

    Odd list, I wonder how they define "worst". The worst ones would be all the ones that got binned at the drawing board or didn't survive the internal testing phase. Most of these don't seem to have been bad CPUs at the time of release; they are more bad in hindsight, or rather they became bad in that they couldn't live up to the (marketing) hype or claims that had been made beforehand, or weren't as good as some users 40-ish years later think they should have been. Overall most of them appear to be "bad" due to the lack of an FPU, or because they were very different from what was currently available and considered to be the standard, and would have required a change of coder mindset/skills/compilers.

  • (Score: 3, Interesting) by Dr Spin on Thursday April 23 2020, @06:21PM

    by Dr Spin (5239) on Thursday April 23 2020, @06:21PM (#986147)

    I can't remember the manufacturer, but it was an early 8-bit MCU. I was using the NS COP400, which was new at the time (about 1979), but other people in the lab had to use this device.
    Horror features included:

    Executing the code with the memory addresses in Gray-code order to avoid ripple carry. (try debugging that - assembly only in those days).
    Output ports had separate addresses for pull-ups and pull-downs. Turning both on at the same time was the ever-popular "chip destruct" feature.
    The same op-code did different things on different memory pages.

    Most programmers had nervous breakdowns after a few months exposure to this device.

    The National Semi COP400 (4-bit) was a popular device at the time, and superior in every possible way.

    --
    Warning: Opening your mouth may invalidate your brain!
  • (Score: 1, Insightful) by Anonymous Coward on Thursday April 23 2020, @06:25PM (13 children)

    by Anonymous Coward on Thursday April 23 2020, @06:25PM (#986149)

    This is one of those lists written down to invite disagreement and gather clicks without offering any value.

    • (Score: 3, Insightful) by martyb on Thursday April 23 2020, @07:47PM (10 children)

      by martyb (76) Subscriber Badge on Thursday April 23 2020, @07:47PM (#986182) Journal

      This is one of those lists written down to invite disagreement and gather clicks without offering any value.

      Perhaps you found nothing of value. So be it.

      I started programming before there even were personal computers. I remember reading about the Altair in Popular Mechanics. The first computer I bought had a 6502 CPU and 4 KB of static RAM. At the time I was one of at most 5% of my classmates at a top engineering school who had their own computer.

      From this story and the comments posted here, I have learned of processor families I had never heard of before, learned why the TI 99/4A I played with at a store display was so dog slow, additional reasons why the i860 was poorly received... that's just off the top of my head.

      There was such a flurry of new processors and architectures in the early days of computing, it was just not possible for me to keep up with all the vagaries of all of them. This story filled in some gaps for me. I look forward to seeing what other comments may be posted here so I may learn even more.

      --
      Wit is intellect, dancing.
      • (Score: 0) by Anonymous Coward on Thursday April 23 2020, @09:23PM (6 children)

        by Anonymous Coward on Thursday April 23 2020, @09:23PM (#986210)

        Are... you serious? I have a similar history and... well, honestly now I'm going to take inventory and make sure I never sound like that.

        Anyway, whatever you got out of it, there's no escaping the fact that you published a low-effort clickbait article. I mean, tell me this wasn't a "look at our ads, discuss" job.

        • (Score: 3, Insightful) by martyb on Thursday April 23 2020, @11:53PM (5 children)

          by martyb (76) Subscriber Badge on Thursday April 23 2020, @11:53PM (#986280) Journal

          You apparently saw the story as being [less than] half-empty. I saw it (in conjunction with the comments here) as being half-full.

          Please feel free to submit a better story and I will be more than happy to push it out to the site.

          --
          Wit is intellect, dancing.
          • (Score: 2, Interesting) by anubi on Friday April 24 2020, @01:25AM (2 children)

            by anubi (2828) on Friday April 24 2020, @01:25AM (#986328) Journal

            Marty, I really appreciated you running this story. Like you, I learned a lot from other people's lived experience with things I had read about but had never been there or done myself.

            During the heyday, I designed a 68000-based CPU board to replace a TI9900 design that was having parts availability issues. I briefly (like on the order of seconds) considered an 80286, but there was no way I was going to saddle our programmer with programming it in assembler! Ours ran wire bonders in realtime... we had to know exactly what that machine was doing and exactly how long it took to do it. I was at the intersection of code, control systems, and inertial physics. Even the slightest variations in timing set up multiresonant chaos in the machinery that resulted in rejected product. It was either perfect, or it was not. And there was no way we were gonna ship anything less than perfect to our customer. A lot of small companies have that mindset.

            These days, I like to use many AVR chips (Arduino clones) running simultaneously, and Parallax Propellers, to do realtime control. And I still program in assembler. I still need very fine control over timing and interrupts once time-critical sequences are launched. I get so damned picky over timing that I even have all the processor cores clocked off the same physical crystal... to eliminate phasing artifacts and the resultant moire-type patterns they produce.

            I did not like that 286. I found it too damn awkward to program in assembler, and I hate keeping track of segment registers with a purple passion.

            --
            "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
            • (Score: 2) by martyb on Tuesday April 28 2020, @01:41PM (1 child)

              by martyb (76) Subscriber Badge on Tuesday April 28 2020, @01:41PM (#987807) Journal

              Glad you liked the story; thanks so much for the kind words!

              You seem to have much more "close to the metal" experience than I. Oh, I'd worked a bunch with assembler way back when, but I prefer coding on top of an OS with all the conveniences their abstractions provided.

              --
              Wit is intellect, dancing.
              • (Score: 1) by anubi on Wednesday April 29 2020, @09:34AM

                by anubi (2828) on Wednesday April 29 2020, @09:34AM (#988138) Journal

                Thanks! It's an honor and privilege to exchange war stories with others in the trenches.

                It takes all types. Building something like SoylentNews is, to me, a black art. I have my plate full with just microcontrollers. Now, those, I can play with until I know them and their interfacing to the real world end-to-end. But get beyond C++, and I'm quickly lost. The languages seem simple enough - but it's all those little details that have me wasting way too much time barking up the wrong tree.

                Thanks for all the work I've seen you putting into running these forums. For many of us, it's our main link to the other soldiers in other trenches. Fighting ignorance. Trying to build a solid public foundation to store our accumulated knowledge in.

                When all is said and done, what we keep is what we share...our humanity, love, art, and science. All else rots back to the oblivion from which it came.

                --
                "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 0) by Anonymous Coward on Friday April 24 2020, @04:03AM (1 child)

            by Anonymous Coward on Friday April 24 2020, @04:03AM (#986383)

            I spoke to the article, stating information *I* would want before clicking the link, and apparently I'm not alone. But your reply made me go, "hmm, this couldn't possibly be the poster of the link displaying his fragile ego, could it?" 'Til I looked at the username, then, with an "oh no" scrolled farther up to confirm my suspicions.

            Like dude, yeah, people are going to question your editorial decisions, indirectly or otherwise. Get over it and ffs don't condescend on some pointless basis, as though being over the hill is somehow pertinent, and even more surreal: unique. Seeing as I didn't bring your part into this, *you* did, answer yourself this: why would I want to interact with editorial staff who takes entirely impersonal things so personally? Again, it's not your performance that was in question so the invitation to compete with it just comes off as unwarranted hostility.

            Anyway, I'm going to go put some zeros in a file and you can say, "good riddance, glad I won't have to put up with that ass anymore," and everybody will be better off.

            • (Score: 2) by martyb on Tuesday April 28 2020, @12:21PM

              by martyb (76) Subscriber Badge on Tuesday April 28 2020, @12:21PM (#987793) Journal

              Did you, by chance, read the linked article? Each of the listed processors had a full write-up identifying the CPU's shortcomings and explaining the problems those caused.

              I see that the story summary just listed the CPUs. The original submitter neglected to provide an ellipsis to show that things were omitted. I cleaned up the list and converted the explicit enumeration provided into proper HTML: <ol>, <li>, etc. After, of course, confirming the CPUs listed were correct and in order. I saw that on first read, but somehow failed to add those in myself. Here is what the list more properly should have looked like:

              1. Intel Itanium [...]
              2. Intel Pentium 4 (Prescott) [...]
              3. AMD Bulldozer [...]
              4. Cyrix 6×86 [...]
              5. Cyrix MediaGX [...]
              6. Texas Instruments TMS9900 [...]

              It seems to be a tradition on this site to not read the linked article, and my omission of the ellipses certainly added no incentive to look further. That was my mistake; I apologize for the oversight.

              Oh, and the linked story also provided a list (with explanations) of "Honorable Mentions" — CPUs that were deemed "bad" but not to the same level as those listed here.

              --
              Wit is intellect, dancing.
      • (Score: 3, Funny) by RS3 on Friday April 24 2020, @01:27AM (2 children)

        by RS3 (6367) on Friday April 24 2020, @01:27AM (#986330)

        Hmm, you sound like a buyer for my Kim-1. :)

        • (Score: 2) by martyb on Tuesday April 28 2020, @12:02PM (1 child)

          by martyb (76) Subscriber Badge on Tuesday April 28 2020, @12:02PM (#987789) Journal

          Hmm, you sound like a buyer for my Kim-1. :)

          I remember when they came out. IIRC, they required additional external hardware to be of any practical use. From memory, it had a 6502 with 1KB of memory? Apparently, yes [wikipedia.org].

          I had the great fortune to have access (via dialup with an acoustic coupler and a genuine, yellow-roll-of-paper Teletype) to a DEC PDP-8 back in 1972 or so.

          The paucity of I/O available on the KIM-1 for the amount of money charged did not seem a fair exchange for me at the time. Finances forced me to wait.

          It was a few years later (and after I got to college) when I finally pulled the trigger and bought an OSI Challenger 4-P [wikipedia.org], picture [wikipedia.org]. It came complete with a keyboard, RF-adapter (color!) to connect to a TV for output, and a cassette interface for storing/loading programs. Powered by a 6502, it came with 4 KB of (static?) RAM.

          It cost me on the order of two months' work to save enough to purchase it -- if memory serves, about $250. That was back when minimum wage was less than $2.00 per hour! I got a lot of use out of that little computer until I became astounded at seeing Star Raiders [wikipedia.org] and purchased an Atari 800 [wikipedia.org].

          I had a classmate who had a Commodore PET [wikipedia.org], which helped incentivize me to get my own computer, but made me mindful to avoid a Chiclets keyboard, too.

          So, thanks, but no thanks. :-)

          --
          Wit is intellect, dancing.
          • (Score: 2) by RS3 on Tuesday April 28 2020, @04:50PM

            by RS3 (6367) on Tuesday April 28 2020, @04:50PM (#987892)

            Ah-ha! A "softie". My roots are hw eng., but I do sw and systems equally well too. I kind of inherited my Kim-1. It has a cassette interface for loading / storing programs, for which you might have had to solder up a couple of simple wires and 2 connectors.

            My understanding, from a hw perspective, is that the Kim-1 was made by MOS Technology to be a hardware development system- to facilitate hardware device / peripheral development more than software. Gotta have some hardware for the sw people to work with! People like Apple and Ohio Scientific would buy a Kim-1 and build around it- chips, whole systems, displays, inputs, peripheral controllers, whatever. Very much an open-source development concept of its day- everything well documented and development encouraged- again, more at the hardware level. To a hardware person, the Kim-1 was the Arduino of its day.

            IIRC, the IBM PC grew so fast because IBM made the machines with 5, then 8, open slots, fully documented the hardware (ISA bus) and BIOS, making it easy for 3rd parties to develop peripherals. And boy did they, and fast.

            The main philosophical difference between IBM's and MOS Tech.'s approaches was that IBM defined the bus. But remember, MOS Technology was a chip designer, not a full system designer/maker. The Kim-1 had 2 bus connectors, which was too big for lower-end systems, and the 6502 wasn't powerful enough to become the heart and soul of a medium-scale system like a VME bus. https://en.wikipedia.org/wiki/VMEbus [wikipedia.org] I don't know enough computer history to be sure, but I think an S-100 bus computer based on a 6502 might have been a fairly big deal for a while.

            Looking at the pics of the guts, the OSI Challenger looks like a rearranged Kim-1, which makes sense.

            Do you still have your OSI Challenger?

    • (Score: 2) by Bot on Thursday April 23 2020, @07:48PM

      by Bot (3902) on Thursday April 23 2020, @07:48PM (#986183) Journal

      I don't agree with your assessment, lemme check *clicky* *clicky*

      --
      Account abandoned.
    • (Score: 2) by RS3 on Friday April 24 2020, @01:19AM

      by RS3 (6367) on Friday April 24 2020, @01:19AM (#986325)

      This is one of those lists written down to invite disagreement and gather clicks without offering any value.

      This is one of those comments written down to invite disagr... well, it's pretty much between troll and flamebait.

      I've got an idea: instead of sharing your wisdom in this indelible Internet forum, why don't you go buy a hammer and a stone chisel and write your thoughts in cuneiform into some rocks somewhere? Preferably below the Antarctic Circle.
