Processors are built by multi-billion-dollar corporations using some of the most cutting-edge technologies known to man. But even with all their expertise, investment, and know-how, sometimes these CPU makers drop the ball. Some CPUs have just been poor performers for the money or their generation, while others easily overheated or drew too much power.
Some CPUs were so bad that they set their companies back generations, taking years to recover.
But years on from their release and the fallout, we no longer need to feel let down, disappointed, or ripped off by these lame-duck processors. We can enjoy them for the catastrophic failures they were, and hope the companies involved learned a valuable lesson.
Here are some of the worst CPUs ever made.
Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a considerable expense, the actual bug was tiny. It affected no one who wasn't already doing scientific computing, and, in technical terms, the scale and scope of the problem were never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
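For anyone curious what "tiny" looked like in practice, the widely circulated check at the time boiled down to one division with a known-bad divisor. A minimal sketch in C, harmless to run on any modern machine (note that an optimizing compiler may fold the math at build time, so treat this as an illustration rather than a real detector):

```c
#include <stdio.h>

int main(void)
{
    volatile double x = 4195835.0;   /* the famous numerator */
    volatile double y = 3145727.0;   /* the famous divisor   */
    double r = x - (x / y) * y;      /* ~0 on a correct FPU, ~256 on an affected Pentium */

    printf("residual = %f\n", r);
    printf("%s\n", r > 1.0 ? "FDIV bug present" : "FPU divides correctly");
    return 0;
}
```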
Intel Itanium
Intel's Itanium was a radical attempt to push hardware complexity into software optimizations. All the work to determine which instructions to execute in parallel was handled by the compiler before the CPU ran a byte of code.
Analysts predicted that Itanium would conquer the world. It didn't. Compilers were unable to extract necessary performance, and the chip was radically incompatible with everything that had come before it. Once expected to replace x86 entirely and change the world, Itanium limped along for years with a niche market and precious little else.
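To make the compiler-scheduling idea concrete, here's a toy sketch in C (illustrative only, not IA-64 code): the first loop is full of independent work a compiler can statically bundle into parallel issue slots, while the second is the pointer-chasing pattern whose latencies are unknowable at compile time - exactly the kind of code that left Itanium's wide execution resources idle.

```c
#include <stddef.h>

typedef struct node { struct node *next; int value; } node;

/* Plenty of compiler-visible parallelism: four independent dependence chains
 * that a static scheduler can interleave freely. */
long sum_independent(const int *a, size_t n)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

/* Almost none: every load depends on the previous one, and the compiler
 * cannot know the memory latencies or list layout in advance. */
long sum_linked(const node *p)
{
    long s = 0;
    while (p) {
        s += p->value;
        p = p->next;
    }
    return s;
}
```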
Itanium's failure was particularly egregious because it represented the death of Intel's entire 64-bit strategy (at the time). Intel had originally planned to move the entire market to IA64 rather than extend x86. AMD's x86-64 (AMD64) proved quite popular, partly because Intel had no luck bringing a competitive Itanium to market. Not many CPUs can claim to have failed so egregiously that they killed their manufacturers' plans for an entire instruction set.
Intel Pentium 4 (Prescott)
Prescott doubled down on the Pentium 4's already-long pipeline, extending it to 31 stages, while Intel simultaneously shrank the chip to a 90nm process. This was a mistake.
The new chip was crippled by pipeline stalls that even its new branch prediction unit couldn't prevent, and parasitic leakage drove high power consumption, preventing the chip from hitting the clocks it needed to be successful. Prescott and its dual-core sibling, Smithfield, are the weakest desktop products Intel ever fielded relative to its competition at the time. Intel set revenue records with the chip, but its reputation took a beating.
Running rather toasty would become a recurring issue for Intel in the years that followed, too.
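The pipeline problem itself is easy to demonstrate on any deeply pipelined CPU. Here is a generic microbenchmark sketch in C (nothing Prescott-specific, and the exact timings are machine-dependent): the branch below is taken essentially at random, so the predictor fails roughly half the time, and every miss throws away a pipeline's worth of in-flight work. The deeper the pipeline, the bigger that penalty; sorting the data first, or replacing the branch with arithmetic, makes the same loop run dramatically faster.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

int main(void)
{
    static int data[N];
    long long sum = 0;

    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;            /* unpredictable values */

    clock_t t0 = clock();
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)            /* ~50% taken: poison for a long pipeline */
                sum += data[i];
    clock_t t1 = clock();

    printf("sum=%lld time=%.2fs\n", sum, (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
```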
AMD Bulldozer
AMD's Bulldozer was supposed to steal a march on Intel by cleverly sharing certain chip capabilities to improve efficiency and reduce die size. AMD wanted a smaller core with higher clocks to offset any penalties from the shared design. What it got was a disaster.
Bulldozer couldn't hit its target clocks, drew too much power, and its performance was a fraction of what it needed to be. It's rare that a CPU is so bad that it nearly kills the company that invented it. Bulldozer nearly did. AMD did penance for Bulldozer by continuing to use it. Despite the core's flaws, it formed the backbone of AMD's CPU family for the next six years.
Fortunately, during the intervening years, AMD went back to the drawing board, and in 2017, Ryzen was born. And the rest is history.
Cyrix 6x86
Cyrix was one of the x86 manufacturers that didn't survive the late 1990s. (VIA now holds its x86 license.) Chips like the 6x86 were a major part of the reason why.
Cyrix has the dubious distinction of being the reason why some games and applications carry compatibility warnings. The 6x86 was significantly faster than Intel's Pentium in integer code, but its FPU was abysmal, and its chips weren't particularly stable when paired with Socket 7 motherboards. If you were a gamer in the late 1990s, you wanted an Intel CPU but could settle for AMD. The 6x86 was one of the terrible "everybody else" chips you didn't want in your Christmas stocking.
The 6x86 failed because it couldn't differentiate itself from Intel or AMD in a way that made sense or gave Cyrix an effective niche of its own. The company tried to develop a unique product and wound up earning itself a second place on this list instead.
Cyrix MediaGX
The Cyrix MediaGX was the first attempt to build an integrated SoC processor for the desktop, with graphics, CPU, PCI bus, and memory controller all on one die. Unfortunately, this happened in 1997, which means all those components were really terrible.
Motherboard compatibility was incredibly limited, the underlying CPU architecture (Cyrix 5x86) was equivalent to Intel's 80486, and the CPU couldn't connect to an off-die L2 cache (the only kind of L2 cache there was, back then). Chips like the Cyrix 6x86 could at least claim to compete with Intel in business applications. The MediaGX couldn't compete with a dead manatee.
The entry for the MediaGX on Wikipedia includes the sentence "Whether this processor belongs in the fourth or fifth generation of x86 processors can be considered a matter of debate." The 5th generation of x86 CPUs is the Pentium generation, while the 4th generation refers to 80486 CPUs. The MediaGX shipped in 1997 with a CPU core stuck somewhere between 1989 and 1992, at a time when people really did replace their PCs every 2-3 years if they wanted to stay on the cutting edge.
It also notes, "The graphics, sound, and PCI bus ran at the same speed as the processor clock also due to tight integration. This made the processor appear much slower than its actual rated speed." When your 486-class CPU is being choked by its own PCI bus, you know you've got a problem.
Texas Instruments TMS9900
The TMS9900 is a noteworthy failure for one enormous reason: When IBM was looking for a chip to power the original IBM PC, it had two basic choices to hit its own ship date: the TMS9900 and the Intel 8086/8088 (the Motorola 68K was under development but wasn't ready in time).
The TMS9900 only had 16 bits of address space, while the 8086 had 20. That made the difference between addressing 1MB of RAM and just 64KB. TI also neglected to develop a 16-bit peripheral chip, which left the CPU stuck with performance-crippling 8-bit peripherals. The TMS9900 also had no on-chip general purpose registers; all 16 of its 16-bit registers were stored in main memory. TI had trouble securing partners for second-sourcing and when IBM had to pick, it picked Intel.
Good choice.
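The address-space gap is simple arithmetic, spelled out here for anyone who wants to see it (nothing TI- or Intel-specific):

```c
#include <stdio.h>

int main(void)
{
    unsigned long tms9900 = 1UL << 16;   /* 16 address bits */
    unsigned long i8086   = 1UL << 20;   /* 20 address bits via segment:offset */

    printf("TMS9900: %lu bytes (%lu KB)\n", tms9900, tms9900 / 1024);
    printf("8086:    %lu bytes (%lu KB, i.e. 1 MB)\n", i8086, i8086 / 1024);
    return 0;
}
```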
Intel Core i9-14900K
It's rare to call a top chip of its generation a "bad" CPU, and rarer still to denigrate a company's fastest gaming CPU of the moment, but the Intel Core i9-14900K deserves its place on this list. Although it is fantastically fast in gaming and some productivity workloads, and can still compete with some of the best chips available at the end of 2025, it is a bad CPU for a range of key reasons.
For starters, it barely moved the needle. The 14900K is basically an overclocked 13900K (or 13900KS if we're considering special editions), which wasn't much different from the 12900K that came before it. The 14900K was the poster child for Intel's lack of innovation, which is saying a lot considering how long Intel languished on its 14nm node.
The 14900K also pulled way too much power and got exceptionally hot. I had to underclock it when reviewing it just to get it to stop thermal throttling—and that was on a 360mm AIO cooler, too.
The 14th generation was plagued by bugs and microcode issues, too, causing crashes and instability that required a steady stream of BIOS updates to try to fix.
The real problem was that the rest of the range made it redundant. The 14600K is almost as fast in gaming while being far cheaper, easier to cool, easier to overclock, and less prone to crashes. The rest of the lineup wasn't especially exciting, though the 14100 remains a stellar gaming CPU under $100 today.
The 14900K was the most stopgap of stopgap flagships: a capstone on years of Intel stagnation and, oddly, a pinnacle of performance at the same time. It's not as big a dud as the other chips on this list, but it did nothing to help Intel's modern reputation, and years later the company is still trying to course-correct.
Dishonorable Mention: Qualcomm Snapdragon 810
The Snapdragon 810 was Qualcomm's first attempt to build a big.LITTLE CPU and was based on TSMC's short-lived 20nm process. The SoC was easily Qualcomm's least-loved high-end chip in recent memory—Samsung skipped it altogether, and other companies ran into serious problems with the device.
Qualcomm claimed that the issues with the chip were caused by poor OEM power management, but whether the problem was related to TSMC's 20nm process, problems with Qualcomm's implementation, or OEM optimization, the result was the same: A hot-running chip that won precious few top-tier designs and is missed by no one.
Dishonorable Mention: IBM PowerPC G5
Apple's partnership with IBM on the PowerPC 970 (marketed by Apple as the G5) was supposed to be a turning point for the company. When it announced the first G5 products, Apple promised to launch a 3GHz chip within a year. But IBM failed to deliver components that could hit these clocks at reasonable power consumption, and the G5 was incapable of replacing the G4 in laptops due to high power draw.
Apple was forced to move to Intel and x86 in order to field competitive laptops and improve its desktop performance. The G5 wasn't a terrible CPU, but IBM wasn't able to evolve the chip to compete with Intel.
Ironically, years later it was Intel's own inability to compete with Arm designs that led Apple to build its own silicon in the M-series.
Dishonorable Mention: Pentium III 1.13GHz
The Coppermine Pentium III was a fine architecture. But during the race to 1GHz against AMD, Intel was desperate to maintain a performance lead, even as shipments of its high-end parts slipped further and further behind (at one point, AMD was estimated to have a 12:1 advantage over Intel when it came to actually shipping 1GHz systems).
In a final bid to regain the performance crown, Intel tried to push the 180nm Coppermine Pentium III up to 1.13GHz. It failed. The chips were fundamentally unstable, and Intel recalled the entire batch.
Dishonorable Mention: Cell Broadband Engine
We'll take some heat for this one, but we'd toss the Cell Broadband Engine on this pile as well. Cell is an excellent example of how a chip can be phenomenally good in theory, yet nearly impossible to leverage in practice.
Sony may have used it as the general processor for the PS3, but Cell was far better at multimedia and vector processing than it ever was at general-purpose workloads (its design dates to a time when Sony expected to handle both CPU and GPU workloads with the same processor architecture). It's quite difficult to multi-thread the CPU to take advantage of its SPEs (Synergistic Processing Elements), and it bears little resemblance to any other architecture.
It did end up as part of a linked-PS3 supercomputer built by the Department of Defense, which shows just how capable these chips could be. But that's hardly a daily-driver use case.
What's the Worst CPU Ever?
It's surprisingly difficult to pick an absolute worst CPU. All of the ones on this list were bad in their own way at that specific time. Some of them would have been amazing if they'd been released just a year earlier, or if other technologies had kept pace.
Some of them just failed to meet overinflated expectations (Itanium). Others nearly killed the company that built them (Bulldozer). Do we judge Prescott on its heat and performance (bad, in both cases) or on the revenue records Intel smashed with it?
Evaluated in the broadest possible meanings of "worst," I think one chip ultimately stands feet and ankles below the rest: the Cyrix MediaGX. Even then, it is impossible not to admire the forward-thinking ideas behind this CPU. Cyrix was the first company to build what we would now call an SoC, with PCI, audio, video, and RAM controller all on the same chip. More than 10 years before Intel or AMD would ship their own CPU+GPU configurations, Cyrix was out there, blazing a trail.
It's unfortunate that the trail led straight into what the locals affectionately call "Alligator Swamp."
Designed for the extreme budget market, the Cyrix MediaGX disappointed just about anyone who ever came in contact with it. Performance was poor: a Cyrix MediaGX 333 had 95% of the integer performance and 76% of the FPU performance of a Pentium 233 MMX, a CPU running at just 70% of its clock. The integrated graphics had no video memory at all, and there was no option to add an off-die L2 cache, either.
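Working backwards from those figures gives a rough per-clock picture (just arithmetic on the numbers above, nothing more): scale the MediaGX 333's relative scores by the clock ratio of the two chips.

```c
#include <stdio.h>

int main(void)
{
    double clock_ratio = 233.0 / 333.0;        /* Pentium 233 MMX clock vs MediaGX 333 clock (~70%) */
    double int_per_clock = 0.95 * clock_ratio; /* ~0.66 of a Pentium MMX's integer work per clock   */
    double fpu_per_clock = 0.76 * clock_ratio; /* ~0.53 of its FPU work per clock                   */

    printf("integer per clock: %.2f of a Pentium MMX\n", int_per_clock);
    printf("FPU per clock:     %.2f of a Pentium MMX\n", fpu_per_clock);
    return 0;
}
```

In other words, clock for clock it managed roughly two-thirds of the integer throughput and barely half the FPU throughput of the chip it was undercutting on price.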
If you found this under your tree, you cried. If you had to use this for work, you cried. If you needed to use a Cyrix MediaGX laptop to upload a program to sabotage the alien ship that was going to destroy all of humanity, you died.
All in all, not a great chip. Others were bad, sure, but none embody that quite like the Cyrix MediaGX.
You might not agree with these choices. If you do not, then tell us your own favourite "worst" CPU and why you think it deserves a mention. Has anybody got information on the worst CPUs that have been produced in Russia or China yet?
(Score: 4, Interesting) by Snotnose on Tuesday January 06, @12:12AM (2 children)
Don't remember what it was called, but it reads just like what I remember watching.
Interesting list. I disagree with some choices, but it's been 30 years now and my memory is fading.
Recent research has shown that 1 out of 3 Trump supporters is as stupid as the other 2.
(Score: 3, Informative) by turgid on Tuesday January 06, @08:18AM (1 child)
I don't remember the Cyrix 6x86 being that bad. Certainly, its floating point speed was less than the equivalent Intel CPU at the same clock rate but its integer performance was better. I could have upgraded my aging Intel Pentium 100 with a 166MHz Cyrix 6x86 but didn't. I used to be too suspicious of non-intel stuff. I really would have benefitted. I was running Linux exclusively by then and compiling lots of code. The extra integer performance and memory bandwidth would have been a big win.
Remember when AMD bought NexGen? Then they really took off. My next machine was a K6-2 400MHz. Again, the floating point wasn't great, but integer was spectacular. It also had 3DNow!, SIMD floating point which used the 387/MMX registers. I wrote some code for it. It was awesome. Plus it was very inexpensive compared with anything Intel was selling. A year later I bought the 500MHz version.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 3, Informative) by Unixnut on Tuesday January 06, @06:30PM
Seconded, I remember using Cyrix CPUs and not finding them "bad". They were perfectly usable for general purpose computing, and cheaper than the equivalent Intel. They were particularly popular in the budget "home PC" whitebox industry.
I also found they ran cooler than equivalent Intel CPUs, which allowed for fanless designs. They were my first entry into "quiet computing" because back then, computers were loud (even the 386/486 machines were loud, despite the CPUs not having fanned heatsinks most of the time).
I remember I originally built a HiFi "mp3 CD" player using the Cyrix 6x86 as a fanless design with an old Socket 7 motherboard, with an integer only version of mpg321 and lcdproc with keypad, and it performed well.
Indeed I was impressed enough with the CPUs that when VIA bought them out and integrated them into their Mini-ITX [wikipedia.org] boards I was an avid user of them, having fanless desktops with CF card-IDE adaptors with lightweight Linux distros like DSL [wikipedia.org] that could give you a usable desktop in 50MB of space.
To this day I still have an EPIA 5000 running (despite the EoL of 386 Linux), but once I retire that, it will be the last of the Cyrix-derived CPUs that I will have used.
Overall, I think Cyrix was not bad at all, tech wise at least.
(Score: 5, Informative) by Rich on Tuesday January 06, @02:19AM (15 children)
Is this an Intel marketing article?
The Core i9-14900K's flaws weren't that it had lackluster performance in comparison to its predecessor, it was that 13th and 14th gen Core CPUs would die within weeks to months because of electromigration issues on the silicon. It was that bad of a fuckup that Intel changed its generation naming after the incident to confuse customers.
Are we supposed to forget that over all the other minor issues mentioned in the article?
Also, when mentioning Intel, they shouldn't forget the iAPX432, which, on a project scale, was their biggest screwup.
The TMS9900 has a bad reputation because of the mindboggling pile of garbage TI put around it to build the 99/4. Otherwise, it was fairly decent, just banking on memory staying as fast as CPUs (like the 6502 also did). As far as "we believe in (cross)compilers" goes, the Transmeta Crusoe belongs on the list together with the Itanic.
(Score: 4, Funny) by driverless on Tuesday January 06, @08:35AM (11 children)
Gawd, kids these days. No mention of the 32016, i960, i432, 9900, 1802, and then the most awful "successful" CPUs, MSP430, PIC, ...
(Score: 2) by turgid on Tuesday January 06, @12:27PM (4 children)
The i960 wasn't that bad [cpushack.com], was it? Then there was the i860 [cpushack.com] for the crunching of numbers.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by driverless on Tuesday January 06, @12:38PM (3 children)
The i960 wasn't bad per se, in fact it was pretty nice compared to Intel's other major CPU at the time, but it was so difficult to program that it qualifies in the "bad" (as in PITA) category. And it was part of a long sequence of fail starting with the i432 going via BiiN (Billions invested in Nothing) through to 960/860. And then Intel repeated the mistakes from the i860 with the Itanium (VLIW design, compilers incapable of delivering the expected performance).
(Score: 3, Interesting) by turgid on Tuesday January 06, @01:23PM (2 children)
In those days, compilers were still pretty primitive and sometimes buggy. I seem to remember that when things like superscalar and out-of-order execution came along, it took compilers a while to catch up. VLIW is a completely different kettle of fish. It's apparently good for DSP, where you have very predictable workloads with many numerical operations that can be done in parallel. I seem to remember DSP chips mostly having hand-coded libraries of all the useful routines since the compiler technology just wasn't there.
Itanium was VLIW taken to another extreme. It was effectively a massive DSP chip and it was wrongly targeted at general-purpose computing. From reading Linus's rants on the subject, what I managed to glean was that there was no way any compiler was ever going to give the level of optimisation required on itanium to make it significantly faster than other, more conventional designs, even when they became good at packing the instructions into the VLIW bundles.
I think it all comes down to the non-determinism inherent in general purpose computing. It's related to the Halting Problem but a bit different. You can not predict in advance what state your CPU will be in because you don't know what external events will happen at any given time. The world is asynchronous, from interrupts to user input and choices. If you could predict perfectly in advance what values needed to be loaded, computed and stored then you could optimise the program perfectly (in theory).
You just can't do that at compile time. You can't predict the future. Other RISC processors had things like out-of-order execution, branch prediction, speculation, instruction re-ordering, super-scalar and all that in hardware which worked dynamically, at run time. The idea with itanium was to do away with that to give more room for registers, cache and execution units. It turned out to be the wrong decision.
The Transmeta Crusoe did those things in software internally, and it worked fairly well for a while but their hardware couldn't keep up. The HotSpot JVM could also do some clever JIT optimisations at run-time based on program state.
I spoke to a guy about itanium when it was under development. He told me it came from some dude's university project that he never got working there, nor at the next place he went, but then intel picked it up.
Maybe someone who knows more could fill us in?
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by driverless on Wednesday January 07, @02:47AM (1 child)
You mean John Hennessy and David Patterson? I think their work did become successful.
(Score: 2) by turgid on Wednesday January 07, @08:15AM
No, itanic wasn't their shipwreck.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by driverless on Tuesday January 06, @12:44PM
Forgot to add, the MediaGX was a long way from the worst processor. In particular it begat the Geode which was quite successful in the embedded x86 space.
(Score: 2) by sgleysti on Wednesday January 07, @12:46AM
8-bit PICs are really fun to program in assembly.
(Score: 2, Insightful) by anubi on Wednesday January 07, @10:41AM (3 children)
I had to write some gawdawful machine code for the 1802 (the RCA CMOS 8-bitter). My code was so crappy even I could not keep it straight.
I sure was happy about Rockwell picking up the 6502 and making the CMOS version, the 65C02. I remember being so impressed about being able to phaselock the 65C02 processor clock to a communication line, then achieve extremely elegant hardware and code for interchip communication, as well as the way they used the clock polarity to free the bus so I could design simple DMA access.
I ended up working for Rockwell because they made that chip, and I had great plans for what I wanted to do with it, but the line shut down almost as soon as I joined the company. Since the I/O was memory mapped, it was so easy to have as many processors as I wanted running in parallel, each running its own function (data acquisition, communication, and CSV block management), while the central CPU kept everyone fed and collected their contributions. For very little power and no fans. I wanted to build it for use in oil fields where we had harsh conditions and electrical power was sometimes very hard to get. Since we had cassette recorders, I wanted to build my processors to store oil well production data in blocks of .csv data records, timestamped, no less. The technology was just arriving that let me store data onto cassette tape and move it over a 300 baud GTE Lenkurt modem. I found I could write the data directly to the tape head and play it back through the phone, and the home office could retrieve the file and present it to their DEC machine.
I built and tested the pieces, but left Chevron before I ever built the machine. That first try was futile anyway. It was 8080-based. Way too complex. The 6502 was still a couple of years in the future. The 65C02 and 2K CMOS static RAM a couple of years more. We had an oil glut show up, Chevron didn't want to pursue my dream, and others were beginning to provide off-the-shelf solutions. Then, no sooner had I shown up than they shut down too. I wanted to gather a line of microcontroller "tinker toys" configurable for the specific application and curate means of interfacing to the physical world. What were expensive interfaces of the day often boiled down to a microprocessor, some analog parts, comparators, switches. Dual-slope ADC, VCO-based ADC, several kinds of DAC. Just about anything. Most of the hard work could be repurposed. Just have to make some custom interfaces. Kinda like a modern Arduino.
Technically, I knew what I had to do, but business wise, I could not meet schedule without cutting so many corners that what I could make under hire was basically useless. This is something I must do as a hobby. Under schedule and supervised, I am only prone to make junk. The same poorly-considered junk I retired from because I did not like to make that which I despise.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 3, Informative) by driverless on Wednesday January 07, @10:58AM (2 children)
For people not familiar with it, the 1802 was the first CMOS processor and came with a lot of tradeoffs. It had a pile of 16-bit registers but only an 8-bit accumulator, so 99% of your code ended up being the accumulator acting as a butler, moving data 8 bits at a time between the hi/lo portions of the 16-bit registers. The rest was spent on control-flow handling: branches that changed depending on how far away the target was, hand-assembling function calls and returns via the 16-bit registers (in turn hand-assembled in 8-bit pieces, did I mention that?), and in general spending more time managing the CPU's internals than actually doing things. It was more like a microprogram than what you'd expect from, say, a 6502.
(Score: 1) by anubi on Wednesday January 07, @11:51PM (1 child)
Glad to see someone else got to see it too.
The frustration I went through made me appreciate good hardware design so much more. It was also one of my first microprocessor experiences. That and the 8080. I wasn't impressed with the 8080's clock generator.
My interest was piqued by the static clock capabilities of the CMOS chips that let me save processor state indefinitely by stopping the clock. My specialty was designing stuff that ran on minimal power, as what I made was usually in places of environmental extremes, and my biggest problems were weight, power requirement, heat dissipation, and surviving environmental extremes.
It was very challenging until investors bought the company and brought in another processor I had no idea how to communicate with. Two completely different operating systems fine-tuned for two completely different things. Mine ran on the laws of physics and electron flows, the other ran on the laws of economics and cash flow.
Well, I had my day during the 60's thru the 80's. When we made things with physical parts. Today, it's too expensive to design custom stuff like we used to.
We use economies of scale to mass produce goods in the cheapest place, run them till they break, send them to the landfill, and buy the latest good (with borrowed money or by selling generational wealth). Often the later good is not as good as the old one was.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by driverless on Thursday January 08, @02:47AM
They also famously did an SOS (Silicon-on-Sapphire) version for space use. It's good to think that when, one day, space aliens find one of these things, the architecture will seem quite logical to them.
(Score: 3, Interesting) by VLM on Tuesday January 06, @03:01PM (2 children)
If you're ever bored crack open the databooks on that. Made IBM mainframes look RISC. Heck it makes modern PCs even with all their crazy extensions look RISC.
It had machine language OO support and a garbage collector in the firmware.
It had hardware acceleration support for IPC. Most CPUs can "grudgingly do IPC" but this was wild.
Variable length non byte aligned instruction set... well then. I can't immediately think of another example of that. I mean yeah sure octal machines, but random length part seems crazy. I guess if your code lives in a hardware supported "executable object container" then it doesn't matter if it aligns or not. Wild stuff.
It had the feel of the hardware guys were not allowed to say "no" to the software guys.
For fun I just searched for an FPGA version of the 432; nobody's been crazy enough to try it.
(Score: 4, Interesting) by Rich on Tuesday January 06, @03:41PM
We've got the benefit of hindsight now. How the future would look wasn't that clear until the 386 came along and stuck. Up until the early 80s, CPUs had all kinds of weird segmentation and banking concepts, and somehow it was en vogue to have the most complicated cross-process subroutine call opcodes. The iAPX432 was just the pinnacle of this development. On the other hand, it's depressing how Motorola wasted away their five-year lead on the entire remaining industry. But again, with today's knowledge one could become the richest man in the world if one was timewarped into 1972 (and chose to refrain from the usual 70s indulgence).
(Score: 3, Insightful) by driverless on Wednesday January 07, @11:02AM
If you want something in the same vein, look at the Rekursiv. Attempted by a Scottish hifi manufacturer, because why not. It collapsed under the second-system effect even though it was a first, or possibly zeroth, system.
(Score: 5, Insightful) by JoeMerchant on Tuesday January 06, @02:55AM (1 child)
Yes, the FDIV bug was "minor" in the greater scheme of technical competence and effects on real world performance, however:
it was a highly visible cultural fail of epic proportion. You just don't let that kind of mistake get into production, period. Your design controls, process verification and validation, and pre-production testing need to be 100.000% effective against obvious problems, and FDIV was an obvious problem, once it was pointed out.
On other Intel SNAFUs - anybody else have horror stories from the Skylake (6th gen) launch era? They eventually got most of the issues under control, but there again, the issues should never have found their way into volume production in the first place: FDIV all over again, but this time with BIOS and other issues.
🌻🌻🌻 [google.com]
(Score: 3, Informative) by turgid on Tuesday January 06, @08:24AM
You just don't let that kind of mistake get into production, period.
Oh yes you do. If you're a manager, you can get the quarterly numbers to look spectacular two or three times in a row while you take on that technical debt. Big bonuses, promotions, the latest Audi.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Tuesday January 06, @03:06AM (2 children)
F8
SC/MP
2650
(Score: 2) by turgid on Tuesday January 06, @12:23PM
Have a helpful link [cpushack.com].
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by driverless on Thursday January 08, @03:25AM
Ah yes, the repressed-memories part of my brain thanks you for recalling that lot for me.
(Score: 5, Informative) by coolgopher on Tuesday January 06, @03:46AM
I'm surprised they included FDIV but left out F00F [wikipedia.org]. The latter caused mayhem for multi-user systems. I remember hearing about one ISP which had to deal with frequent server lockups due to some customers being "funny".
(Score: 5, Funny) by Cyrix6x86 on Tuesday January 06, @05:20AM
I'm sorry, WHAT?!?!
-1, Disagree
(Score: 4, Touché) by jb on Tuesday January 06, @07:34AM (5 children)
"Almost anything ever made by Intel" (or any of their clones).
Seriously, the x86 instruction set has got to be the ugliest ever devised.
(Score: 3, Insightful) by Bentonite on Tuesday January 06, @07:55AM (1 child)
I agree - somehow the AMD64 instruction set is less ugly.
(Score: 2) by turgid on Tuesday January 06, @03:29PM
But not when viewed using objdump -d.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by looorg on Tuesday January 06, @11:11AM (2 children)
Looking at it the common trend appears to be anything x86/64. I agree. Awful CPU and instruction set. Worst. CPU. Ever.
Yet it's somehow the standard today ... What went wrong?
(Score: 4, Interesting) by turgid on Tuesday January 06, @12:20PM
Motorola were late with the 68000 so IBM chose the 8088 for the PeeCee. Everyone else used the 68000.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 1) by anubi on Thursday January 08, @03:30AM
... What went wrong?
Intel / IBM / Microsoft wanted to be first.
Even if it was half baked.
"We will sell no wine before it's time" doesn't apply to ambitious software vendors, who apparently go with the "let the customer do the acceptance testing".
Actually, I guess Windows would have been near impossible to make run on a '286. Segmentation registers. The 68000 was far more elegant in both hardware and software.
But they weren't first.
You see what the result was.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 4, Interesting) by VLM on Tuesday January 06, @02:55PM
This was a feature for minicomputer multiprocessing and RTOS stuff: essentially you had a fast direct page for everything instead of on-chip registers. So you could task switch between processes/threads or respond to interrupts nearly instantly by merely changing your base address. Pretty cool and very fast.
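A toy model of that idea in plain C (just an illustration, not TMS9900 code): the "register file" is nothing but a window into RAM, so a context switch is a single pointer update and nothing gets copied anywhere.

```c
#include <stdint.h>
#include <stdio.h>

static uint16_t ram[32768];        /* pretend main memory, as 16-bit words     */
static uint16_t *wp;               /* workspace pointer: where R0..R15 live    */

#define REG(n) (wp[(n)])           /* every register access is a memory access */

static void switch_context(unsigned new_wp)
{
    wp = &ram[new_wp];             /* the whole "save/restore" of a task switch */
}

int main(void)
{
    switch_context(0x0100);        /* task A's registers live at word 0x0100   */
    REG(0) = 42;

    switch_context(0x0200);        /* task B gets its own workspace, instantly */
    REG(0) = 7;

    switch_context(0x0100);        /* back to task A */
    printf("task A R0 = %u\n", (unsigned)REG(0));   /* still 42 */
    return 0;
}
```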
There was a fixation on minimalism in the early days. Plenty of 8 bit CPUs (6809, even the z80) had multiprocessing multiuser systems. The idea of being able to print "in the background" or access the disk drive "in the background" was seen as revolutionary far too late for PCs.
The TMS9900 had a reputational issue: it's pretty fast and capable, so TI neutered its desktop home computer by ruining its memory system so it wouldn't cannibalize their minicomputer sales. They were ... a little too successful at that neutering LOL. There's a recent Usagi Electric video series on this pretty cool chip.
(Score: 2, Interesting) by Curlsman on Tuesday January 06, @02:58PM
OK, then what was or is better than X86 and why?
Better compilers, or easier to write assembly code for?
This is a different topic but probably just as important, as it could help guide projects like RISC-V.
(Score: 3, Interesting) by ShovelOperator1 on Tuesday January 06, @08:45PM
I can't remember its name, but in the 1990s Texas Instruments made a strange 386, or 486, or 386SX. The thing looked like a 386SX with a nail's head in it (literally), was labeled 486, and was the strangest and worst thing I ever had from the land of 386s.
When measured with a simple arithmetic CPU benchmark, it outperformed a 40MHz 386DX by about 10-20%. So, fast? Not at all. First of all, this thing identified itself to the system as a 486SX. Then it turned out it had only 8K of cache, and there were versions with just 1K. Then an assembler programmer could find that memory access was a disaster: when executing code that wrote incrementing bytes across a large block of memory, a 286 running at 12MHz could outperform it after the first 16 kilobytes, and that was a strict boundary. Its MMU had to be flawed, and the CPU also required a specific SIMM configuration to work... efficiently. With 4MB (4x1MB) it was tolerable. With 8MB (8x1MB) too, although it behaved a bit laggy. In a 5MB (4x1MB+4x256K) configuration, easily used by many 386 boards, it ran, but slower than that 12MHz 286. The most stable and efficient configuration was 1MB (4x256K), but 386s were not purchased to run 1MB of RAM like some 286; usually 4MB was a good configuration and 8MB was high-end.
Windows 3.11 could start on it and even be used. Some specific programs, especially scientific software, could suddenly fail, but only when running a 486-compiled code. The CPU identified as 486, but it looks like it had some non-486 behavior. I think of it more and more as a "software defined 486".
The only good thing was the simplicity of the mainboard. It had only 5 chips: CPU, keyboard controller, chipset, BIOS and probably RTC. That was it. The rest were sockets: MMU, SIMMs and ISA slots. The quartz was in the socket too.
Yes, no cache upgradeability either. You got what you had in your CPU: 8kB or 1kB, depending on the version.
(Score: 3, Insightful) by kaganar on Wednesday January 07, @11:34AM (4 children)
In a large way, CBE was a logical next step for the PlayStation series. GPGPUs hadn't yet fully arrived for this market segment. Playstation 2 enjoyed specialized co-processors, and CBE's "synergistic processing elements" were superior in both flexibility and performance.
Around this time, few PC applications actually supported multithreading and developers were inexperienced with it. Previously game developers were familiar with programming niche hardware processors to gain performance improvements.
If PCs weren't already headed toward symmetric multiprocessing, Microsoft wouldn't have pushed for it in the Xbox 360. But they did, so the trend of developing console games like they were PC games deepened further past the original Xbox, and by the time all the hardware was produced the shift to very commonplace symmetric multiprocessing was on rails.
Consequently, if your game was releasing for the standard PC/Xbox360/PS3 targets, PS3 felt hard because it was the odd one out. Couple that with the PS3's relatively weak GPU and reduced memory, and much hate got slung from less experienced development teams.
Experientially, programming for CBE sucked until you had something like a thread pool system in place for the SPEs. But after that, it was quite delightful to work with. It felt like a step between CPU and GPU programming. The CBE hit a sweet spot between raw performance and flexibility that was unmatched at the time.
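For anyone who hasn't seen the pattern, here is a very loose host-side analogy in plain pthreads - no Cell or libspe calls, just ordinary threads standing in for SPE contexts. Jobs go on a shared queue and a fixed pool of workers drains it, which is roughly the shape a usable PS3 engine ended up with:

```c
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 6          /* the PS3 exposed six SPEs to game code */
#define N_JOBS    24

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_job = 0;
static double results[N_JOBS];

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int job = (next_job < N_JOBS) ? next_job++ : -1;   /* grab the next job id */
        pthread_mutex_unlock(&lock);
        if (job < 0)
            break;

        /* stand-in for the self-contained, data-parallel kernel an SPE would run */
        double acc = 0.0;
        for (int i = 0; i < 1000000; i++)
            acc += (double)(job + 1) * i;
        results[job] = acc;
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[N_WORKERS];

    for (int i = 0; i < N_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(threads[i], NULL);

    printf("job 0 -> %.0f, job %d -> %.0f\n", results[0], N_JOBS - 1, results[N_JOBS - 1]);
    return 0;
}
```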
Its only real fault is that Sony failed to make the track switch to symmetric multiprocessing for its console series at the right time for the video game industry. Perhaps the ultimate processor branch misprediction. Sony was banking on improving conventional industry norms, and Microsoft doubled down on leveraging its horizontal technologies to disrupt the video game development scene.
For PS4, Sony hard-switched to symmetric multiprocessing and offered excellent tooling and support, essentially beating Microsoft at the new norm they had set.
(Score: 2) by Freeman on Wednesday January 07, @10:40PM (3 children)
The PS3 still held its own against the Xbox 360 in sales, and the PS4 handily stomped the Xbox One. While I have heard that the PS3 was comparatively hard to code for, other business decisions made up for the problem.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Insightful) by aafcac on Thursday January 08, @05:35AM (2 children)
The PS3 benefited from being on the right side of the Blu-ray HD-DVD format war and XBox360 had supported HD-DVD early on. It was a major reason why I got my PS3 as it was pretty much the cheapest way to get a Bluray player at the time and the fact that it had some pretty good games didn't hurt. There was even a DVD player style remote available for when you wanted to just use it to watch movies.
That being said, there were a few decisions that really angered people like removing the OtherOS feature from systems that people already bought if you wanted any firmware updates and removing PS2 support from later models without really saying much about it. It was something that Nintendo also did with the Wii where the earliest ones had GameCube controller ports and later ones didn't, but I don't think that was as much of an issue.
(Score: 3, Touché) by Bentonite on Thursday January 08, @02:01PM
It wasn't merely about what the user wanted: the "PlayStation Network" service was sabotaged to not work unless the console had the latest software installed (apparently some bypasses were found, and I'm not sure whether those bypasses were later made to no longer work).
Sony did get sued for using their unjust power via arbitrary remote updates to make GNU/Linux no longer boot, despite having originally sold consoles advertised with that feature, but the class action payout was like $10.07 each; https://web.archive.org/web/20230321210218if_/https://gearnuke.com/sony-sending-10-settlement-checks-for-ps3-other-os-lawsuit/ [archive.org]
It took until 2011, near the end of standard Wii production (the Wii U was released in 2012), for models without gamecube controller and memory card slots to be produced (https://wiibrew.org/wiki/Wii_Family_Edition), and those units still supported gamecube mode:
- Consoles still came with BC and MIOS and thus could go into gamecube mode in software.
- The memory card and controller slots could be soldered onto the pads on the board if wanted (I figure generally people would go and find a used pre-2011 model instead).
- The main breakage was the new disk drive revision would refuse to load gamecube disks - achieving nothing aside from forcing people to run unauthorized copies of gamecube games on a unit with controller ports soldered on.
Ironically, later the Nintendont Wii U homebrew (runs gamecube games in vWii mode) was ported back to the Wii, which actually allows a Wii U gamecube-usb adapter to work for gamecube games on the Family Edition Wii's; https://wiki.hacks.guide/wiki/Wii:Nintendont [wiki.hacks.guide] (which doesn't actually use gamecube mode (which downclocks the CPU and DSP to GC mode) - it's a bunch of timing hacks and patches, but most gamecube games still run).
Nintendo didn't sabotage the ability to run gamecube games on consoles that were originally sold with that feature.
There were also other hardware features removed from later models. The Wii was originally meant to support DVD playback (but Nintendo balked at the licensing costs and the difficulty of getting DVD playback working reliably), and the drives would originally read DVD-Rs, allowing playback of DVDs with https://wiibrew.org/wiki/MPlayer_CE [wiibrew.org] (it took a bit of work to get ffmpeg to run fast enough for real-time playback, and DVDs are only 480p or 576p after all), or backups with a backup loader. Later revisions refuse to read DVD-Rs: they only read disks with the DVD-incompatible block encoding (Wii disks didn't run backwards - their block encoding is custom, so when a DVD drive reads the blocks and does ECC, the block is determined to be corrupt and rejected, but of course someone wrote a DVD-drive dumper; `git clone https://github.com/bradenmcd/friidump`) [github.com] and a correct Burst Cutting Area - something only meant to be possible for pressed disks, though someone did manage to program a DVD-R writer to write blocks in the needed format but never succeeded in burning a valid BCA.
Newer hardware revisions from 2008 and onwards patched Boot1 to no longer use strcmp() to check binary SHA-1 hashes, thus making it no longer possible to install BootMii into Boot2.
Although, the workaround found was to place an unsigned program before the system menu (preloader), and IOS would just load it without doing a signature check - allowing the launching of BootMii/IOS on boot, which was later modified into https://wiibrew.org/wiki/Priiloader [wiibrew.org]
The latest revision, the "Wii Mini" removed pretty much everything in attempt to prevent modding, including the SD card, internet, Wi-Fi and 2nd USB port, but of course the pads were found and modding was done; https://wiibrew.org/wiki/Wii_mini [wiibrew.org]
Although ninty did try their best to sabotage homebrew in released updates (which would come in a partition on game disks and some games would request an update to install the needed IOS and also a new system menu version - but a patch was developed to allow refusing such updates and allow running the games after manually installing the needed IOS, or optionally a system menu update could be installed via the "wii shop channel" - but it wasn't assumed the user had an internet connection).
Ninty even released a Boot2 update solely for overwriting any installed copy of BootMii/Boot2 (https://wiibrew.org/wiki/BootMii), but of course the update code was bungled - there was a chance both copies of Boot2 would be corrupted on install, and at least one console that didn't have homebrew was bricked by the update (dozens likely were). The goal was preventing homebrew and thus preventing the unofficial execution of GNU/Linux - but they didn't succeed.
There was always a way to install whatever versions of software and patch the shop channel to work with whatever systemmenu, as the "security" design was a joke (there was an undocumented AHBPROT mode found implemented to make factory installs easier, that gives PPC access to all the hardware, when such access was meant to be restricted by ARM; https://wiibrew.org/wiki/Wii_system_flaws). [wiibrew.org]
But it's all a cat and mouse game and all proprietary stuff - many of the homebrew developers released and are still releasing proprietary software, even intentionally infringing the GPLv2 at times (the biggest proprietary software developer was concerned when it was determined that DevKitPPC was possibly a derivative work of a GPLv2-only work (and thus covered by the GPLv2-only rather than a weak license), as he didn't like the possibility that he would have to release all of his software as free software to continue distributing it - too bad it turns out that the RTOS DevKitPPC copied from was also available under a weak license, and nothing is ever done if the terms of a weak license are violated).
As you can see, clearly you're better off just installing GNU/Linux-libre on a real computer and doing whatever you want without restrictions.
(Score: 2) by Freeman on Thursday January 08, @02:43PM
I have to admit that it having Blu-ray was at least a reason I was able to put forth to the Mrs. for buying it. Planet Earth looked nice and I got a sweet console at a time when I actually had a friend that could come over and play. At the time, consoles were the best single-device multiplayer experience. Nowadays, single-device multiplayer game selection is essentially just as good as consoles. Unless you want to play Nintendo games, in which case you should just get a Switch. Even there, a lot of the multiplayer games require a 2nd Switch console.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"