
posted by hubie on Thursday October 24, @09:17PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Intel had a solution ready to add 64-bit features to the "classic" 32-bit x86 ISA, but the company chose to push forward with the Itanium project instead. A new snippet of technology history recently emerged from a year-old Quora discussion, in which Intel's former chief x86 architect, Bob Colwell, provides a fascinating piece of previously unknown information.

AMD engineer Phil Park was researching the history behind the x86-64 transition when he discovered the conversation. Colwell revealed that Intel had an inactive internal version of the x86-64 ISA embedded in Pentium 4 chips, but the company's management forced the engineering team to "fuse off" the feature.

The functionality was there, but users could not access it. Intel decided to focus on the 64-bit native architecture developed for Itanium instead of x86-64, feeling that a 64-bit Pentium 4 would have damaged Itanium's chances of winning the PC market. Management allegedly told Colwell "not once, but twice" to stop going on about 64 bits on x86 if he wanted to keep his job.

The engineer decided to compromise, leaving the logic gates related to the x86-64 features "hidden" in the hardware design. Colwell bet that Intel would eventually need to chase AMD and quickly implement its own version of the x86-64 ISA, and he was right. Itanium CPUs had no native backward compatibility with 16-bit and 32-bit x86 software, and the architecture became one of the worst commercial (and technological) failures in Intel's history.

[...] Bob Colwell made significant contributions to Intel's history, managing the development of popular PC CPUs such as Pentium Pro, Pentium II, Pentium III, and Pentium 4 before retiring in 2000. Meanwhile, today's x86 chips marketed by Intel and AMD still retain full backward hardware compatibility with nearly every program developed for the x86 architecture.


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Touché) by JoeMerchant on Thursday October 24, @10:44PM (1 child)

    by JoeMerchant (3937) on Thursday October 24, @10:44PM (#1378562)

    I bought one of those new AMD 64 bit processors and equipped it with 8GB of RAM and Gentoo 64 bit.

    AMD was also far ahead (lower) in power consumption per MIPS.

    I bet heavily on AMD call options; I was wrong about that.

    --
    🌻🌻 [google.com]
  • (Score: 3, Interesting) by RamiK on Friday October 25, @12:17AM (11 children)

    by RamiK (1813) on Friday October 25, @12:17AM (#1378565)

    The first Pentium 4-branded processor to implement 64-bit was the Prescott (90 nm) (February 2004), but this feature was not enabled. Intel subsequently began selling 64-bit Pentium 4s using the "E0" revision of the Prescotts, being sold on the OEM market as the Pentium 4, model F. The E0 revision also adds eXecute Disable (XD) (Intel's name for the NX bit) to Intel 64. Intel's official launch of Intel 64 (under the name EM64T at that time) in mainstream desktop processors was the N0 stepping Prescott-2M.

    ( https://en.wikipedia.org/wiki/Pentium_4 [wikipedia.org] )
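
    A fused-off feature is invisible to software: operating systems detect x86-64 support via CPUID leaf 0x80000001, where EDX bit 29 ("LM", long mode) signals 64-bit capability and bit 20 is the NX/XD bit the E0 stepping added. A minimal sketch of that bit test, using a made-up register value rather than a real CPUID read:

    ```python
    # Decode x86-64 feature bits from CPUID leaf 0x80000001, register EDX.
    # Bit positions follow Intel/AMD documentation; the sample EDX value is
    # a hypothetical register dump, not read from real hardware.

    LM_BIT = 29   # "Long Mode": the CPU supports x86-64
    NX_BIT = 20   # "No-eXecute" (Intel's name: XD, eXecute Disable)

    def has_feature(edx: int, bit: int) -> bool:
        """Return True if the given EDX feature bit is set."""
        return bool((edx >> bit) & 1)

    # Hypothetical chip reporting both long mode and NX.
    edx = (1 << LM_BIT) | (1 << NX_BIT)
    print("long mode (x86-64):", has_feature(edx, LM_BIT))
    print("NX / XD:           ", has_feature(edx, NX_BIT))

    # A chip with the logic "fused off" simply reports the bit clear,
    # exactly as if the circuitry were not present at all.
    print("fused-off chip:", has_feature(edx & ~(1 << LM_BIT), LM_BIT))
    ```

    This is why a fused-off Prescott looked identical to a plain 32-bit part from the outside: the gates were physically there, but the advertised feature bit stayed zero.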

    For reference, although AMD introduced the x86-64 ISA around '99, they only released their first chip a year before Intel:

    The first AMD64-based processor, the Opteron, was released in April 2003.

    ( https://en.wikipedia.org/wiki/X86-64 [wikipedia.org] )

    So, either way, Intel wouldn't have released before AMD. Well, unless they had the feature available and fused off before the Prescotts... But that seems rather far-fetched. No?

    --
    compiling...
    • (Score: 5, Insightful) by owl on Friday October 25, @03:07AM (3 children)

      by owl (15206) on Friday October 25, @03:07AM (#1378580)

      Well, unless they had the feature available and fused before the Prescotts... But that seems rather far fetched. No?

      It's possible they did have it. But given that at the time Intel was neck-deep in the Itanium swamp as the official Intel-blessed path to 64-bit processing, it seems unlikely. And if they secretly did, keeping the feature fused off would have been to prevent their x86 chips from (in their mind) cannibalizing sales of the Itanium.

      What is more likely is that up until AMD released the x86-64 spec, Intel had zero plans to move x86 to 64-bit (as that path was supposed to go via Itanium), and they may have stuffed it into a P4 revision after the spec was released but left it fused off (again, to prevent cannibalizing Itanium) while they waited to see the reaction when AMD finally released a chip. I.e., cover their bets. If 64-bit x86 took off, they could "unfuse" it in a new P4 release and be able to offer a chip. If 64-bit x86 fell like a lead balloon, they could just leave it fused off, no one would be the wiser, and Itanium would still be the official "64-bit path".

      • (Score: 3, Touché) by JoeMerchant on Friday October 25, @11:37AM (2 children)

        by JoeMerchant (3937) on Friday October 25, @11:37AM (#1378602)

        I realize that 640K is all the RAM that normal users will ever need, but by the early 2000s it was pretty obvious, even without video applications, that 32 bits of address space was sufficient for normal users only because of the current (and falling) price of RAM.

        Like 16 bits before it, once more than 2GB of RAM came down to mainstream price points, the pressure to expand 32 bit addressing would soon become overwhelming.

        At least we didn't have to endure a 286-like paged addressing mode as an interim step on the way from 32 to 64 bits.
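
        The pressure the parent describes is easy to quantify: a flat 32-bit address space tops out at 4 GiB, and under a typical 2 GiB user / 2 GiB kernel split a process sees only half of that, which is exactly the RAM size that was reaching mainstream price points. A quick back-of-the-envelope comparison (the 48-bit entry is the virtual-address width early x86-64 chips actually implemented, not the architectural 64-bit limit):

        ```python
        # Addressable memory for the address widths mentioned in the thread.
        GiB = 2 ** 30

        widths = {
            "16-bit (8086 offset)":  16,
            "32-bit (80386 flat)":   32,
            "36-bit (PAE physical)": 36,
            "48-bit (x86-64 virtual)": 48,
        }

        for name, bits in widths.items():
            size = 2 ** bits
            print(f"{name}: {size:,} bytes ({size / GiB:g} GiB)")

        # With a 2/2 user/kernel split, a 32-bit process gets only 2 GiB --
        # the same amount of physical RAM that was becoming affordable.
        print("32-bit user space under a 2/2 split:", (2 ** 32 // 2) // GiB, "GiB")
        ```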

        --
        🌻🌻 [google.com]
        • (Score: 2) by owl on Saturday October 26, @09:04PM (1 child)

          by owl (15206) on Saturday October 26, @09:04PM (#1378847)

          All true. But at the time Intel desperately wanted you to migrate to Itanium for 64-bit addressing in a CPU. If they (Intel) had wanted to extend x86 to 64 bits they could easily have done so themselves, likely well before AMD released their extension (which is the one we are all using today in x86-powered systems). Intel had no difficulty going from 8-bit (8080) to 16-bit (8086) to 32-bit (80386). They certainly could have made the 32-to-64-bit jump on their own.

          But Itanium was what was distracting them at the time (just like the iAPX 432 distracted them before, which is why there was even an 8086 in the first place). And that rabid attachment to Itanium let AMD one-up them in producing a 64-bit x86 extension.

          • (Score: 2) by JoeMerchant on Saturday October 26, @09:49PM

            by JoeMerchant (3937) on Saturday October 26, @09:49PM (#1378851)

            I think it was Intel marketing hubris, trying to segment 32/64 as a consumer/professional demarcation.

            The problem is that the consumer market is spearheaded by budget performance enthusiasts, who weren't going to settle for 32 bits for long.

            I wonder how much of the "32 bit is faster" press flak at the time was Intel-backed to push that concept.

            --
            🌻🌻 [google.com]
    • (Score: 3, Interesting) by turgid on Friday October 25, @10:38AM (6 children)

      by turgid (4318) Subscriber Badge on Friday October 25, @10:38AM (#1378599) Journal

      In 2001 I sat in a very interesting lecture by a guy from SuSE talking about the AMD64 architecture and porting Linux to it. He had a demo of Linux running in an emulator on a 32-bit Athlon. It was pretty obvious that AMD64 was a very nice, rational extension to x86 and that, by comparison, Itanium was ludicrous.

      • (Score: 3, Insightful) by RamiK on Friday October 25, @12:44PM (5 children)

        by RamiK (1813) on Friday October 25, @12:44PM (#1378604)

        I've been following the RISC-V profile ratification process long and closely enough that I can tell you with some certainty that, even nowadays, there are years between running Linux in an emulator of a planned modification to an ISA and running it on actual hardware. And that's years following the actual hardware design and verification...

        Incidentally, RVA23 just got ratified and Google stated it will be the baseline ABI requirement for Android: https://www.edn.com/rva23-profile-ratification-bolsters-risc-v-software-ecosystem/ [edn.com]

        --
        compiling...
        • (Score: 3, Touché) by turgid on Friday October 25, @01:17PM

          by turgid (4318) Subscriber Badge on Friday October 25, @01:17PM (#1378606) Journal

          By 2001 they had gcc, binutils, the kernel, GNU user land and an X server running.

        • (Score: 3, Interesting) by turgid on Friday October 25, @01:23PM (3 children)

          by turgid (4318) Subscriber Badge on Friday October 25, @01:23PM (#1378607) Journal

          And by 2003 the Opteron was on the market. By the way, the Solaris port took six weeks.

          • (Score: 2) by RamiK on Friday October 25, @02:53PM (2 children)

            by RamiK (1813) on Friday October 25, @02:53PM (#1378615)

            By 2001 they had gcc, binutils, the kernel, GNU user land and an X server running.
            And by 2003 the Opteron was on the market. By the way, the Solaris port took six weeks.

            When AMD announced the ISA extension in '99, they already had emulators and a softcore in a simulator. So that's two years to finish the compiler port and port the kernel, and another year to tape out. And that's when you're starting out with working emulators and sims. Intel started with a spec sheet.

            So, I stand by my first statement: Intel probably couldn't have released before AMD.

            --
            compiling...
            • (Score: 2) by turgid on Friday October 25, @02:58PM

              by turgid (4318) Subscriber Badge on Friday October 25, @02:58PM (#1378616) Journal

              Well, AMD came up with it, so I'd expect them to have been first. Remember Yamhill and Intel Core?

            • (Score: 2) by owl on Saturday October 26, @09:11PM

              by owl (15206) on Saturday October 26, @09:11PM (#1378849)

              Intel probably couldn't have released before AMD.

              Had Intel not been distracted by Itanium and instead focused on extending x86 to 64 bits, they very likely could have beaten AMD to the punch. As it was, they poured a huge amount of engineering into Itanium, and from an outside observer's viewpoint it appeared that they viewed 32-bit x86 as "the end of the road" for the x86 lineage at the time.

              Imagine if all the engineering that was poured into Itanium had instead been poured into extending x86 to 64 bits. Had they done so, then likely the only reason they would have been beaten by AMD would have been simply starting the work too late. But if they'd started an "x86 extension" at the same time they started Itanium, and instead of Itanium, it is much more likely they would have had the design, and the chips, out before AMD's spec appeared.

              Of course they did not do this, so this is all just speculation. But a lot could have been done on an x86 extension if the Itanium effort had instead been "lets extend x86 yet again".

  • (Score: 5, Interesting) by ledow on Friday October 25, @07:27AM (4 children)

    by ledow (5567) on Friday October 25, @07:27AM (#1378592) Homepage

    Intel chose to "reinvent" the instruction set - presumably for consistency and to remove legacy - rather than continue backwards compatibility.

    That's what it boils down to. And it's not an inherently bad intention in itself, removing cruft and starting with an architecture designed for the purpose, but they completely mishandled it.

    Millions of customers worldwide would have to change processor, board, maybe even RAM, etc. in every single machine they owned, plus rewrite ALL their software.

    Or they could use AMD and just carry on as normal and transition on their own timescale.

    That's ultimately how it went down on a small business / individual level. Nobody in those places cared one jot about Itanium and never would. Even enterprises baulked at it, and they had the resources, funding and expertise to actually take it on.

    In the same way that the "IBM-compatible PC" originated, it was little to do with what Intel, or IBM, or Microsoft wanted. It was to do with what customers could reasonably obtain and not have to throw everything they already had away. It was determined by the smaller end of the market once it hit mainstream, not enterprise.

    Intel could have released, at any time, a slightly non-compatible 64-bit instruction set for x86 and, had they wanted to, blown AMD out of the market, forcing them to abandon their own version and catch back up. All they had to do was release a slightly different, incompatible 64-bit instruction set just as AMD were releasing their processors, forcing AMD back to the drawing board.

    In the event, it happened entirely the other way around: Intel were forced to throw Itanium away because a niche competitor had released what customers wanted, and Intel had no response at the time. It was Intel who had to go back, become compatible with AMD's new instructions, and play catch-up.

    So it wasn't Intel being evil (which is a surprise), it wasn't them trying to screw over customers (in a way they were trying to help), it was probably a decision championed by technical people and by marketing people alike inside Intel. But in terms of overall image and customer understanding it was a failure.

    People weren't just going to throw out 40 years of architecture for no reason when a product literally existed that a) ran everything just fine without having to emulate (also Transmeta's failing, in essence), b) was already on the market, and c) did fancy new powerful stuff too.

    It was a misstep by Intel which could have worked if AMD hadn't got there.

    But it was driven almost entirely by customers NOT wanting to have to upgrade literally everything they had all over again for no technical reason that would affect them.

    A lesson that many technology companies should learn from.

    • (Score: 3, Interesting) by turgid on Friday October 25, @03:12PM

      by turgid (4318) Subscriber Badge on Friday October 25, @03:12PM (#1378619) Journal

      Itanium was over-engineered, overcomplicated rubbish. It was a triumph of Marketing and MBAism over Engineering. It was designed to be too complicated to clone, and had its special secret IP protected six ways to Sunday so that potential competitors couldn't afford to license it.

      It also relied on magical compilers which never really appeared (hint: a compiler can't predict the future at compile time).
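
      The "compiler can't predict the future" point can be shown with a toy cycle-count model: an EPIC-style compiler must schedule dependent work around a load assuming some fixed latency, but whether the load hits or misses cache is only known at run time. A deliberately simplified sketch, with all latencies invented for illustration:

      ```python
      # Toy model of static (VLIW/EPIC-style) scheduling vs. run-time reality.
      # The compiler schedules assuming the load takes ASSUMED_LOAD cycles;
      # any extra latency at run time becomes a stall it could not predict.

      ASSUMED_LOAD = 2    # latency the compiler optimistically schedules for
      HIT_LATENCY = 2     # actual latency on a cache hit
      MISS_LATENCY = 20   # actual latency on a cache miss
      OTHER_WORK = 5      # cycles of independent work packed alongside the load

      def static_schedule_cycles(actual_load: int) -> int:
          """Cycles for a statically scheduled bundle: the compiler hid
          ASSUMED_LOAD cycles behind independent work, but extra load
          latency stalls the whole in-order bundle."""
          stall = max(0, actual_load - ASSUMED_LOAD)
          return max(OTHER_WORK, ASSUMED_LOAD) + stall + 1  # +1 to use the result

      print("all cache hits:", static_schedule_cycles(HIT_LATENCY), "cycles")
      print("cache miss:    ", static_schedule_cycles(MISS_LATENCY), "cycles")
      ```

      An out-of-order core can hunt for other independent instructions dynamically while the miss is outstanding; a static schedule fixed at compile time cannot, which is the gap the "magical compilers" were supposed to close.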

      The suits bought into it, and it did most of its job: it killed MIPS, PA-RISC, Alpha, Clipper, etc. It didn't kill UltraSPARC or POWER, and ARM survived because it excelled at low-power applications.

    • (Score: 3, Insightful) by turgid on Friday October 25, @03:23PM (2 children)

      by turgid (4318) Subscriber Badge on Friday October 25, @03:23PM (#1378621) Journal

      There was one other thing, and that was Intel's x86 marketing. They were very good at dissuading people from buying competitors' x86 chips (Cyrix and AMD) back in the day. I was taken in by it too. The computer press always reported Intel chips' performance in a good light, and there was always that nagging suspicion that a non-Intel x86 CPU wouldn't be quite compatible.

      When AMD brought out the 32-bit Athlon, they showed Intel up completely in terms of performance. Software ran flawlessly too. All doubt was gone. There was no reason AMD64 wouldn't be a success, and very little reason to go to Itanium.

      • (Score: 3, Informative) by VLM on Friday October 25, @04:29PM

        by VLM (445) on Friday October 25, @04:29PM (#1378629)

        Fitting in with the marketing theme

        Itanium CPUs had no native backward compatibility with 16-bit and 32-bit x86 software

        IIRC some of the marketing had a theme that Itanium would be so incredibly fast that it would be able to emulate a 32-bit CPU either fast enough or faster than legacy hardware.

        The disaster of Itanium is that it was, indeed, considerably faster than legacy 32 bit processors, however, it was not fast enough to emulate legacy procs either adequately or faster than real time.

        Even poor emulation performance would have been "usable", because most software is not speed critical. If edlin.com has 4 ms of latency instead of 0.25 ms, no human would notice or care, while something like a database core engine, rewritten and recompiled, might run maybe 4 times faster than legacy 32-bit processors. However, the emulation was dismally, unusably slow, IIRC. I only got to play on a couple of Itanium machines when they were new, and not for long.

        I seem to recall there was also a product tie-in with Y2K. Not exactly a Y2K scam, but close. They announced in the early '90s that they'd be shipping in time for Y2K. Then in the late '90s the pitch became: you'll be replacing everything with Y2K-compliant hardware anyway, so why not buy some Itanium stuff, since you need all-new software versions regardless because "Y2K". Unfortunately, shipping dates slipped until about the dotcom crash: too late for the Y2K feeding frenzy, but just in time for the dotcom recession in sales.

        The most amazing thing about Intel is that their history is just a long list of disasters that would normally destroy any other company, but like a nuclear-war-proof cockroach Intel just never dies. I'm sure they'll keep right on doing that until they eventually do go out of business. Other companies at least occasionally ship a stellar, amazing product. Not Intel. It's like the company's "secret sauce" is surviving disasters. Other companies sometimes ship fast stuff, cheap stuff, cool stuff, stylish stuff, but not Intel: their superpower is that you can spray an entire can of Raid (bug spray) at them after nuking them, but that roach never dies. Not yet, anyway.
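
        The parent's point about "usable" emulation is really an Amdahl's-law argument: overall throughput depends on the mix of recompiled native code versus emulated legacy x86 code. A rough sketch, with all factors invented to match the comment's 4x-faster / much-slower framing:

        ```python
        # Amdahl-style estimate for an Itanium-era workload: some fraction of
        # run time is recompiled code (faster than legacy 32-bit hardware),
        # the rest is emulated x86 (far slower). All numbers are illustrative.

        def overall_speedup(native_fraction: float,
                            native_speedup: float,
                            emulated_slowdown: float) -> float:
            """Speedup vs. legacy hardware when native_fraction of the
            original run time is recompiled and the rest is emulated."""
            emulated_fraction = 1.0 - native_fraction
            new_time = (native_fraction / native_speedup
                        + emulated_fraction * emulated_slowdown)
            return 1.0 / new_time

        # Database engine: 90% recompiled (4x faster), 10% emulated 10x slower.
        print(f"mostly recompiled: {overall_speedup(0.9, 4.0, 10.0):.2f}x")

        # Typical desktop: 10% recompiled, 90% emulated 10x slower.
        print(f"mostly emulated:   {overall_speedup(0.1, 4.0, 10.0):.3f}x")
        ```

        Even the mostly-recompiled case comes out slower than legacy hardware under these numbers: a small emulated remainder dominates, which is why the emulation story sank desktop Itanium.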

      • (Score: 3, Insightful) by stormwyrm on Friday October 25, @07:28PM

        by stormwyrm (717) on Friday October 25, @07:28PM (#1378669) Journal
        The end came when Microsoft gave its blessing to x86-64 (they still insist on calling it x64 though, lol). That was the fatal iceberg that sunk the Itanic.

        Intel could have provided a compatibility layer that allowed 32-bit code to run in a virtualized mode inside the processor, giving some kind of migration path, the way the 80386 had a Virtual 8086 mode that let 16-bit MS-DOS applications run properly even in 32-bit protected mode. I think they attempted some kind of dynamic translation that could run 32-bit x86 code by converting it on the fly to Itanium instructions, but I don't think it ever performed anywhere near as well as they hoped, delivering at most the performance of a 100 MHz Pentium in an age when gigahertz clock speeds were typical. That the entire architecture was also so awfully expensive didn't help matters. You might as well have used one of the other incompatible RISC architectures available, as IA-64 offered no tangible advantages.

        Soon after, AMD offered their own solution to the 32-to-64-bit migration path that was cheaper, more performant, and had better compatibility than IA-64. Microsoft accepted it, and at that point Itanium could only ever hope to be, at best, a niche architecture like PA-RISC, Alpha, or SPARC, perhaps seeing widest use in enterprise data centres or HPC clusters, and it even struggled in that market.
        --
        Numquam ponenda est pluralitas sine necessitate.