Raymond Chen recently posted a ten-part introduction to the ia64 architecture, rapidly teaching me that while I might be able to write a brainfuck-to-perl compiler in a few minutes, there's no way in a million years that I'll ever be able to write a good compiler that targets ia64.
The Itanium is a 64-bit EPIC architecture. EPIC stands for Explicitly Parallel Instruction Computing, a design in which work is offloaded from the processor to the compiler. For example, the compiler decides which operations can be safely performed in parallel and which memory fetches can be productively speculated. This relieves the processor from having to make these decisions on the fly, thereby allowing it to focus on the real work of processing.
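To make that concrete, here is a minimal C sketch (mine, not from the summary) of control speculation, one of the decisions EPIC pushes onto the compiler. On real IA-64 the hoisted load would be a speculative `ld.s` and the later validity test a `chk.s`; the C below only mimics the shape of the transformation.

```c
#include <stddef.h>

/* A sketch of control speculation -- the kind of scheduling decision EPIC
 * pushes onto the compiler. Illustrative C, not real IA-64 code: on IA-64
 * the hoisted load would be a speculative ld.s, and the validity test a
 * chk.s that branches to recovery code. */

/* Straightforward version: the load sits behind the branch that guards it,
 * so its latency cannot overlap earlier work. */
static int original(const int *p) {
    if (p != NULL)
        return *p + 1;
    return 0;
}

/* "Speculated" version: the compiler hoists the load above the branch so it
 * can issue early, in parallel with other instructions, then checks whether
 * the speculation was valid. C forces us to guard the dereference ourselves;
 * IA-64 hardware instead defers any fault until the chk.s. */
static int speculated(const int *p) {
    int spec = (p != NULL) ? *p : 0;  /* stands in for ld.s */
    if (p != NULL)                    /* stands in for chk.s */
        return spec + 1;
    return 0;
}
```

Both versions compute the same results; the point is only where the load appears relative to the branch, which is exactly the choice the Itanium leaves to the compiler.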
(Score: 2, Informative) by Anonymous Coward on Saturday August 08 2015, @11:50PM
> there's no way in a million years that I'll ever be able to write a good compiler that targets ia64.
The reasoning behind the design of EPIC was to move complexity into software so that it could benefit from faster turnaround times than hardware could. Sort of like the way the quality of h264 encoders has gotten significantly better since the standard was introduced, yet a fully compliant h264 decoder from 10 years ago can decode the output of the most modern, optimized h264 encoder today and get all the picture-quality improvements that come with it.
It might have worked too, if it weren't for AMD making x86-64 the mass-market option and thus redirecting much of the compiler talent away from what became a niche architecture. There just was never enough ia64 volume to attract enough devs to work on VLIW compiler design.
(Score: 1, Informative) by Anonymous Coward on Sunday August 09 2015, @12:05AM
The issue was intel only using their 'legacy' processes to produce itanium chips and not providing 'entry level' systems with which to gain software-developer mindshare.
Additionally, from everyone I've talked to who considered developing for it, it only made sense if you had a lot of PA-RISC experience to begin with, which was even more niche than itanium.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @12:15AM
> The issue was intel only using their 'legacy' processes to produce itanium chips and not providing 'entry level' systems with which to gain software-developer mindshare.
Because they thought they could own the market - they were caught unaware by x86 moving into the workstation and server space. Itanium was intended to kill MIPS/SPARC/POWER and replace PA. Instead x86-64 took marketshare from all of them and Itanium was left with a tiny piece of the pie - for chips that cost a fuckload to manufacture.
> it only makes sense if you have a lot of PA-RISC experience to begin with, which was even more niche than itanium.
Eh. The joke was that itanium was PA-3.0. But all the stuff that made EPIC hard was brand new. PA-2.0 was a pretty typical RISC, not terribly different from the other RISC architectures of the day.
(Score: 3, Interesting) by turgid on Sunday August 09 2015, @11:37AM
Back in 1999, when there were still several CPU architectures to choose from (before the itanic hype killed them off), Sun did a port of Solaris to itanic. It didn't take long, since the Solaris code base was pretty portable; however, Sun didn't want to kill off SPARC, so it was never released. It's a shame that Sun wasn't very good at getting its own CPUs out. Look at what Fujitsu was able to achieve with SPARC...
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @01:12AM
It might have worked too, if it weren't for AMD making x86-64 the mass-market option
The sad thing is AMD has nothing to show for it twelve years later. Hell, just the fact that you refer to it as x86-64 instead of amd64 shows how badly they were beaten.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @01:21AM
> Hell, just the fact that you refer to it as x86-64 instead of amd64 shows how badly they were beaten.
Eh, I wouldn't make such a big deal of it. The official Intel name for the 32-bit instruction set was ia32. So it isn't like Intel's got their name on it either.
(Score: 5, Insightful) by Francis on Sunday August 09 2015, @02:08AM
That's more a failure of the DoJ to enforce antitrust regulations against Intel. Intel got basically a slap on the wrist for paying systems integrators not to use AMD chips. Sort of the same way that nVidia is now effectively paying game developers not to optimize for AMD chips.
AMD did some stupid things as well, such as overpaying for ATI, but AMD was well out ahead of Intel with their multi-core processors and then later with APUs. But, really, there's only so much you can do when the competition is able to resort to good old fashioned antitrust violations to kill your products.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @11:07AM
AMD shares a fair bit of the blame for their problems.
I bought AMD back when they first did the AMD64/Opteron/Athlon64 stuff and were clearly better in many ways (except for that TSC timing issue). And I think very many others bought those too.
But after that they seemed to rest on their laurels and later produced rather crappy stuff:
http://arstechnica.com/gadgets/2011/10/can-amd-survive-bulldozers-disappointing-debut/ [arstechnica.com]
"Indeed, in some tests, Bulldozer can't even keep up with its predecessor. The launch of the Phenom in 2007 was similarly underwhelming—it arrived late, broken, and slow—but AMD managed to turn things around with Phenom II to produce a viable competitor to many of Intel's processors."
So, given the crap they were churning out, if you weren't a fanboy it didn't take any antitrust shenanigans from Intel to make you prefer Intel over AMD.
(Score: 1) by Francis on Sunday August 09 2015, @02:17PM
It's not a matter of being a fanboy, there were numerous points along the way where AMD was ahead and was unable to get much benefit from it due to antitrust laws being broken by the competition.
Bulldozer was unfortunate, but you're failing to account for how revolutionary their goal was. I had the previous chip and it kicked the teeth out of what Intel was doing at the time. Even now, Intel depends on fixed benchmarks to keep their lower-end chips looking like they're keeping up.
(Score: 2) by albert on Sunday August 09 2015, @04:16PM
There was no way AMD could produce enough chips. They'd have had to outsource production.
(Score: 3, Insightful) by Grishnakh on Sunday August 09 2015, @01:37AM
It might have worked too, if it weren't for AMD making x86-64 the mass-market option and thus redirecting much of the compiler talent away from what became a niche architecture. There just was never enough ia64 volume to attract enough devs to work on VLIW compiler design.
Oh please, Itanic was doomed from the start. Itanic systems were horrifically expensive, and aimed at competing with high-end Unix server hardware; they weren't remotely affordable. Why would compiler writers have bothered with them, unless they were being paid by Intel to do so? If there was a problem with compiler talent, that's Intel's fault for not hiring those people onto the Itanic compiler team. You're not going to get volunteers to work on something like that, nor are you going to get commercial developers (say, at Microsoft) to spend much time with it, because they'd rather target the much, much, much, much larger market that is regular PCs and servers running the x86 ISA.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @02:12AM
Big companies just aren't good at doing that kind of research. They are good at adding incremental features to their products, and at knocking off features in competitors' products that they can test out in the lab.
Usually what happens is that this kind of research is conducted at several universities until someone comes up with a promising prototype, publishes a big paper, and gets hired away. But this cycle happens at academic speeds (slow), not IT industry speeds (fast). AMD made sure that Intel wouldn't get all the time in the world to get it right, as another poster pointed out.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @02:15AM
At the time itanium was conceived, only the lowliest of servers ran x86. Anybody doing data-center work was running RISC systems. The strategic marketing group botched it. Even then, itanium was over a decade ahead of any x86 implementation of high-availability features like memory scrubbers, ECC on cache, ECC on I/O lines, and lockstep execution across CPUs.
(Score: 2) by turgid on Sunday August 09 2015, @10:18AM
At the time of the itanic hysteria, while all the big Unix server companies were slitting their own throats having drunk intel's and HP's marketing Kool Aid, there were some voices of reason. Linus was particularly insightful [yarchive.net] as always.
I spoke to a guy from Sun who'd been in the business since the 1970s, and he told me an interesting story about itanic. It was a research project that never lived up to its promises, and all it ever needed was "better compilers." We're still waiting... The problem with itanic is that it's a scaled-up DSP. It's only good for executing FORTRAN loops. In real-world general use, it can't cope with highly dynamic workloads. See Linus' comments above.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @11:19AM
The problem with itanic is that it's a scaled up DSP. It's only good for executing FORTRAN loops.
From what I saw it was very good at "embarrassingly parallel workloads" and not really good at other stuff.
The problem was that it was very expensive and power-hungry, and embarrassingly parallel workloads are also easily run on multiple AMD64 CPUs/nodes (which were cheaper and performed better when you got messier workloads). For the price of one of those HP Itanic servers you could buy two or more AMD64 machines and do that parallel workload faster.
Thus the Itanic only made sense for a few people.
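To make the contrast in the comments above concrete, here is a minimal C sketch (mine, not from the thread) of the two workload shapes being discussed:

```c
#include <stddef.h>

/* Two loops with similar arithmetic but very different friendliness to a
 * static VLIW/EPIC scheduler. Illustrative sketch only. */

/* Regular "FORTRAN-style" loop: the trip count and addresses are
 * predictable, so a compiler can unroll, software-pipeline, and pack
 * independent iterations into wide instruction bundles. */
double sum_array(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

struct node { double val; struct node *next; };

/* Pointer-chasing loop: each load depends on the previous one, so there
 * is almost nothing for a static scheduler to overlap; every cache miss
 * lands squarely on the critical path. This is the "highly dynamic" case
 * where an out-of-order x86 core copes much better. */
double sum_list(const struct node *p) {
    double s = 0.0;
    for (; p != NULL; p = p->next)
        s += p->val;
    return s;
}
```

Same amount of adding in both loops, but only the first gives a compile-time scheduler anything to work with.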
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @11:43AM
I knew a Debian guy who had an itanic server for doing the itanic build. He used to use the waste heat for drying his clothes.
(Score: 0) by Anonymous Coward on Sunday August 09 2015, @05:25PM
> I spoke to a guy from Sun ...
> It's only good for executing FORTRAN loops. In real-world general use, it can't cope with highly dynamic work loads.
It's funny that you took the word of a competitor as gospel. From the second generation onward, Itanium had some of the best, and often the best, TPC/$ results. That's the Transaction Processing Performance Council, which is basically a database benchmark. SPARC, on the other hand, always trailed the pack, relying on the inertia of the market, but they could only coast for so long, which is part of the reason sun is now owned by one real asshole called larry ellison.
(Score: 2) by turgid on Sunday August 09 2015, @09:23PM
I didn't take what he said as gospel. I'm not religious. But it did concur with what other people (computer scientists and engineers) were saying at the time. And despite all the hype, I never actually saw a working itanic system, though I spoke to some guys who rented one for running some FORTRAN they'd written to simulate nuclear reactors.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].