from the failed-a-'performance'-review? dept.
Intel's director of high performance computing has left the company after 27 years:
Reinders describes how he joined Intel in 1989 to work on a VLIW (Very Long Instruction Word) processor called iWarp, designed to be connected into a cluster. It was the early days of a search for higher computing performance via parallelism rather than faster clock rates.
According to Reinders, Intel's work on parallelism eased back when clock rates surged again with the 486 and Pentium processors, but that was only temporary. Reinders became a tireless champion for concurrency as well as for Intel's compilers, libraries and other software development tools.
Not everything went well. Intel's general-purpose GPU and accelerator project, codenamed Larrabee, never came to market. However, parts of Larrabee were used in Intel's MIC (Many Integrated Core) concurrent processor, which became Xeon Phi, codenamed Knights Corner, fully released in 2012. China's Tianhe-2 supercomputer, the world's fastest according to the Top 500 list, uses Xeon Phi accelerators.
(Score: 3, Funny) by Some call me Tim on Wednesday June 08 2016, @04:06AM
He should change his first name to Rudolph, then they would have to take him seriously! ;-)
Questioning science is how you do science!
(Score: 0) by Anonymous Coward on Wednesday June 08 2016, @05:18AM
Rudolph Reinders would have been fired for drunkenness, because the name Rudolph Reinders obviously says red-nosed at work.
(Score: 0) by Anonymous Coward on Wednesday June 08 2016, @08:38PM
No, they just wouldn't have let him play in any Reinders games.
(Score: 0) by Anonymous Coward on Wednesday June 08 2016, @04:56AM
Should have been trendy, hip, and with it! Should have championed NSFW instead: the Networked Social Facebook Web!
(Score: 1, Insightful) by Anonymous Coward on Wednesday June 08 2016, @06:53AM
Nothing wrong with championing concurrency, but creating EPIC failures is a different matter ;).
(Score: 2) by TheGratefulNet on Wednesday June 08 2016, @03:10PM
I was at SGI during the time that NT4.0 was still the MS choice for servers and SGI was going thru changes, getting rid of irix, testing out linux and trying to see if win NT on an SGI box would be a big seller.
people at sgi called this new box the 'wbt' or the wintel box thing (not kidding). it didn't even use standard dimm modules, they were strange half modules; I jokingly called them 'dimm-lets' ;)
the pci bus was 3.3v based when almost nothing was 3.3v back then. they used 'backwards' pci slots because of the 3.3v levels and customers complained, thinking we installed the slots backwards.
there was no bios, it booted direct to some sgi monitor thing.
it was a failure, of course. and it was the last chance SGI had to do anything to stay alive. this was just before y2k days, iirc.
I loved working at SGI and miss that culture. google took over that campus and turned a fine engineering area of mtn view into Advertising and Spying Central (tm). sigh..... ;(
"It is now safe to switch off your computer."
(Score: 2) by TheGratefulNet on Wednesday June 08 2016, @03:56PM
oh, and the reason I mentioned all that, was that the itanium was supposed to be a big savior for SGI. SGI had bought MIPS and also cray, but they were starting to get tired of MIPS for some reason.
I think sgi did end up being one of the only companies (along with hp) to build with itanium but it never took off, of course.
"It is now safe to switch off your computer."
(Score: 0) by Anonymous Coward on Wednesday June 08 2016, @11:16PM
For anyone curious to know more, the proper names for those systems were the SGI 320 (dual P3) and SGI 540 (quad Xeon). They were part of the SGI Visual Workstation line but were the only two systems in the line that differed from generic PCs. Both the 320 and 540 used the 1600SW monitor, an impressive LCD for 1998.
I used one of the 540s as a workstation for a few years. Neatest drive cover I've ever encountered on a workstation.
SGI Visual Workstations: https://en.wikipedia.org/wiki/SGI_Visual_Workstation [wikipedia.org]
SGI 1600SW LCD: https://en.wikipedia.org/wiki/SGI_1600SW [wikipedia.org]
Opening the drive cover: https://www.youtube.com/watch?v=S4cQJtbqjKk [youtube.com]
(Score: 2) by shortscreen on Wednesday June 08 2016, @08:45AM
Imagine wrestling with complex, cutting-edge CPU tech for months at a time, making darn sure there are no bugs in the design, and then hoping that some of those funny looking little pieces of glass that come back from the manufacturing plant actually work.
"Woohoo! a successful product launch!"
The day after that, developers release something that runs utterly terrible on it.
(Score: 1, Interesting) by Anonymous Coward on Wednesday June 08 2016, @09:49AM
But if I were involved with the Itanic I would be too ashamed to publicly admit to it, maybe only to close friends I can confess my darkest secrets to.
(Score: 0) by Anonymous Coward on Wednesday June 08 2016, @07:52PM
The day after that, developers release something that runs utterly terrible on it.
I call this the "tower of babel" design fail pattern. You build something amazing. It is technically very cool, but it meets the needs of pretty much no one. I have seen hundreds of SDKs and hundreds of proto boards do the same thing. Computer after computer built and designed, but 3 people buy the thing when they needed 50k buyers just to break even on ROI.
This sort of tower of babel boondoggle has its place. But many times it is just code and hardware that will never be used.
The sort of hardware you are talking about depended on a superior compiler to exist. It didn't. Probably still doesn't. Most code just has too much interdependence to pull it off. I am surprised they got up to 4 sub-instructions retired per clock in their current design in some cases. There are the 'stupidly parallel' cases and those are very interesting. But most things are not that way. That is why GPUs have dominated that market.
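To make the interdependence point concrete, here is a minimal C sketch (just an illustration; the function names are made up): a loop-carried dependence chain that no amount of static scheduling can widen, next to the kind of independent element-wise loop that GPUs thrive on.

/* Illustration only: why a static (VLIW/EPIC-style) scheduler struggles
 * with typical code but does fine on "stupidly parallel" loops. */
#include <stddef.h>

/* Loop-carried dependence: each iteration needs the previous result,
 * so independent operations cannot be packed into wide issue slots;
 * the critical path is one multiply-add per iteration no matter how
 * many functional units the chip has. */
double dependent_chain(const double *x, size_t n)
{
    double acc = 1.0;
    for (size_t i = 0; i < n; i++)
        acc = acc * x[i] + 1.0;   /* iteration i waits on iteration i-1 */
    return acc;
}

/* Independent iterations: every element can be computed in parallel,
 * which is easy to unroll/vectorize at compile time and is the kind
 * of workload GPUs ended up dominating. */
void independent_elements(const double *x, double *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = x[i] * x[i] + 1.0; /* no iteration depends on another */
}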
Intel missed the low power revolution. They forgot their customers. That is why ARM is eating their lunch. Not because ARM is 'better'; it fulfills what customers want. They want their phone to last more than a day on a charge. Intel missed that boat when they sold off XScale and then tried to shoehorn a 20-year-old CPU design into being low power.