Arthur T Knackerbracket has processed the following story:
For a company that has traditionally promoted CEOs from within its ranks, considering an outsider to lead would mark a significant departure from Intel's long-standing practices. Since its founding in 1968, only one CEO – Bob Swan in 2019 – was hired externally, and even he was regarded as a temporary solution following Brian Krzanich's resignation.
As for potential successors, external candidates remain largely speculative. However, rumors suggest Marvell CEO Matt Murphy is among those being considered.
On the internal front, CFO David Zinsner and interim co-CEO MJ Holthaus are reportedly in the running, while recently departed board member Lip-Bu Tan has also been approached about the position.
Intel's openness to outside leadership underscores the challenges it faces and the urgency to set its turnaround plans into motion.
Under Pat Gelsinger's brief tenure, following his move from VMware in 2021, Intel continued to lose market share to competitors like Nvidia while grappling with persistent product delays and manufacturing challenges. This year alone has been marked by a whirlwind of setbacks, including poor financial performance, job cuts, CPU crashes, and yield issues.
Compounding these struggles, analysts predict that Intel's ambitious efforts to revamp its manufacturing processes won't deliver significant financial improvements until at least late 2025. Given this backdrop, it's unsurprising that Intel's board is seeking a fresh, external perspective to steer the company toward recovery.
Bloomberg reports that Intel may also consider former executives who departed during previous leadership transitions. Such candidates could offer a mix of internal familiarity and fresh perspectives. Among the names floated are ex-CFO and current board member Stacy Smith, former PC unit head Gregory Bryant, Ampere Computing CEO and one-time Intel president Renee James, and Kirk Skaugen, who previously led the company's data center business.
Another intriguing possibility involves executives from Intel's key customers with in-house chipmaking expertise. For instance, Apple's Johny Srouji could emerge as a strong candidate, bringing insights from one of the industry's most successful chip design operations.
(Score: 5, Insightful) by Frosty Piss on Monday December 09, @09:31PM
If the beginning of the end isn't already in progress, it's starting now. They will fill the position with a very hated MBA PHB like the one who just got iced in NYC. Intel is done, maybe HP can buy them to add to the shit consumer craptoverse they've become.
(Score: 4, Informative) by DannyB on Monday December 09, @10:45PM (2 children)
Mr Intel leaving Intel is not a great sign... for Intel
I remember when Itanium was widely called Itanic. Then AMD came out with amd64.
It sounds like they just kept doing the thing that initially made them successful, pumping up clock speeds by using more power and heat. They tried to beat ARM with Atom, but Atom carried all of the legacy baggage. They never seemed to recognize that maybe they ought to shed some of the oldest dinosaur cruft from the early days, including 16-bit mode and segment registers. They should have begun that transition a couple of decades ago.
People who can't distinguish between etymology and entomology bug me in ways I cannot put into words.
(Score: 3, Interesting) by bzipitidoo on Tuesday December 10, @02:41PM (1 child)
I still find it hard to believe Intel is in such trouble, but they have failed in a lot of areas. Like, almost every area outside of x86. Their early integrated graphics, the i845 and similar stuff, was horribly slow, so much so that "integrated" became a byword for "suck" and "slow".
Even within the x86 architecture, they've had many misses. That's right, AMD beat them to 64-bit x86. I am not sure why Itanium flopped, but I recall that it was priced too high.
Legacy cruft is so spot on. In addition to the segmented memory garbage, there's the utter insanity of making floating-point math a stack architecture, which goes against the load/execute model used for integer math and is inherently poor. Why didn't they do it the same way as the integer math?

Even setting aside the x87 issues, x86 itself is regarded as a poor architecture. They have been rather ingenious in making the arch perform anyway, for instance with the move toward RISC under the hood -- since the Pentium Pro, the cores are essentially RISC processors with an x86 skin. They added shadow registers to make up for x86's lack in that area.

Another bit of messed-up cruft is the decimal arithmetic. It's useless. And I understand they were forced by patents to avoid the most straightforward implementation, coming up with a very kludgy way. So, useless and kludgy.

Then, on the matter of stacks again, there are the PUSH and POP instructions. And CALL and RET. There are simply too many cases in which those instructions do too much, so that if they are used, some of the work they do has to be undone. One way they really waste effort is when doing 2 or more in a row: a POP followed by a PUSH spends effort updating the stack pointer that ends up unchanged, and 2 POPs or PUSHes in a row have to update the stack pointer twice. It would be so much better to defer that update until all the stack manipulation is done, so the pointer can be updated just once. But the architecture can't do that; it has to update the stack pointer after every stack operation. So, just don't even use those instructions.
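The deferred-update idea above can be sketched with a toy counter. This is a minimal Python model, not real x86 semantics -- the function names, the instruction list, and the 8-byte slot size are illustrative assumptions. In practice, compilers sidestep PUSH/POP much the same way: they reserve a whole frame with one `sub rsp, N` and address slots relative to it.

```python
# Toy model (not real x86 semantics): count writes to the stack pointer
# when every PUSH/POP adjusts SP immediately, versus deferring one net
# adjustment the way a compiler's single `sub rsp, N` frame setup does.

def naive_push_pop(ops):
    """Each PUSH/POP writes SP immediately, as the x86 instructions do."""
    sp, sp_writes = 0, 0
    for op in ops:
        sp += -8 if op == "push" else 8  # assume 8-byte stack slots
        sp_writes += 1
    return sp, sp_writes

def deferred(ops):
    """Accumulate the net SP adjustment and write it once at the end."""
    sp = sum(-8 if op == "push" else 8 for op in ops)
    return sp, (1 if ops else 0)

ops = ["push", "push", "pop", "push"]  # hypothetical instruction run
print(naive_push_pop(ops))  # (-16, 4): four SP writes
print(deferred(ops))        # (-16, 1): same final SP, one write
```

The model only counts pointer updates; it ignores the data movement, which both schemes still have to do.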
The whole reason to cling so hard to x86, to that backward compatibility, is to ensure that really old binaries will still work. Except there've been so many changes that a) they really don't, and b) if they did work, they'd be uselessly antiquated anyway. All that effort, and for what? Apparently so that customers won't be screwed by a commercial software world that never opens up any source, which assumes they never have to worry about competition from open source. If anyone does want to run a really old binary, the custom these days is to emulate the entire environment with something like DOSBox. The GNU project is almost as old as the x86 architecture, but it seems Intel never heeded the memo about GNU. Today, building binaries for any modern architecture is no big deal, provided the arch isn't lacking certain crucial functionality that x86 lacked until the 80486.
(Score: 3, Interesting) by DannyB on Tuesday December 10, @03:12PM
That backward compatibility with a huge software base is the entire value proposition for both Intel and Microsoft. If they innovate and move beyond the backward compatibility they have nothing and cannot compete with the more nimble innovators.
Intel can't make x86/64 into RISC.
Microsoft can't make Windows into Linux, much as they might try.
The things that made them rich are the seeds of their destruction as the world changes. This may even turn out to be true with Google. Google still must have a search engine, but then there is AI beckoning.
(Score: 5, Interesting) by deimios on Tuesday December 10, @05:59AM
Intel, like most global corps, has massive inertia. If you try to steer them in another direction, it takes YEARS before you see the result.
For example, they just released their second series of GPUs (Arc Battlemage), but in an interview they said the next series (Arc Celestial) is already finalized, and that it takes two years from finalizing a design to release.
How did they think turning around a ship this slow would go?
Now I have no inside info and probably their beef with Pat is of another nature, but if it's only about profitability, that will take time.
I have no love for Intel, but competition is sorely needed in the chip space.