Intel Publishes "X86-S" Specification For 64-bit Only Architecture
Intel quietly released a new whitepaper and specification for their proposal on "X86-S" as a 64-bit only x86 architecture. If their plans work out, in the years ahead we could see a revised 64-bit only x86 architecture.
In the whitepaper, entitled "Envisioning a Simplified Intel Architecture", Intel engineers lay out the case for a 64-bit mode-only architecture. Intel is still said to be investigating the 64-bit mode-only architecture, which they also refer to as "x86S". Intel is hoping to solicit industry feedback while they continue to explore a 64-bit mode-only ISA.
[...] Under this proposal, those wanting to run legacy 32-bit operating systems would have to rely on virtualization. To further clarify, 32-bit x86 user-space software would continue to work on modern 64-bit operating systems with X86-S.
Also at Tom's Hardware.
(Score: 4, Funny) by VLM on Monday May 22 2023, @12:08PM (6 children)
I realize "news" is just propaganda run thru chatGPT so can't expect much anymore. However:
OK so they have a 64-only design for the future, cool. Ditch legacy 8080 binary compatibility, sad to see it go but whatever.
Wait, aren't we seeing one now? See above. Literally the previous line was they just released an architecture plan for 64-bit only.
OK see above no running 32 bit virtualization, just so we're all on the same page. Ditto no more running in 16 bit addressing modes, if it's 64 bit only. I'll miss setting segment registers like it's still 1984.
Wait, what? They'll run in 32-bit mode on a 64-bit mode only system? That'll work well.
I think the chatGPT bot is thinking software emulation but writing virtualization. Sure, you can emulate Z80 and 6502 binary opcodes on a 64-bit proc or any Turing complete processor, technically.
OK so no need to emulate or virtualize, just run 32-bit addressing mode software on a 64-bit-addressing-mode-only processor. Kinda like uploading your legacy PIC12 binaries to an STM32 CPU, what could possibly go wrong?
The meta problem is the linked story above shows how you can, in the short term, replace authors by a very small shell script calling chatGPT. But the readers, and later the advertisers, will just abandon them. The word salad doesn't mean anything, it's been too heavily chat botted and run thru the journalist filter to mean anything.
(Score: 4, Insightful) by HiThere on Monday May 22 2023, @01:06PM (1 child)
That could be ChatGPT, or it could just be a reporter/editor with no idea of the subject matter. It's a bit worse than usual, but not THAT much worse.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 3, Funny) by JoeMerchant on Monday May 22 2023, @02:26PM
>It's a bit worse than usual, but not THAT much worse.
Independent bloggers, lowering the bar until AI can do it better...
🌻🌻 [google.com]
(Score: 1, Interesting) by Anonymous Coward on Monday May 22 2023, @01:58PM
Actual 8086 binary compatibility hasn't even really been a thing on modern Intel CPUs anyway, since Intel completely removed the A20 gate in Haswell (ca. 2013).
(Score: 1) by shrewdsheep on Monday May 22 2023, @02:03PM
I interpret this rather as abolishment of real mode (TLDR) rather than anything else. Terribly written indeed.
(Score: 3, Informative) by DannyB on Monday May 22 2023, @02:53PM
If I rememberize correctfully from BYTE magazine and living in the late 1970s . . .
I think you mean legacy 8080 source compatibility.
The stupid segment registers of the 8088/86, a thorn in the side of PCs for decades, existed to provide assembly language source code compatibility (but not binary compatibility) with the 8080, provided you preset the segment registers correctly. What I would call a Pyrrhic victory.
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by Immerman on Tuesday May 23 2023, @01:23PM
No, read your own quotes again. Architectures *exist*. We're now seeing the *proposal* for the architecture - a proposal is not the thing. It's not even the plan for the thing - at best it's a draft of a plan for the thing.
Like, if your girlfriend came in and proposed making lasagna for dinner, you wouldn't have lasagna. You wouldn't have even the recipe for the lasagna. All you would have is the proposal.
(Score: 1, Insightful) by Anonymous Coward on Monday May 22 2023, @12:23PM (13 children)
Just when the news was starting to look like Intel was snapping out of their malaise(?), along comes this, freezing them to 64 bit. They should be looking to the (distant?) future, proposing architecture for 128 bit computing!
Lots of previous systems have been poking at partial 128 bit architecture for decades, https://en.wikipedia.org/wiki/128-bit_computing [wikipedia.org]
(Score: 4, Interesting) by VLM on Monday May 22 2023, @12:41PM (2 children)
The advantage of a 128 bit data bus isn't the likely wider address bus but how cool it would be to move quad-precision floats in one move.
If you go 256 bit data bus that provides enough decimals of precision that 'lots' of float applications can be replaced by faster fixed point int. Some 256 bit GPUs did/do that.
512 bit data path would get you roughly 150 decimal digits of fixed-point precision much faster than floating point. I remember later (or higher number LOL) GTX 200 series GPUs had 512 bit memory busses.
It's interesting that as GPUs get wider, the smallest data bus in the average PC is probably the keyboard controller, but the second smallest is likely the main CPU.
Ironically given the topic of the story, Intel's AVX-512 extensions are one decade old this year, so if you had a mid 2010s Skylake Intel processor, you had a partially 512 bit CPU...
(Score: 1) by shrewdsheep on Monday May 22 2023, @01:23PM (1 child)
I am wondering about performance difference between fixed/floating. My understanding is that floating point is fully pipelined, i.e. there is a 1 IPC throughput. Is this correct? How many pipeline stages are there for floating point? How many for fixed?
(Score: 2) by VLM on Monday May 22 2023, @04:06PM
The clock time is always faster for fixed; it's just less complex than float.
(Score: 3, Informative) by takyon on Monday May 22 2023, @01:05PM
They dropped AVX-512, though they didn't want to. I could have sworn I heard something about 1024...
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by Mojibake Tengu on Monday May 22 2023, @02:14PM (8 children)
There is nothing insightful in the parent.
Trivially, 128-bit logic/arithmetic is common in current 64-bit architectures already, as it was in some historical machines, while true 128-bit memory addressing is still thousands, if not millions, of years of technological advancement away.
I am well aware I will need a bigger Universe just for holding such a computer.
Reality check: please note all current implementations of hardware memory addressing schemes are actually 48bit.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 2) by DannyB on Monday May 22 2023, @03:51PM (7 children)
Even if hardware memory addressing remains at 48 bits for the foreseeable life of the universe, I can see many stupidly good reasons to have 128 bit addressing at all levels above the hardware memory addressing.
1. Be able to address every byte of available block storage (. . . be patient, I'll think of a reason why this might somehow be useful to someone somewhere in some obscure use case . . .)
2. Be able to inefficiently address every byte of memory in a cluster of PCs (something like Beowulf, but much more gooder). That has to be a lot better than Plan 9, where everything on local network systems is addressable as a pathname as if it were local.
As the subject line sez, it may be 128 for the win, but we don't really need 128 bit addressing for Linux do we?
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 3, Interesting) by maxwell demon on Monday May 22 2023, @07:47PM (1 child)
We could give every dynamic library its own 64 bit address range and do away with dynamic address calculations. Calling a dynamic library function then would be no more complicated than calling a static one.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by DannyB on Monday May 22 2023, @09:46PM
There sure would be a lot more space for address layout randomization.
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 1, Insightful) by shrewdsheep on Tuesday May 23 2023, @07:32AM
3. Be able to run Java programs beyond "Hello World" at long last...
(Score: 2) by Immerman on Tuesday May 23 2023, @01:39PM (3 children)
Well, I agree with the stupid part...
64 bit addressing already lets you address every individual byte of roughly 20,000,000 terabytes. I can't think of any reason any consumer hardware (or software) could possibly benefit from addressing that.
I mean, *maybe* that's not enough to address Google's entire data center storage archive.... but for anything else? What, you want to be able to address every byte of every computer on the planet? There's not even any rational way to map that onto a linear address space.
(Score: 2) by DannyB on Tuesday May 23 2023, @01:47PM (2 children)
Simple solution, obvious to any Java programmer. Use an even larger address space than 64 bit. Divide it into subsets where each subset has a different organization in how all of the intergalactic information is ordered.
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by Immerman on Tuesday May 23 2023, @01:53PM (1 child)
Hmm... nope. I'm not seeing how you got from "rational" to "Java Programmer". };-D
(Score: 2) by DannyB on Tuesday May 23 2023, @03:30PM
Is there some path from 'rational' to 'Java programmer' ?
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2, Interesting) by pTamok on Monday May 22 2023, @01:41PM (2 children)
Does this mean that they are also proposing no longer starting a boot in 16-bit real mode? That is, it's 64-bit all the way?
AFAIK all Intel x86 processors start (execute Platform Initialization) in 16-bit real mode, including UEFI-only systems. There's some funky stuff that happens with handover from the Management Engine to the main processor right at the beginning of boot/power on.
https://stackoverflow.com/questions/58651734/do-modern-computers-boot-in-real-mode-or-virtual-real-mode [stackoverflow.com]
https://wiki.osdev.org/Real_Mode [osdev.org]
https://hackaday.com/2017/10/23/write-your-own-x86-bootloader/ [hackaday.com]
(Score: 2) by Immerman on Tuesday May 23 2023, @02:07PM (1 child)
That's how I read the article. Sounds like they'll still (possibly? Not 100% clear.) have native support for 32 bit instructions, the proposal is simply to eliminate all the pre-64-bit modes from the boot sequence.
Which seems reasonable to me - trimming away such long-obsolete support is a necessary part of architecture maintenance. Vanishingly few people still run 16-bit, or even 32-bit, OSes, and most of those do so either on native hardware or within emulators.
Of course, I assume modern OSes would have to be modified to recognize the new hardware and not try to jump through a bunch of nonexistent hoops when booting... but I suspect most 64-bit OSes are still actively supported, and it should be a pretty minor update. So an excellent time to trim away such deadwood.
(Score: 1) by pTamok on Tuesday May 23 2023, @02:31PM
We'd need to see the deep technical details.
I suspect that, at the very least, the startup would have to be in real mode, for security reasons, to pick up the handover from the Management Engine, unless the ME gets some expanded capabilities which become difficult to audit. So the startup could be in real 64-bit mode, which has its own challenges. From a security point-of-view, there are benefits to starting off with a simple processor in a constrained (read 'well defined') environment, and building up from that. The more complicated the startup configuration (both software and hardware), the more room for bugs, vulnerabilities, and architectural oopsies.
(Score: 2) by DannyB on Monday May 22 2023, @03:57PM (1 child)
Is this sufficiently different, especially[1] at boot time, that a different kernel binary is needed?
Is it possible to have a 64 bit kernel binary for Intel that new firmware could launch into, possibly at multiple entry points depending on which type of processor is installed?
I assume that firmware could be rewritten to start in pure 64 bit mode. What must happen to hand off control to a loaded kernel (for any OS)?
Inquiring minds want to know.
-=-=-=-
[1]for those who don't know better, there is no X in eXpecially or eXcape
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by Immerman on Tuesday May 23 2023, @02:15PM
I suspect so. But I suspect it won't be all that big a change. You just need to eliminate the initial part of the boot sequence where the OS jumps through a bunch of hoops to get into 64-bit mode.
I would *assume* that there's an easy mechanism in the proposal for "universal" binaries to recognize whether they're on x64 or x86s hardware, and just skip the hoop-jumping part of the boot sequence for the latter.
I don't know the intricacies of the Linux boot process, but I suspect it should otherwise be mostly unaffected - if you want stuff to be available to a fully-booted 64-bit OS, then you probably need to already be in 64-bit mode when it's first loaded into memory, so I would assume transitioning to 64-bit mode is one of the very first things it does.
(Score: 2) by istartedi on Monday May 22 2023, @04:01PM (6 children)
As a stupid C programmer, what does this mean? Let's say I write a simple program to rot-13 some text. I iterate an array of bytes that represent ASCII.
Will such a CPU load the byte in to a 64-bit register, add or subtract 13, and then throw away most of the register when writing the result back?
Or, are the look-ahead pipelines and speculative execution I've heard about in these CPUs smart enough to see that the next instruction is doing the same thing, so it can do 4 at a time (not 8 because of carry), and write 4 at a time modulo the end of the array?
If the CPU isn't smart enough to do that, is a modern C compiler smart enough to merge such a naive byte iterator in to the proper sequence of 64-bit add/write instructions?
This is the kind of stuff that sits in the back of my mind, knowing that hardcore optimization is mostly a thing of the past and/or unnecessary because premature optimization is bad, and most software doesn't live long enough to require optimization, and algorithm choice trumps optimization, etc., but I'd still like to know.
What are the practical implications?
Appended to the end of comments you post. Max: 120 chars.
(Score: 3, Informative) by maxwell demon on Monday May 22 2023, @07:50PM (5 children)
The practical implications are that you no longer will be able to run 32 bit operating systems.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 3, Interesting) by SomeGuy on Monday May 22 2023, @08:21PM (2 children)
Given how locked down and lobotomized modern computers are, they are all going in that direction whether the CPU supports it or not. Seriously, how many even still have IBM PC BIOS compatibility? Microsoft already dropped the 32-bit OS version in Windows 11, and soon won't support Windows 10 32-bit. The nanosecond support ends, vendors will all magically delete their 32-bit drivers. Thanks to secure (money) boot, soon nothing will boot earlier media. If you haven't tried a 32-bit Linux lately, the few that are left are bloated pigs.
Personally, I hate not being able to run 32-bit OSes if I want to. But I'm the only one left on this planet that cares.
(Score: 2) by turgid on Monday May 22 2023, @08:58PM (1 child)
There are a lot of very important industrial embedded systems that are 32-bit and require 32-bit operating systems.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by Immerman on Tuesday May 23 2023, @02:18PM
Their clear subtext was "...on post-modern PCs that don't exist yet"
Existing embedded systems aren't going to be affected in the slightest. Nor will future embedded systems that continue to use 32-bit processors.
(Score: 2) by DannyB on Tuesday May 23 2023, @01:51PM (1 child)
There are always virtual machines and hypervisors.
Maybe some clever motherboard firmware could have an emulator that allows booting a 32 bit OS in emulation. Maybe several at once. Why not emulate other processors such as ARM and RISC-V?
Satin worshipers are obsessed with high thread counts because they have so many daemons.
(Score: 2) by Immerman on Tuesday May 23 2023, @02:22PM
And emulators like PCem for greater compatibility.
Seems kind of silly to include an emulator in firmware... but hardly the silliest thing I've seen. Remind me again who thought it would be a good idea to let you boot into a firmware-based web browser that realistically never gets security updates?
(Score: 4, Informative) by turgid on Monday May 22 2023, @09:07PM (1 child)
I know you're not supposed to but I had a quick look at the linked article.
Obviously, for backwards compatibility it will still run 32-bit binaries in user-land. x86-64 was designed in a rather simple and clever way that allows this to work with no performance penalty.
I don't suppose many of us now will shed tears about not being able to run 16-bit applications in hardware. Software emulation is way more than adequate, and there really is no problem with performance.
I see that this new architecture supports five levels in the page tables. When AMD extended x86 to 64-bits they added an extra layer in the page tables (making it up to four). This is a logical progression from there, presumably to address a lot more physical memory.
Booting straight into 64-bit mode is quite sensible. However, from a programmer's point of view, when a current x86-64 boots, don't you just need two or three instructions to go into 64-bit mode (one to go to 32 bits and then one to go from 32 to 64)? I can't quite remember. It has been 20 years since I went to that talk.
It will make boot ROM code simpler and probably rid us of some ancient hardware limitations on the board.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by Immerman on Tuesday May 23 2023, @02:33PM
I believe it's pretty simple, but honestly I haven't messed with mode changes since writing DOS software on the 486.
The advantages aren't really in saving the OS writer effort changing modes, they're in saving the CPU designer the *much* greater effort of implementing those modes so that you could do things other than switch out of them. And there's probably some security gains to be had as well by just completely removing legacy modes that malware might exploit. Every bit of attack surface you can remove is a win - *especially* if that surface is something there's approximately zero legitimate users of, so that it doesn't normally get the maintenance attention it requires.
Presumably there's some silicon space saved as well - but I suspect you could tuck a full 32-bit CPU into the corner of a modern processor as an Easter egg and not appreciably increase the production cost.