https://www.righto.com/2023/02/8086-modrm-addressing.html
One interesting aspect of a computer's instruction set is its addressing modes: how the computer determines the address for a memory access. The Intel 8086 (1978) used the ModR/M byte, a special byte following the opcode, to select the addressing mode. The ModR/M byte has persisted into the modern x86 architecture, so it's interesting to look at its roots and original implementation.
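The ModR/M byte packs three fields into eight bits: mod (bits 7-6), reg (bits 5-3), and r/m (bits 2-0). Roughly, mod selects between a register operand and the various memory modes (no displacement, 8-bit displacement, or 16-bit displacement), reg names a register (or extends the opcode), and r/m picks the register or base/index combination. A minimal C sketch of the documented field layout (the helper names and example byte are just for illustration, not anything taken from the chip):

    #include <stdint.h>
    #include <stdio.h>

    /* ModR/M field layout per the 8086 documentation:
       mod = bits 7-6, reg = bits 5-3, r/m = bits 2-0. */
    static unsigned modrm_mod(uint8_t b) { return (b >> 6) & 0x3; } /* 00/01/10 = memory, 11 = register */
    static unsigned modrm_reg(uint8_t b) { return (b >> 3) & 0x7; } /* register operand or opcode extension */
    static unsigned modrm_rm (uint8_t b) { return b & 0x7; }        /* register or base/index combination */

    int main(void) {
        uint8_t b = 0x47;  /* mod=01, reg=000 (AX), r/m=111 ([BX]+disp8), as in MOV AX,[BX+4] */
        printf("mod=%u reg=%u rm=%u\n", modrm_mod(b), modrm_reg(b), modrm_rm(b));
        return 0;
    }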
In this post, I look at the hardware and microcode in the 8086 that implement ModR/M, and at how the 8086 designers fit multiple addressing modes into the chip's limited microcode ROM. One technique was a hybrid approach that combined generic microcode with hardware logic that filled in the details for a particular instruction. A second technique was modular microcode, with subroutines for various parts of the task.
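To give a flavour of the "generic microcode, hardware fills in the specifics" idea, here is a rough C sketch of the effective-address calculation that a single shared routine can perform for all of the memory addressing modes. The base/index/displacement combinations are the documented 8086 behaviour; the function and struct names are only illustrative, and the real chip of course does this with its registers and adder rather than a switch statement:

    #include <stdint.h>

    struct regs { uint16_t bx, bp, si, di; };

    /* Effective address (16-bit offset) for the memory modes (mod = 00, 01, or 10).
       The caller supplies the already-decoded displacement: 0, a sign-extended
       disp8, or a disp16, depending on mod. */
    static uint16_t effective_address(const struct regs *r, unsigned mod,
                                      unsigned rm, uint16_t disp) {
        if (mod == 0 && rm == 6)         /* special case: direct 16-bit address */
            return disp;
        switch (rm) {
            case 0: return r->bx + r->si + disp;
            case 1: return r->bx + r->di + disp;
            case 2: return r->bp + r->si + disp;
            case 3: return r->bp + r->di + disp;
            case 4: return r->si + disp;
            case 5: return r->di + disp;
            case 6: return r->bp + disp;
            case 7: return r->bx + disp;
        }
        return 0;                        /* not reached */
    }

Because every instruction with a memory operand can funnel through one routine like this, the microcode only needs a single address-calculation subroutine, with the specific registers and displacement selected from the decoded ModR/M fields.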
I've been reverse-engineering the 8086 starting with the silicon die. The die photo below shows the chip under a microscope. The metal layer on top of the chip is visible, with the silicon and polysilicon mostly hidden underneath. Around the edges of the die, bond wires connect pads to the chip's 40 external pins. I've labeled the key functional blocks; the ones that are important to this discussion are darker and will be discussed in detail below. Architecturally, the chip is partitioned into a Bus Interface Unit (BIU) at the top and an Execution Unit (EU) below. The BIU handles bus and memory activity as well as instruction prefetching, while the Execution Unit executes instructions and microcode. Both units play important roles in memory addressing.
(Score: 4, Interesting) by stormwyrm on Wednesday March 08, @06:59AM (1 child)
I've been following Ken Shirriff's article series on reverse engineering the 8086 ever since it was first mentioned here, and it has been fascinating. One thing I have been struck by is how much the internal microcode sometimes resembles a RISC processor's instruction set: fixed-size instructions, load/store architecture, etc. Perhaps that is one way to think of RISC processors: as CISC processors with what amounts to programmable microcode in cache RAM.
I wonder what the x86 string instructions (LODSB, LODSW, STOSB, STOSW, MOVSB, MOVSW) look like under the hood. Those are the sorts of instructions that most clearly make the x86 architecture a CISC.
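For reference, the documented architectural effect of MOVSB is simple enough to sketch in a few lines of C; this is only what MOVSB and REP MOVSB do (ignoring segment overrides and interrupts during the REP), not how the microcode implements them, and the flat memory array is purely hypothetical:

    #include <stdint.h>

    static uint8_t mem[1 << 20];             /* hypothetical flat 1 MB address space */
    struct cpu { uint16_t ds, es, si, di, cx; int df; };

    /* 20-bit physical address: segment * 16 + offset, wrapping at 1 MB like the 8086. */
    static uint32_t phys(uint16_t seg, uint16_t off) {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    static void movsb(struct cpu *c) {
        mem[phys(c->es, c->di)] = mem[phys(c->ds, c->si)];
        int step = c->df ? -1 : 1;           /* DF=1 means the index registers count down */
        c->si += step;
        c->di += step;
    }

    static void rep_movsb(struct cpu *c) {   /* REP MOVSB: repeat while CX is non-zero */
        while (c->cx != 0) {
            movsb(c);
            c->cx--;
        }
    }

LODSB and STOSB are just the two halves of this (load DS:SI into AL, or store AL to ES:DI), and the word variants step SI/DI by 2 instead of 1.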
Numquam ponenda est pluralitas sine necessitate. (Plurality should never be posited without necessity.)
(Score: 3, Interesting) by Unixnut on Thursday March 09, @01:17PM
Yes, quite early on it was realised that RISC would scale to higher clock speeds much better than CISC. However, there was a large body of legacy software that still relied on the CISC instruction set, and breaking backwards compatibility would not have been very successful from a business perspective.
So the plan was effectively to design a RISC processor with a front-end translator for the CISC instructions, which is what we see now in the modern x86 family. As development progressed we even got patchable microcode, so the internal CPU instructions can be altered (however, these were never exposed in a way that would let compilers optimise for them; they remain internal only).
Likewise, I agree with you on the topic. Fascinating work, and this kind of post reminds me of the early green site days, when we would get actual nerdy news. Shame the comment counts on such topics are still very low, though.