
posted by janrinok on Saturday March 11, @12:18PM

https://www.righto.com/2023/02/how-8086-processor-determines-length-of.html

The Intel 8086 processor (1978) has a complicated instruction set with instructions ranging from one to six bytes long. This raises the question of how the processor knows the length of an instruction. The answer is that the 8086 uses an interesting combination of lookup ROMs and microcode to determine how many bytes to use for an instruction. In brief, the ROMs perform enough decoding to determine whether the instruction needs one byte or two. After that, the microcode simply consumes instruction bytes as it needs them. Thus, nothing in the chip explicitly "knows" the length of an instruction. This blog post describes the process in more detail.
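
To make the mechanism concrete, here is a minimal C sketch: a toy model with hypothetical names and a tiny opcode subset, not code from the article or from the chip itself. The opcodes used (0x40 = INC AX, 0xB8 = MOV AX,imm16, 0x90 = NOP, 0x01 = ADD r/m16,r16) are genuine 8086 encodings; everything else is illustrative. A first-byte lookup stands in for the group-decode ROM, and the rest of the routine, like the microcode, pulls in further bytes only as it needs them, so no single place ever stores "this instruction is N bytes long":

    #include <stdio.h>
    #include <stdint.h>

    static const uint8_t code[] = { 0x40, 0xB8, 0x34, 0x12, 0x90 }; /* INC AX; MOV AX,0x1234; NOP */
    static size_t ip = 0;                 /* instruction pointer into code[] */

    static uint8_t next_byte(void) {      /* like taking a byte from the prefetch queue */
        return code[ip++];
    }

    /* Stand-in for the group-decode ROM: does this opcode need a second (ModR/M) byte? */
    static int needs_modrm(uint8_t op) {
        return op == 0x01;                /* e.g. ADD r/m16, r16 (illustrative subset) */
    }

    static void execute_one(void) {
        uint8_t op = next_byte();
        if (needs_modrm(op))
            (void)next_byte();            /* "microcode" asks for the ModR/M byte */
        if (op == 0xB8) {                 /* MOV AX, imm16: asks for two more bytes */
            uint8_t lo = next_byte(), hi = next_byte();
            printf("MOV AX, 0x%04X\n", hi << 8 | lo);
        } else {
            printf("opcode 0x%02X, no further bytes consumed\n", op);
        }
        /* when the routine stops asking for bytes, the instruction is over;
           ip already points at the next instruction */
    }

    int main(void) {
        while (ip < sizeof code)
            execute_one();
        return 0;
    }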

[...] The 8086 uses a 6-byte instruction prefetch queue to hold instructions, and this queue will play an important role in this discussion. Earlier microprocessors read instructions from memory as they were needed, which could cause the CPU to wait on memory. The 8086 instead read instructions from memory before they were needed, storing them in the instruction prefetch queue. (You can think of this as a primitive instruction cache.) To execute an instruction, the 8086 took bytes out of the queue one at a time. If the queue ran empty, the processor waited until more instruction bytes were fetched from memory into the queue.
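
A minimal C sketch of such a queue (a toy ring buffer with made-up names, not the real bus-interface logic): the fetch side fills the queue whenever there is room, and the execution side has to wait when it is empty.

    #include <stdio.h>
    #include <stdint.h>

    #define QSIZE 6                       /* the 8086 queue holds 6 bytes */
    static uint8_t queue[QSIZE];
    static int head = 0, tail = 0, count = 0;

    static int q_push(uint8_t b) {        /* prefetch side: add a byte */
        if (count == QSIZE) return 0;     /* queue full, prefetch must pause */
        queue[tail] = b; tail = (tail + 1) % QSIZE; count++;
        return 1;
    }

    static int q_pop(uint8_t *b) {        /* execution side: consume a byte */
        if (count == 0) return 0;         /* queue empty, execution must wait */
        *b = queue[head]; head = (head + 1) % QSIZE; count--;
        return 1;
    }

    int main(void) {
        const uint8_t mem[] = { 0xB8, 0x34, 0x12, 0x90 };
        size_t fetch_ptr = 0;
        uint8_t b;
        while (fetch_ptr < sizeof mem || count > 0) {
            if (fetch_ptr < sizeof mem && q_push(mem[fetch_ptr]))
                fetch_ptr++;              /* prefetch runs ahead of execution */
            if (q_pop(&b))
                printf("consumed byte 0x%02X\n", b);
        }
        return 0;
    }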


Original Submission

  • (Score: 4, Insightful) by maxwell demon on Saturday March 11, @06:04PM (1 child)

    by maxwell demon (1608) Subscriber Badge on Saturday March 11, @06:04PM (#1295688) Journal

    One thing to note is that, for the time, it was AFAICT a totally reasonable design. Not only were later developments in processor design unknown at the time; I'd also bet that none of the people who developed the processor expected essentially the same instruction set to still be in use almost half a century later.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 4, Informative) by stormwyrm on Saturday March 11, @07:31PM

      by stormwyrm (717) on Saturday March 11, @07:31PM (#1295697) Journal

      Semiconductor memory was only beginning to take over from magnetic core memory in the late 1970s: it was getting cheaper, but still not cheap enough that one could get away with a simpler, fixed instruction length, as that would necessarily increase memory usage. If you mandated, say, a fixed 16-bit instruction length, then you would need at least two instructions to load an immediate 16-bit value into a register: four bytes instead of three. It would be difficult to jump to even a fixed 16-bit address within a memory segment: that would need either an indirect jump through a register (6 bytes, four to do the 16-bit load and two more for the jump, instead of just 3) or two separate instructions with a dedicated jump-destination register. Common instructions like increment/decrement would also take two bytes. A fixed instruction length would also preclude the complex addressing modes that are common on CISC architectures. Saving a few bytes here and there could literally mean the difference between a workable, cost-effective system and a non-starter. More rational and orthogonal instruction set design choices had to give way to kludges that worked within the limitations of other essential technology as it existed at the time.
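
      To put rough numbers on that comparison, here is a small C illustration. The 8086 encodings are real (B8 = MOV AX,imm16; 40 = INC AX); the "fixed 16-bit" totals are just the arithmetic for a hypothetical two-bytes-per-instruction encoding, not any actual architecture:

          #include <stdio.h>
          #include <stdint.h>

          int main(void) {
              const uint8_t mov_ax_imm16[] = { 0xB8, 0x34, 0x12 }; /* MOV AX, 0x1234 */
              const uint8_t inc_ax[]       = { 0x40 };             /* INC AX */
              printf("MOV AX,imm16: 8086 %zu bytes vs 4 bytes fixed-16 (two instructions)\n",
                     sizeof mov_ax_imm16);
              printf("INC AX:       8086 %zu byte  vs 2 bytes fixed-16\n",
                     sizeof inc_ax);
              return 0;
          }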

      I'd noticed this kind of binary code size difference back in the early 1990s, when I was first exposed to RISC architectures that used fixed instruction lengths. I remember that executables on Sun workstations were substantially larger than the same ones on the early Linux systems of the time: for example, the same version of GCC on SPARC Solaris was maybe 1.5 to 2 times the size of the 32-bit x86 Linux build. This was already becoming less of an issue in the 1990s, but in the 1970s, when the x86 architecture was invented, memory, especially RAM, was still hellishly expensive.

      --
      Numquam ponenda est pluralitas sine necessitate.
  • (Score: 3, Interesting) by kazzie on Sunday March 12, @12:08PM

    by kazzie (5309) Subscriber Badge on Sunday March 12, @12:08PM (#1295761)

    With a modern head on, I've looked at fixed-length instructions as the far saner approach, because I didn't want to wrap my head around implementing the identification of instructions anywhere from one to N bytes long. (Granted, memory wasn't as freely available back then, but this is a view in hindsight.)

    For some reason, it never occurred to me to treat it as a done/not-done binary decision: just keep fetching bytes until the executing microcode flags that the instruction is finished.
