Silicon Reverse-Engineering: The Intel 8086 Processor's Flag Circuitry
http://www.righto.com/2023/02/silicon-reverse-engineering-intel-8086.html
Status flags are a key part of most processors, indicating, for instance, whether an arithmetic result is negative, zero, or produced a carry. In this post, I take a close look at the flag circuitry in the Intel 8086 processor (1978), the chip that launched the PC revolution. Looking at the silicon die of the 8086 reveals how its flags are implemented. The 8086's flag circuitry is surprisingly complicated, full of corner cases and special handling. Moreover, I found an undocumented zero register that is used by the microcode.
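As a rough illustration of what those flags encode (a toy C sketch of my own, not the 8086's actual circuitry), the sign, zero, and carry flags are simple functions of an operation's result:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: compute sign, zero, and carry flags for an 8-bit add,
       roughly what the flags register holds after ADD AL, BL. */
    int main(void) {
        uint8_t a = 0x80, b = 0x90;
        uint16_t wide = (uint16_t)a + b;    /* keep the carry-out bit */
        uint8_t result = (uint8_t)wide;

        int sign_flag  = (result & 0x80) != 0;  /* high bit = "negative" */
        int zero_flag  = (result == 0);
        int carry_flag = (wide > 0xFF);         /* carry out of bit 7 */

        printf("result=%02X SF=%d ZF=%d CF=%d\n",
               result, sign_flag, zero_flag, carry_flag);
        /* prints: result=10 SF=0 ZF=0 CF=1 */
        return 0;
    }

The article's point is that the real hardware is far messier than this: different instructions update different subsets of the flags, with plenty of special cases in the microcode.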
This discussion was created by hubie (1068) for logged-in users only, but now has been archived.
No new comments can be posted.
(Score: 2) by Snotnose on Monday February 13, @01:24AM (1 child)
Doesn't mean we don't like reading these. I cut my teeth on 8086 assembly and find these very interesting.
Let us now take a minute to swear at Fox Streaming for the way they are incredibly fucking up the Superb Owl.
I just passed a drug test. My dealer has some explaining to do.
(Score: 0) by Anonymous Coward on Monday February 13, @03:48AM
Wow. Somebody else solved that crossword clue too.
(Score: 2) by timbim on Monday February 13, @12:04PM
That was really interesting. I'm happy this information exists.
(Score: 2) by bzipitidoo on Monday February 13, @03:37PM
Surprisingly complicated isn't good. But there were good reasons for it. Being first to market with something that's good enough is critically important.
The processor is a product of "cowboy engineering", in which there are bad or no reasons why a particular design was chosen. The emphasis was on shoving something workable out the door as soon as possible and not spending any more time than necessary on the design. One of the better-known cases is division, and I don't mean only the infamous Pentium division bug. DIV as implemented in the 8086 was extremely slow; when dividing by a power of two, you always want to use a right shift instead. Indeed, floating-point division in the Pentium addressed that slowness with a clever iterative method (radix-4 SRT) rather than something based on grade-school long division. It pulled quotient digits out of a carefully precomputed lookup table, producing two correct quotient bits per iteration no matter what values were being divided. It would have worked perfectly, and did work perfectly in later Pentiums once the table was filled in correctly. In the first Pentiums a few of those table entries were missing, so in rare cases the iteration was fed a bad digit and produced answers that were close but not exact.
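To make the shift-versus-DIV point concrete (a toy example of my own, for division by a power of two):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t x = 50000;

        /* On the 8086, DIV cost on the order of 80-160 clocks depending
           on operand size, while a shift cost only a few clocks. */
        uint16_t by_div   = x / 8;    /* may compile to a divide...    */
        uint16_t by_shift = x >> 3;   /* ...or to a single shift right */

        printf("%u %u\n", by_div, by_shift);   /* prints: 6250 6250 */
        return 0;
    }

For unsigned operands the two forms always agree, and compilers perform this strength reduction automatically; signed division by a power of two needs extra fix-up code because C's / truncates toward zero.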
It was the same way for the PC itself. IBM hastily threw together a design from off-the-shelf components. Some of the infamous expediencies that made sense at the time, and for a short while after, have dogged computing ever since. For instance, limiting the number of "primary" partitions of a hard drive to just 4. 4! Time and time again, the PC's design had to be modified to support more: more memory, more storage space, more speed, more bits. Some of that was due to limitations inherent in 16-bit computing, but for others you wonder what the heck they were thinking.

Then there's the hash they made of expansion card support. That cards had to be probed was terrible. In the early days, one wrong probe could lock up the system. Then the user, or more likely the IT pro, had to configure the software to avoid whatever probe caused the lockup. There's a reason the best of the PC clones always had reset buttons. Also, when building a PC or just adding a card, you had to slide jumpers on and off printed circuit boards to set IRQs and the like.
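For the curious, that four-partition limit falls straight out of the MBR's on-disk layout: the boot sector reserves exactly four 16-byte partition entries at offset 0x1BE, followed by a 2-byte signature. A minimal C sketch of that layout (struct and field names are my own):

    #include <stdint.h>

    /* Classic MBR boot sector: 446 bytes of boot code, then exactly
       four 16-byte partition entries, then a 2-byte signature. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  boot_flag;     /* 0x80 = active/bootable, 0x00 = not */
        uint8_t  chs_first[3];  /* CHS address of first sector (legacy) */
        uint8_t  type;          /* partition type, e.g. 0x07 for NTFS */
        uint8_t  chs_last[3];   /* CHS address of last sector (legacy) */
        uint32_t lba_first;     /* LBA of first sector */
        uint32_t num_sectors;   /* partition size in sectors */
    } mbr_partition_entry;

    typedef struct {
        uint8_t             bootstrap[446];  /* boot loader code */
        mbr_partition_entry partitions[4];   /* the hard limit: 4 */
        uint16_t            signature;       /* 0x55, 0xAA on disk */
    } mbr_sector;
    #pragma pack(pop)

    /* The whole structure must be exactly one 512-byte sector. */
    _Static_assert(sizeof(mbr_sector) == 512, "MBR must be 512 bytes");

And those 32-bit sector fields are another expediency with a long tail: with 512-byte sectors they cap an MBR disk at 2 TiB, which is part of why GPT eventually had to replace the whole scheme.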