This is the sixth video in Part 1 of the Performance-Aware Programming series. Please see the Table of Contents to quickly navigate through the rest of the course as it is updated weekly. The homework files (listings 43, 44, and optionally 45) are available on GitHub.
Now that we understand how instructions are encoded on the 8086, believe it or not, we implicitly understand how instructions are encoded pretty much everywhere. There simply isn't that much variability in how instruction sets are encoded, and 8086 is a fairly complicated instruction set, so if you can decode it, you probably won't be surprised by most modern instruction set encodings.
If you were to fast forward to today and look at how x64 instructions are encoded — which of course, we’ll do later in the course — they would look shockingly similar to you. You’ll notice a few differences, but you would immediately “get it”. And if you looked at simpler encodings like RISC-V or ARM (although ARM is getting pretty complicated too these days!), you would probably find them even easier to decode than 8086.
We don’t need to spend any more time looking at the specifics of decoding. No matter what the CPU is, we could apply what we’ve already learned about decoding to understand its instruction format. We just find the architecture manual, look at the bit patterns, and do exactly the same procedure we did for 8086.
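To make that procedure concrete, here is a minimal sketch in Python of what "look at the bit patterns and decode" means in practice. It handles only one instruction form from the 8086 manual — the register-to-register MOV (opcode bits 100010, then the d and w flags, then a mod/reg/r/m byte) — and is an illustrative example, not the course's reference decoder:

```python
# Register tables from the 8086 manual, indexed by the 3-bit reg/rm fields.
REGS_W1 = ["ax", "cx", "dx", "bx", "sp", "bp", "si", "di"]  # w = 1 (16-bit)
REGS_W0 = ["al", "cl", "dl", "bl", "ah", "ch", "dh", "bh"]  # w = 0 (8-bit)

def decode_mov_rr(byte1, byte2):
    # First byte: 100010 d w
    assert (byte1 >> 2) == 0b100010, "not a register/memory MOV"
    d = (byte1 >> 1) & 1   # direction: 1 means the reg field is the destination
    w = byte1 & 1          # width: 1 means 16-bit registers

    # Second byte: mod (2 bits), reg (3 bits), r/m (3 bits)
    mod = byte2 >> 6
    assert mod == 0b11, "only register-to-register mode handled in this sketch"
    regs = REGS_W1 if w else REGS_W0
    reg = regs[(byte2 >> 3) & 0b111]
    rm = regs[byte2 & 0b111]

    dst, src = (reg, rm) if d else (rm, reg)
    return f"mov {dst}, {src}"

print(decode_mov_rr(0x89, 0xD9))  # prints "mov cx, bx"
```

The same pattern-matching approach scales to a full decoder: each opcode's bit layout in the manual becomes one more branch that pulls out its fields and looks them up in the appropriate tables.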
That’s all there is to it. There’s no more mystery. That's how all of the code that is running everywhere, on everything, is represented!
Now it's time to demystify the second part: once the CPU decodes an instruction, what does it actually do? What are these operations that we have been decoding, and how do they combine to form actual programs like the ones we write in higher-level languages?