As we’ve seen in the previous post in this series about the future of the computer, Moore’s law predicts that every other year, the number of transistors on a chip doubles. So what? What are we going to do with all those transistors?
In the 90s, the life of a chip designer was rather straightforward: we’ll do the same, but faster. The industry had just emerged from the period of the great RISC-CISC war, which was basically a war between Intel and its competitors. The machine language of a CISC processor had lots of instructions, but an instruction could take many clock cycles to execute. In a RISC processor, the instruction set was limited, but each instruction was designed to execute in (ideally) a single clock cycle. If we simplify history a bit, the two designs converged in the 90s. RISC processors used the extra transistors to add more instructions (multiplications! floating point!). CISC processors used them to execute the existing instructions faster.
It’s an interesting case study to look at how Intel made their CISC x86 architecture more RISC-like. First, they built a layer under the assembly language: microcode. This wasn’t new: microcode is just a technique to simplify the design of CISC processors, and it’s been used since the 50s. Complex x86 instructions are broken down into simpler micro-operations. The end result is a small set of basic operations that are autonomous and independent.
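To make that concrete, here’s a toy sketch of what such a decoding step does. The instruction and micro-operation names are purely illustrative, not Intel’s actual microcode:

```python
# Toy model: a complex CISC instruction is decoded into simple,
# RISC-like micro-operations. All names here are made up for illustration.

def decode(instruction):
    """Map a fictional x86-style instruction to a list of micro-ops."""
    table = {
        # A register-to-register add is already simple: one micro-op.
        "add eax, ebx": ["ADD eax, ebx"],
        # An add with a memory operand hides a load and a store inside it,
        # so it decodes into three basic, independent micro-ops.
        "add [mem], eax": ["LOAD tmp, [mem]",
                           "ADD tmp, eax",
                           "STORE [mem], tmp"],
    }
    return table[instruction]

print(decode("add [mem], eax"))
```

The point is that after decoding, the execution core only ever sees the small set of basic micro-operations, no matter how baroque the original instruction was.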
Each instruction consists of a number of steps: fetching the instruction, decoding it, fetching the data, executing the instruction and storing the result. Now that we have split all of this up, we can execute it in a pipeline that works just like a conveyor belt in a factory. The 90s were the heyday of pipeline design: deeper pipelines, superpipelining, superscalar designs (with more than one pipeline in parallel), and so on. Each individual instruction may still take a long time, but because the pipeline was processing several instructions at once, the effective processing speed was much higher.
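A toy timing model shows why the conveyor belt pays off. Assume an ideal pipeline (no stalls or hazards) where every instruction passes through the same number of one-cycle stages:

```python
# Toy timing model for an ideal instruction pipeline (no stalls, no hazards).
# Every instruction passes through `stages` steps of one cycle each.

def cycles_unpipelined(n_instructions, stages):
    # One instruction must finish completely before the next one starts.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages):
    # Like a conveyor belt: once the first instruction has filled the
    # pipeline, one instruction completes every cycle.
    return stages + (n_instructions - 1)

n, s = 1000, 5
print(cycles_unpipelined(n, s))  # 5000 cycles
print(cycles_pipelined(n, s))    # 1004 cycles
```

With 5 stages and 1000 instructions, the pipelined machine approaches a 5x throughput gain, even though each individual instruction still takes 5 cycles from start to finish. Real pipelines fall short of this ideal because of branches and data dependencies, but the principle is the same.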
The future looked great: thanks to Moore’s law, we could cram more transistors on a chip and use those to do the same, but faster. Life was good for processor manufacturers in the 90s. But in 2004, the whole industry hit a brick wall. The party was over. Intel was about to launch its latest and greatest Arnold-Schwarzenegger-Terminator of a CISC CPU, but the Tejas and Jayhawk architectures were shelved. It turned out Moore’s law by itself wasn’t sufficient to build faster processors. The law had a little twin brother: Dennard scaling. And that law had just passed its best-by date in 2004.
More about Dennard & the consequences in the next post!