
This is a series about the future of computing.

The end of Moore’s Law

The Zilog Z80A, the first processor I ever used, in my Sinclair ZX81

As the first post in this series about the future of the computer, we look at the basic hardware component: the processor. The development of processors (and of chips in general) follows probably the best-known law in computing: Moore’s law. Gordon Moore, who later co-founded Intel, noticed in 1965 that the number of transistors put on a chip doubled every other year. Fifty years have passed since then, so processors nowadays have 2²⁵ (about 33.5 million) times more transistors than back then.
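For the curious, here is a quick back-of-the-envelope check of that figure in Python, a minimal sketch using nothing beyond the numbers mentioned above:

    # One doubling every other year, compounded over 50 years.
    years = 50
    years_per_doubling = 2

    doublings = years // years_per_doubling   # 25
    multiplier = 2 ** doublings

    print(f"{doublings} doublings -> {multiplier:,} times more transistors")
    # 25 doublings -> 33,554,432 times more transistors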

Historically, this law has been remarkably accurate. It almost feels like a self-fulfilling prophecy, with manufacturers aiming to live up to it. One very explicit example is Intel with its tick-tock development model. Every other year, Intel moves to a new production process. This has worked really well for them: when they started tick-tock in 2006, AMD was a tough competitor. A few ticks and tocks later, Intel was both the technological and the market leader.

Intel announces new processors yearly. The first step, in 2006, was a tick: the existing architecture was moved to a new process, 65 nm at the time. The next year, the process stayed the same and Intel improved the processor architecture. This keeps a clean separation between process and architectural improvements, and it lets Intel avoid typical problems in the production of new processors. When a manufacturer updates both its process and its architecture at the same time, it can be hard to figure out whether a bug is caused by the process or by the architecture. TI’s TMS320C62 DSP processors, for example, suffered serious delays for exactly this reason.

When Intel ticks, it shrinks the process by a factor of roughly √2. On the same surface, that means exactly twice as many transistors every other year. Self-fulfilling prophecy!
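To make the arithmetic explicit, a minimal, idealised sketch (real process nodes don’t scale this cleanly, but the principle is the same):

    import math

    shrink = 1 / math.sqrt(2)   # linear shrink per tick
    area_factor = shrink ** 2   # relative area of one transistor after the shrink

    print(f"area per transistor: x{area_factor:.2f}")                   # x0.50
    print(f"transistors on the same die area: x{1 / area_factor:.0f}")  # x2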

FinFET is one way to increase the contact surface of the gates while shrinking the production process.

Producing smaller components brings its own challenges. For example, the contact surface of the transistor gates becomes smaller, which makes it harder to get enough electrons through. To solve that, processors built on processes of roughly 20-25 nm and smaller have gates with a “fin” standing upright; the gate is wrapped over the fin, giving a larger contact surface. Techniques like this are impressive workarounds, but physically there is a hard limit on how small we can build transistors. And that limit is easy to calculate.

The most recent Intel Broadwell processors use a 14 nm process. Graphene, a sheet of carbon atoms exactly one atom thick, is 0.345 nm thick (and silicon atoms are larger than carbon atoms). Even if we assume that Intel keeps shrinking by a factor of √2 every other year, the process drops below the thickness of a single atom after roughly eleven ticks. We can safely state that the end of an era is near.
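If you want to check that limit yourself, here is a minimal sketch, assuming a clean 1/√2 shrink per tick starting from the 14 nm Broadwell node and taking the 0.345 nm graphene layer from above as the “one atom” yardstick:

    import math

    node_nm = 14.0     # Intel Broadwell process
    atom_nm = 0.345    # thickness of a single graphene layer
    shrink = 1 / math.sqrt(2)

    ticks = 0
    while node_nm > atom_nm:
        node_nm *= shrink
        ticks += 1

    print(f"below one atomic layer after {ticks} ticks ({node_nm:.3f} nm)")
    # below one atomic layer after 11 ticks (0.309 nm)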

In the next posts, I will cover other aspects of Moore’s Law: heat, production costs, and more. Did you hear about a material that can replace silicon? Or do you know how processors can evolve after Moore’s Law? Let me know in the comments below!

The past of the computer

A bit older than 2005… Can you guess which computer and OS this is?

In this first post in the series about the future of the computer, let’s first look back. I want to show how short-sighted we are when we look forward. Looking back, we see obvious trends like smartphones and cloud computing. But we could not have guessed those by simply extrapolating our knowledge of the present into the future: that doesn’t work for the stock market and it doesn’t work for technology.

Suppose we had looked forward ten years ago. The trends back then were faster PCs and faster Internet. Concretely:

  • The processor war was about to start up again. AMD launched the first 64-bit x86 processor in 2003: the K8 architecture, sold as the Opteron and the Athlon 64. After that, they were also first with dual-core server processors. Intel kept itself busy mainly with lawsuits. But in 2006, the giant awoke: it announced the Core architecture and started the incredibly successful tick-tock development model.
  • Belgacom increased its ADSL speeds to nearly 5 Mbit/s and had about a million customers. Telenet was coming up strong and owned around 25% of the residential market, with a customer base of around 600,000.
  • Mobile computing was very limited. Psion had opened Symbian to a consortium of GSM manufacturers like Nokia and Ericsson. Windows Mobile was strong in the US and the BlackBerry slowly started its victory march. It was two years before the iPhone was announced.

In 2005, it didn’t look like the smartphone had much of a future. Mail worked well enough: that’s what made the BlackBerry popular. But browsing wasn’t very enjoyable; until 2004, Windows Mobile only supported QVGA resolution (240 × 320). The state of apps was just as miserable. Even Apple didn’t include an app store in iOS 1.0: if you wanted to install an app, you had to jailbreak your iPhone, google for the app and install it yourself.

The future, it seemed, belonged to the laptop. Indeed, in 2007 Intel considered mobile processors powerful enough to base a new class of small laptops on them: the netbooks, which had Wi-Fi and sometimes 3G for connectivity. In 2005, nobody saw it coming that mobile computing would be thoroughly revolutionised by smartphones and apps. And I haven’t even mentioned tablets.

Today, it looks obvious that the future belongs to smartphones and their apps. But that doesn’t mean that’s what the future will look like; it means we have gotten used to smartphones and apps. Just like Henry Ford’s “faster horses”, we mostly expect more of the same, only better. Chances are slim that that’s all we will get. That’s why I’m looking at a few trends that may be worrying or controversial.

Will the future look exactly as I say it will? Who knows. But I expect to recognise a few elements.

Which trends do you see under the surface at this moment? Will they determine the future of the computer? Let me know in the comments below!

The future of the computer


In 10 or 20 years, computers will be very different from what they are now. Possibly in their shape, too, but it is the underlying technologies that face a number of revolutions, revolutions as fundamental as the rise of home computers, PCs, the Internet, cloud computing, smartphones, and so on. This post starts a series about the future of the computer, seen from a technological point of view.

Every so often, we can read in the media how differently we will work with our computers in the future. From self-driving suitcases with speech recognition to strapping a smartphone to your face for virtual reality, the user interface guarantees a never-ending feed of hilarious “news”. Let’s dig under this theatrical surface and look at what happens underneath, in the beating heart of what a computer is and how it gets us the information we asked for.

I see three areas in which computers will change fundamentally:

  • “Free compute” and the end of Moore’s law. We have grown so used to the exponential growth of computers that we have started to assume this is normal. On the one hand, Moore’s law will continue to have an impact for a good while to come: not everything we use now is built on the latest process. On the other hand, we are reaching the limits of what is physically possible.
  • Internet of Things. As the slogan says: the digitalisation of the world has only just begun. Ideas are partially digitised. What does the world look like if we digitise the material side?
  • Distributed applications. Some of our applications are currently decentralised, mostly to make them more resilient against disasters. Blockchain technology allows completely distributed applications that don’t need central management.

In the coming months, I will expand on each of these three areas on this blog.