Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 21st year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2016 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
I watched a very interesting BBC Horizon documentary last night and it pointed out that we're reaching the limits of silicon technology for computer circuitry.
For some time, people have been predicting that Moore's Law would fail -- although those clever boffins who design stuff have been able to stay (just) ahead of the curve for the time being. However, it's likely to be just a few years before we hit the ceiling associated with traditional manufacturing techniques and clock speeds.
It's not that we can't design devices to switch more quickly -- it's simply (as I've mentioned before in this column) that electricity is just too damned slow.
The reality is that the speed of light, which was once thought to be incredibly fast, is now actually incredibly slow.
Well, light (and thus electricity) travels at about 300mm per nanosecond. That's just one foot (in the old money).
Likewise, a nanosecond was once thought to be an incredibly short period of time, but these days it's an age. With modern CPUs clocked at 3GHz or more, we're talking about clock periods of just 333 picoseconds or so. Light/electricity can only travel about 100mm in that timeframe -- and that's at the maximum speed. When travelling through anything other than a perfect vacuum, signals move more slowly still.
For example, in an electrical circuit, signals can travel at as little as 60% of the speed of light, so one clock cycle of our 3GHz CPU now sees an electrical signal able to propagate just 60mm before the next clock cycle starts.
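The arithmetic above is easy to sanity-check. Here's a minimal back-of-the-envelope sketch (the helper function is mine, purely for illustration) that reproduces the figures in the column:

```python
# Rough check of the propagation figures above (illustrative helper only).
C_MM_PER_NS = 300.0  # approximate speed of light: 300 mm per nanosecond

def distance_per_cycle_mm(clock_hz: float, velocity_factor: float = 1.0) -> float:
    """How far a signal can propagate in one clock period, in millimetres.

    velocity_factor is the signal speed as a fraction of c (1.0 = vacuum).
    """
    period_ns = 1e9 / clock_hz
    return C_MM_PER_NS * velocity_factor * period_ns

# Light in a vacuum over one 3GHz clock period: ~100mm
print(round(distance_per_cycle_mm(3e9), 1))
# An electrical signal at 60% of c over the same period: ~60mm
print(round(distance_per_cycle_mm(3e9, 0.6), 1))
```

The 60% velocity factor is just the figure quoted in the column; real interconnects vary either side of it.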
Now, given that the dies we're dealing with are still fairly small (20mm x 20mm or so) and the conductive traces are still in the sub-mm, or at most a few mm, range, you might think we'll be safe for a while to come.
The problem is that every mm of conductor length introduces a delay of about 3.3 picoseconds (even at the full speed of light), and making sure that all the relevant data pulses reach the right point on the die at the right time becomes an enormous task.
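To put that per-millimetre delay in context against the clock period, here's a small sketch (again, my own illustrative numbers and helper, not from any design tool):

```python
# Illustrative sketch of the per-millimetre delay argument.
C_MM_PER_PS = 0.3  # ~speed of light: 300 mm/ns = 0.3 mm per picosecond

def trace_delay_ps(length_mm: float, velocity_factor: float = 1.0) -> float:
    """Propagation delay of a conductor of the given length, in picoseconds."""
    return length_mm / (C_MM_PER_PS * velocity_factor)

period_ps = 1e12 / 3e9            # one 3GHz clock period: ~333 ps
delay = trace_delay_ps(5)         # a hypothetical 5mm trace at full light speed
print(delay)                      # ~16.7 ps
print(delay / period_ps)          # ~5% of the clock period gone to wire alone
```

Even a few millimetres of wire eats a noticeable slice of a 333-picosecond budget before any gate has done anything, which is the heart of the timing problem described above.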
Add to this the fact that whenever a signal is translated, inverted or gated it is subjected to a further small delay -- and the difficulty of producing large, complex, reliable processors in silicon is growing at a near-exponential rate as we approach the limits that physics imposes on us.
So where to from here?
If we assume that the current design and fabrication techniques for silicon-based processors will hit their limits within a decade, how will we continue to grow our computing power at the rate we've become used to?
Perhaps a more important question is: do we need to keep increasing the power of our CPUs?
Isn't your desktop PC or your smartphone already fast enough for all practical purposes?
Maybe we'll see a period of time in the near future when processor power plateaus -- until we find a successor to silicon-based technology.
What will that successor be?
It's starting to look as if quantum computers may be the next logical progression -- since they are far less likely to be affected by these limits of physics and may even sidestep them thanks to the quirkiness of the quantum world.
I found this an interesting read, and perhaps it provides a glimmer of light from the future.
Whatever the answer, I sure hope I'm around for long enough to see yet another revolution in computer technology (as happened back in the 1970s when microprocessors became a thing).
Do any readers dare to make a prediction as to the future of computing?
Will we just keep chipping away at the limits of current technology with ever-diminishing returns? Or will some revolutionary breakthrough (such as quantum computers) arrive and change the entire face of the industry?
Have your say in the Aardvark Forums.