Computing Power Can Keep Growing as Moore’s Law Winds Down. Here’s How


Moore’s Law is faltering, but that doesn’t mean the end of progress in processing power. Rather than relying on semiconductor physics and silicon-fabrication technology, though, we need to turn to innovations in software, algorithms, and hardware, says a group of leading experts.
Despite powering exponential growth in computing power for nearly half a century, the miniaturization of transistors is about to hit fundamental limits. Not only does the physics behind these performance gains start to change as transistors approach atomic scales, but the costs of further shrinking quickly become prohibitive.
A host of alternatives to silicon are waiting in the wings, such as photonics, carbon nanotube transistors, and superconducting circuits, but none is close to replacing the technology we’ve become so reliant on. According to the authors of a new paper in Science, though, all is not lost, because there is huge scope for performance gains in software, algorithms, and chip architectures.
While Moore’s Law was doubling chip performance every year or two, it was easy to get lazy, the authors say. Because regular boosts in speed were guaranteed, software development focused on cutting the time it takes to develop an application rather than the time it takes to run it. Similarly, there was little incentive to develop a chip specialized for a particular task when you knew that in a couple of years a general-purpose chip would outperform it.
Now that those easy gains are gone, though, the authors say it will be increasingly important to refocus computer scientists’ efforts on optimizing every layer of the computing stack for performance. It will be hard, and the gains will be uneven and sporadic.
But they also highlight how much headroom there is for improvement. In a simple example using matrix multiplication, they show that just switching from the programming language Python to C can generate a 50x speedup, and that further optimizing the code to take advantage of chip hardware features pushes the total improvement to around 60,000x.
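To make the baseline concrete, here is a minimal sketch of the kind of naive triple-loop matrix multiply the paper starts from, written as a straight C port of the Python version; the function name, row-major double matrices, and square dimensions are illustrative assumptions rather than the authors’ exact benchmark code.

```c
/* Naive dense matrix multiply in C: the kind of direct port from Python
 * that the authors report already yields roughly a 50x speedup, before
 * any hardware-aware tuning. Matrices are assumed square, row-major. */
#include <stddef.h>

void matmul_naive(size_t n, const double *a, const double *b, double *c)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];  /* row-major indexing */
            c[i * n + j] = sum;
        }
}
```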
For software, many of the gains will come from better tailoring code to hardware, for instance by exploiting the multiple cores on modern chips to run operations in parallel rather than sequentially. Most modern software also carries huge amounts of bloat because code is lazily repurposed for things it wasn’t built for.
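As a rough illustration of what “tailoring code to hardware” can mean, the same kernel can be restructured so the outer loop is split across cores and the inner loop walks memory contiguously; this sketch uses OpenMP and an i-k-j loop order as one common approach, assuming an OpenMP-capable compiler (e.g. `cc -O3 -fopenmp`), and is not the paper’s specific optimized code.

```c
/* Same kernel restructured to exploit hardware features: rows of the
 * result are distributed across cores with OpenMP, and the i-k-j loop
 * order makes the inner loop stride contiguously through b and c,
 * which is friendlier to caches and to the compiler's vectorizer. */
#include <stddef.h>
#include <string.h>

void matmul_parallel(size_t n, const double *a, const double *b, double *c)
{
    memset(c, 0, n * n * sizeof *c);       /* start from a zeroed result */

    #pragma omp parallel for               /* each thread owns whole rows of c */
    for (size_t i = 0; i < n; i++)
        for (size_t k = 0; k < n; k++) {
            double aik = a[i * n + k];
            for (size_t j = 0; j < n; j++) /* contiguous, vectorizable inner loop */
                c[i * n + j] += aik * b[k * n + j];
        }
}
```

Because each thread only writes to its own rows of the result, the parallel loop needs no locking; the remaining gap to the paper’s reported speedups comes from deeper hardware-specific tuning such as blocking for caches and explicit vector instructions.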
Unfortunately, the authors note that high-performance code tends to be much more complicated to write than slow code, so there needs to be better training for programmers and more productivity tools to assist them.
At the level of algorithms, the path to improvements is less clear, because breakthroughs tend to rely on human ingenuity and are often specific to a particular problem. But despite their unpredictability, the potential...
