Thursday, 29 October 2009

Speed limit for computer processors - serial vs parallel

This news item from Nature, about this PRL paper, discusses how computer processors are going to hit a speed limit set by the speed at which a system can make transitions between quantum states.

There are two independent bounds on this minimum transition time: one based on the average energy of the quantum system, the other on the uncertainty in the system's energy. In their paper, Levitin and Toffoli unify the two bounds and show that there is an absolute limit on the number of operations per second achievable by a computer of a given energy.

I'm not an expert in quantum information, so all I can say is that it looks interesting. There are implications for me, because most of my work is fairly intensive computer simulation. Some of what I do simply needs fast processors: there are sections of my Monte Carlo simulations that cannot be parallelised (fancy cluster algorithms being one example). So for these, in principle, the bound limits what could ever be done.
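A minimal sketch (illustrative only, not my actual simulation code) of why a Wolff-style cluster update resists parallelisation: the cluster is grown one site at a time, and whether a neighbour joins depends on everything added before it.

```python
# Toy Wolff cluster update on a 1D periodic Ising chain, to show the
# serial bottleneck: the frontier loop cannot be split across processors
# because each addition depends on all earlier ones.
import math
import random

def wolff_update(spins, beta, rng):
    """One Wolff cluster flip on a 1D periodic Ising chain."""
    n = len(spins)
    p_add = 1.0 - math.exp(-2.0 * beta)    # bond-activation probability
    seed = rng.randrange(n)
    cluster = {seed}
    frontier = [seed]
    while frontier:                         # inherently serial loop:
        site = frontier.pop()               # the frontier at each step
        for nbr in ((site - 1) % n, (site + 1) % n):
            if (nbr not in cluster and spins[nbr] == spins[site]
                    and rng.random() < p_add):
                cluster.add(nbr)            # depends on all earlier additions
                frontier.append(nbr)
    for site in cluster:                    # flip the finished cluster
        spins[site] = -spins[site]
    return len(cluster)

rng = random.Random(42)
spins = [1] * 32
size = wolff_update(spins, beta=0.5, rng=rng)
```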

However, mostly my limit is the statistics I can collect, and that can be solved by using more and more processors. The move from a single core being standard to eight these days has been a revolution in terms of what I can get done in a reasonable time scale.
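Collecting independent statistics is embarrassingly parallel. A toy sketch of farming independent runs out to every core with Python's standard library (estimating pi by random sampling stands in for a real simulation):

```python
# Eight independent Monte Carlo runs, one per worker process; each run
# collects its own statistics and the results are averaged at the end.
import random
from multiprocessing import Pool

def one_run(seed, n=100_000):
    """A single independent run with its own random-number stream."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))
    return 4.0 * hits / n

if __name__ == "__main__":
    with Pool() as pool:                         # one worker per core
        estimates = pool.map(one_run, range(8))  # runs execute concurrently
    print(sum(estimates) / len(estimates))       # ≈ 3.14
```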

In fact, one very interesting development is using standard computer graphics cards to perform molecular dynamics (MD) simulations. I've only read the abstract of this paper, I'm afraid, but they've apparently done just this. The GPUs on graphics cards designed for games contain many small processing cores, and these can all work on the problem together more efficiently than one super-powered CPU doing it on its own.
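The pattern that makes GPUs effective for MD can be sketched on the CPU: the force on each particle is independent of the others, so a GPU can assign one thread per particle. Here numpy's vectorisation plays that role, with a simple Lennard-Jones potential (an illustration of the idea, not the method from the paper):

```python
# Data parallelism, GPU-style, sketched on the CPU with numpy: every
# particle's net force is computed at once rather than in a serial loop.
import numpy as np

def pairwise_forces(pos, eps=1.0, sigma=1.0):
    """Net Lennard-Jones force on every particle, all pairs at once."""
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) separations
    r2 = np.sum(diff ** 2, axis=-1)                   # squared distances
    np.fill_diagonal(r2, np.inf)                      # no self-interaction
    inv6 = (sigma ** 2 / r2) ** 3                     # (sigma/r)^6
    mag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2  # |F|/r for each pair
    return np.sum(mag[:, :, None] * diff, axis=1)     # (N, 3) net forces

pos = np.array([[0.0, 0.0, 0.0],
                [1.5, 0.0, 0.0],
                [0.0, 1.5, 0.0]])
forces = pairwise_forces(pos)
```

Newton's third law is a handy sanity check here: the forces summed over all particles should vanish.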

So next time you say that computer games are a waste of time think of this...


  1. What Profs. Lev B. Levitin and Tommaso Toffoli's paper "Fundamental Limit on the Rate of Quantum Dynamics: The Unified Bound Is Tight" (Physical Review Letters, Vol. 103, Issue 16 [October 2009]; also at arXiv:0905.3417) demonstrates is that processor speed can diverge to infinity if the energy of the system diverges to infinity.

    Under the Margolus-Levitin theorem, the bound was given as t >= h/(4*E), with t being the minimum operation cycle in seconds, h being Planck's constant, and E being energy in joules. Levitin and Toffoli's paper tightens this bound to t >= h/(2*E) and generalizes it to all cases.

    With this new bound, one obtains ~ 3.31303448*10^-34 seconds as the minimum operation cycle per joule of energy; or for the reciprocal, a maximum of ~ 3.0183809*10^33 operations per second per joule of energy.
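    Those figures can be checked directly. (The value of h below is the current exact SI value; the extra digits quoted above reflect the slightly different 2006 CODATA value in use at the time.)

```python
# Checking the arithmetic: the unified bound t >= h/(2E) at E = 1 joule.
h = 6.62607015e-34        # Planck's constant, J*s (exact SI value)
E = 1.0                   # one joule
t_min = h / (2.0 * E)     # minimum operation cycle, seconds
ops_max = 1.0 / t_min     # maximum operations per second per joule
print(t_min)              # ≈ 3.313e-34 s
print(ops_max)            # ≈ 3.018e33 ops/s
```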

    So notice here that processor speed can increase without limit if the energy of the system is increased without limit. When the authors of the paper speak of a fundamental speed limit of computation, they are referring to per unit of energy.

    In the article "Computers Faster Only for 75 More Years: Physicists determine nature's limit to making faster processors" (Lauren Schenkman, Inside Science News Service, October 13, 2009), paper co-author Levitin is quoted as saying, "If we believe in Moore's law ... then it would take about 75 to 80 years to achieve this quantum limit." What Levitin is referring to is that, given the energy density of ordinary matter, processors cannot be made with greater processing density after around that time; that is, one won't be able to fit more processing power into the same amount of space. Even at the same energy density, one can still increase processing speed by increasing the size or number of processors, though they would then take up more space; and one can increase the processing density without limit if one increases the energy density without limit.
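    The "75 to 80 years" figure is just a doubling-time extrapolation, which can be sketched in a couple of lines. The headroom ratio and doubling period below are illustrative guesses of mine, not figures from the article:

```python
# Back-of-envelope Moore's-law extrapolation: if processing density
# doubles every T years, closing a headroom factor of R takes
# T * log2(R) years. R = 1e16 and T = 1.5 are assumed for illustration.
import math

def years_to_limit(ratio, doubling_years):
    """Years of doubling needed to close a factor `ratio`."""
    return doubling_years * math.log2(ratio)

# e.g. a quantum limit ~1e16 beyond today's chips, doubling every 18 months
print(round(years_to_limit(1e16, 1.5)))   # → 80
```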

    In the same Inside Science article, Scott Aaronson, an assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology in Cambridge, is quoted as saying that what this bound means is that "we can't build infinitely fast computers," which is a misstatement of what the bound actually states. The bound actually states that one can build infinitely fast computers if one has an infinite amount of energy.

    For the cosmological limits to computation, see physicist and mathematician Prof. Frank J. Tipler's paper below, which demonstrates that the known laws of physics (i.e., the Second Law of Thermodynamics, general relativity, quantum mechanics, and the Standard Model of particle physics) require that the universe end in the Omega Point (the final cosmological singularity and state of infinite informational capacity identified as being God), and that we now have the quantum gravity Theory of Everything (TOE):

    F. J. Tipler, "The structure of the world from pure numbers," Reports on Progress in Physics, Vol. 68, No. 4 (April 2005), pp. 897-964. Also released as "Feynman-Weinberg Quantum Gravity and the Extended Standard Model as a Theory of Everything," arXiv:0704.3276, April 24, 2007.

    See also the below resource:

    "Omega Point (Tipler)," Wikipedia, October 30, 2009.

  2. Pardon me, I incorrectly stated that the Levitin and Toffoli paper tightens the Margolus-Levitin bound. Rather, it generalizes it to all cases.

  3. Wow, thanks for all the extra info. I like the idea of having an LHC sized computer to get the energy density needed for a calculation - very Hitch Hiker's Guide!

    From my perspective it feels like we're already hitting a wall with processor speed. Chips don't seem to get any quicker any more; you just get more cores on one. That suits me, because my work parallelises well, but it is changing the way I work.

    The AIP site seems to be down today, but I'll take a look at the other articles you mentioned.

  4. You're welcome, Doug Ashton. And yes, do check out Prof. Tipler's Reports on Progress in Physics paper. It's quite mind-blowing.