With computers so regularly seeing dramatic increases in processing speed, it seems it shouldn't be too long before the machines become infinitely fast -- except they can't. A pair of physicists has shown that computers have a speed limit as unbreakable as the speed of light. If processors continue to accelerate as they have in the past, we'll hit the wall of faster processing in less than a century.
Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so by cramming ever-tinier transistors onto a chip. His prediction became known as Moore's Law, and it has held true throughout the evolution of computers -- the fastest processor today beats out a ten-year-old competitor by a factor of about 30.
If components are to continue shrinking, physicists must eventually code bits of information onto ever smaller particles. Smaller means faster in the microelectronic world, but physicists Lev Levitin and Tommaso Toffoli of Boston University in Massachusetts have slapped a speed limit on computing, no matter how small the components get.
"If we believe in Moore's law...then it would take about 75 to 80 years to achieve this quantum limit," Levitin said.
"No system can overcome that limit. It doesn't depend on the physical nature of the system or how it's implemented, what algorithm you use for computation ... any choice of hardware and software," Levitin said. "This bound poses an absolute law of nature, just like the speed of light."
Scott Aaronson, an assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology in Cambridge, thought Levitin's estimate of 75 years extremely optimistic.
Moore's Law, he said, probably won't hold for more than 20 years.
In the early 1980s, Levitin singled out a quantum elementary operation, the most basic task a quantum computer could carry out. In a paper published today in the journal Physical Review Letters, Levitin and Toffoli present an equation for the minimum sliver of time it takes for this elementary operation to occur. This establishes the speed limit for all possible computers.
Using their equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer spits out ten quadrillion more operations each second than today's fastest processors.
"It's very important to try to establish a fundamental limit -- how far we can go using these resources," Levitin explained.
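The article doesn't print the equation itself, but the quantum speed limit Levitin is known for is commonly stated as the Margolus-Levitin bound: a system with average energy E can perform at most 2E/(pi * h-bar) elementary operations per second. Assuming that form of the bound, here is a rough sketch of the arithmetic (my own illustration, not from the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def max_ops_per_second(energy_joules: float) -> float:
    """Margolus-Levitin bound: the maximum number of elementary
    operations per second for a system with average energy E."""
    return 2.0 * energy_joules / (math.pi * HBAR)

# One joule of energy caps out near 6e33 operations per second --
# vastly beyond any present-day processor.
print(f"{max_ops_per_second(1.0):.2e}")
```

The point of the calculation is only to show the scale: the ceiling is astronomically far above current hardware, which is why Levitin's 75-to-80-year extrapolation even makes sense.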
The physicists pointed out that technological barriers might slow down Moore's law as we approach this limit. Quantum computers, unlike electrical ones, can't handle "noise" -- a kink in a wire or a change in temperature can cause havoc. Overcoming this weakness to make quantum computing a reality will take time and more research.
As computer components are packed tighter and tighter together, companies are finding that the newer processors are getting hotter sooner than they are getting faster. Hence the recent trend in duo and quad-core processing; rather than build faster processors, manufacturers place them in tandem to keep the heat levels tolerable while computing speeds shoot up. Scientists who need to churn through vast numbers of calculations might one day turn to superconducting computers cooled to drastically frigid temperatures. But even with these clever tactics, Levitin and Toffoli said, there's no getting past the fundamental speed limit.
Aaronson called it beautiful that such a limit exists.
"From a theorist's perspective, it's good to know that fundamental limits are there, sort of an absolute ceiling," he said. "You may say it's disappointing that we can't build infinitely fast computers, but as a picture of the world, if you have a theory of physics that allows for infinitely fast computation, there could be a problem with that theory."
Source: Live Science : http://www.livescience.com/technology/0 … speed.html
Thanks for the info, Xsever.
No problem. You are welcome.
Don't forget that we hit a wall at the 3-3.2 GHz mark (stock speed with acceptable heat dissipation). Since then, processors have gained more cores, but not actual clock speed.
In addition, we still have a long way to go until we master multi-core programming. Think about it for a second: all the problem solving we do in life is sequential.
How would you split an equation so that it can be solved by all cores at the same time?
Holy banana, 75 years? I'd be dead by that time.
Everything has limits; you can't keep shrinking stuff forever.
As for multi-core programming, yes, I think there are some serious limitations in it, such as the example you gave, but I'm sure someone will come up with something to link those cores together.
Thanks Xsever, this is informative, but it did let me down :).
Don't worry, they will always come up with new things we didn't expect before :) But as far as multi-core goes, the computer does each operation on a separate core; I don't think they have reached split capability, and the idea of it sounds crazy :)
Xsever wrote: How would you split an equation so that it can be solved by all cores at the same time?
That's where concurrent programming excels, though as long as you're still thinking imperatively or in object-oriented terms, you'll never achieve the full capabilities of concurrency.
The optimal power of concurrency is achieved through functional paradigms.
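To make the "split an equation" question concrete, here's a minimal sketch (my own toy example, not from any post above): a sum like 1^2 + 2^2 + ... + N^2 is associative, so each core can total its own chunk independently and the partial results combine at the end. I'm using Python's `concurrent.futures` just to show the shape of the idea; for CPU-bound work, real speedups would need processes rather than threads because of the GIL.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(lo: int, hi: int) -> int:
    """Sum k^2 for k in [lo, hi) -- one core's share of the work."""
    return sum(k * k for k in range(lo, hi))

def parallel_sum_of_squares(n: int, workers: int = 4) -> int:
    """Split [1, n] into chunks, evaluate each chunk concurrently,
    then combine the partials (valid because addition is associative)."""
    step = max(1, (n + workers - 1) // workers)
    bounds = [(lo, min(lo + step, n + 1)) for lo in range(1, n + 1, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda b: partial_sum_of_squares(*b), bounds)
    return sum(parts)

print(parallel_sum_of_squares(100))  # 338350, i.e. 100*101*201/6
```

The trick is exactly what you can't do with an arbitrary sequential equation: you need an operation (here, +) whose partial results can be merged in any order.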
Real Life Application
http://labs.google.com/papers/mapreduce.html
Though the operations there are relayed across clusters, the concept remains the same when dealing with multiple cores.
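The core of the MapReduce idea from that paper fits in a few lines (an illustrative toy of mine, not Google's actual API): a map step emits key/value pairs, the pairs are grouped by key, and a reduce step folds each group. Every map call, and every per-key reduction, is independent, so each could run on its own core or machine.

```python
from collections import defaultdict

def map_step(document: str):
    """Map: emit a (word, 1) pair for every word in the document."""
    for word in document.split():
        yield (word.lower(), 1)

def word_count(documents):
    """Shuffle then reduce: group pairs by key, fold each group with +."""
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_step(doc):
            groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

print(word_count(["the cat", "the dog the cat"]))
# {'the': 3, 'cat': 2, 'dog': 1}
```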
For less theoretical and more practical reading, take a look at Microsoft's TPL extensions for .NET 4.0
http://blogs.msdn.com/pfxteam/
I've tested TPL and it looks quite promising: as long as you have a multi-core processor and you set the processor affinity so that processes run on multiple cores, you benefit from TPL.
What's more, you benefit even more from concurrency in programming when you're dealing with pure languages such as Haskell. You can achieve similar results in other programming models as long as you factor out anything that depends on the "state of the world" at the time of invocation (think IO).
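One way to picture that "state of the world" point (my own toy example, not tied to TPL or Haskell specifically): a function that reads mutable shared state can give different answers depending on when each core happens to run it, while a pure function that takes everything it needs as arguments is safe to schedule in any order.

```python
# Impure: the result depends on mutable state outside the function,
# so two cores calling it at different moments can see different worlds.
exchange_rate = 1.5

def price_in_usd_impure(price_eur: float) -> float:
    return price_eur * exchange_rate  # hidden dependency on a global

# Pure: every input is an explicit argument, so the call can run on
# any core at any time and always agree with itself.
def price_in_usd_pure(price_eur: float, rate: float) -> float:
    return price_eur * rate

print(price_in_usd_pure(100.0, 1.5))  # 150.0
```

Once everything a function depends on is in its argument list, a scheduler is free to farm the calls out to as many cores as it likes.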
Of course, not every task can, or more importantly *should*, be parallelized, but activities that consume time should be. If you'd like, I can go deeper than what was said, but I honestly do not have time right now.
Last edited by xterm (October 14 2009)
Nice, interesting topic.
Sure, we hit a 3-3.2 GHz wall for now, but lots of CPUs are being overclocked to 4.8 GHz; just check out the cooling they are using.
I think when microprocessors become smaller, we will break that 3.2 GHz wall with the same amount of heat, but of course everything has limits.
Check these out:
http://www.overclock.net/intel-motherbo … ng-i5.html
http://hothardware.com/Articles/Core-i7 … en/?page=4
And here's an attempt to break 5.0 GHz: http://www.tgdaily.com/content/view/32370/135/
Plus, I want to argue a bit about this speed limit they are talking about (this is just some thinking; I'm not a physicist).
All the equations and calculations are deduced from Einstein's theory of relativity, in which he tells us that we cannot go faster than the speed of light. But this is just a theory; I mean, what if it is wrong? In his theory, Einstein says that if something gets close to the speed of light, its mass becomes infinite and there is no power that can push it past that barrier (we would need infinite energy). On the other hand, I want to make a comparison between the speed of light and the speed of sound. We managed to break the speed of sound, even though at that speed air particles feel extremely hard and create huge pressure, and yet with enough power we did it. Maybe we just don't have that kind of power yet; maybe we can create such power, we just need the right materials (like antimatter). After all, many scientists question the limit and are trying to reach light speed.
And if we are talking about infinite energy being needed, what if infinite is real? I mean, no one has ever defined what infinite is, and infinity is in some way relative.
Don't worry, they will always come up with new things we didn't expect before :) But as far as multi-core goes, the computer does each operation on a separate core; I don't think they have reached split capability, and the idea of it sounds crazy :)
They have, a long time ago. Consult my previous post.
Last edited by xterm (October 14 2009)
I noticed :) But it's still not for end-users, I guess? That's why I never heard of it.
Correct. *Most* applications are tuned toward a multi-threaded approach.
Thanks for the extra info, though. But I guess complex programming will not be the gate to super-fast computing.
Too bad programmers won't have a job if we reach the speed limit :(
Trust me, we will never reach the speed limit ;)
Don't forget Quantum Computing. It's still in its early days, but it could lead to something promising.
Actually, Fujitsu made a single-core processor with more than 100 GHz.
Any links?
Nour, if Fujitsu did make such a processor, then all our problems would be solved and we would not be discussing this issue. How come all of us tech freaks did not hear about this thing?! I guess it's a hoax, or maybe it's not made yet, or they are still experimenting with it. We have never reached higher than 5 GHz computing power, and that was using liquid nitrogen for cooling!! How do you explain the heat dissipation in a 100 GHz model?
Hey Jad,
It seems your brother is still on summer vacation!
X, check the post about renewable energy; I replied to your post :) Yeah, it looks like he's dreaming or something.
Last edited by jadberro (October 20 2009)
Actually, Fujitsu made a single-core processor with more than 100 GHz.
There is some wrong info there. Yes, Fujitsu did show a lot of stuff in May at Tokyo's International Forum in Yurakucho, and one of those things was the claimed world's fastest CPU. But it is definitely not a single core: it has 8 cores, and they did not reveal any info about clock speed.
Take a look at this translated page; no one understands Japanese anyway.
Hey Jad,
It seems your brother is still on summer vacation!
http://www.crunchgear.com/2009/05/14/be … processor/
Yeah, I'm still on summer vacation! BUT SO IS FUJITSU!
Fujitsu yesterday took the wraps off a new CPU made for supercomputers that can perform 128 billion computations per second
128 BILLION COMPUTATIONS PER SECOND! which is equivalent to 128 GHz
BTW, the PS3 has seven cores and 3.2 GHz of processing power! Cool stuff! And it costs less than the quad extreme by itself!
Xsever wrote: Hey Jad, it seems your brother is still on summer vacation!
http://www.crunchgear.com/2009/05/14/be … processor/
Yeah, I'm still on summer vacation, BUT SO IS FUJITSU! Fujitsu yesterday took the wraps off a new CPU made for supercomputers that can perform 128 billion computations per second. 128 BILLION COMPUTATIONS PER SECOND, which is equivalent to 128 GHz!
This is the CPU I am talking about!
But dude, remember it is 8 real cores!!! So its aggregate throughput is equivalent to 128 GHz, but that is not its clock speed.
It's like how, if you have a Core 2 Duo @ 2.4 GHz, you have a 4.8 GHz equivalent speed.
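For what it's worth, here is the arithmetic behind that correction, assuming the same loose "aggregate equals cores times clock" reading used above (it ignores instructions-per-cycle, so it's only a back-of-the-envelope figure):

```python
def per_core_rate(total_ops_per_sec: float, cores: int) -> float:
    """Naive per-core rate under the 'aggregate = cores x clock' reading."""
    return total_ops_per_sec / cores

# 128 billion ops/s spread over 8 cores is 16 billion per core,
# i.e. roughly a 16 GHz equivalent per core under that reading.
print(per_core_rate(128e9, 8) / 1e9)  # 16.0
```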
@GN90 Yeah, it is ;) OK, then it's still about 16 GHz per processor core! THAT'S STILL REALLY IMPRESSIVE!! It's more than 3 times better than Intel's best attempt ;)
Yeah, I know, but look at the size of that thing; it's surely not a commercial product.