Xsever wrote: "No problem. You are welcome."
Don't forget that we hit a wall at the 3-3.2 GHz mark (stock speeds with acceptable heat dissipation). Since then, processors have gained more cores, but not more raw speed.
In addition, we still have a long way to go before we master multi-core programming. Think about it for a second: all the problem solving we do in life is sequential.
How would you split an equation so that it can be solved by all cores at the same time?
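One common answer (my sketch, not from the original post): if the operation is associative, like a big sum, you can split the input into independent chunks, solve each chunk on its own core, and combine the partial results at the end. A minimal Python illustration:

```python
# Sketch: splitting a sum of squares across workers. Associativity of +
# means the chunked partial sums combine into the same final answer.
# Note: CPython threads share one GIL, so for real CPU speedup you would
# use processes; threads are used here only to keep the sketch simple.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker solves its own piece of the "equation" independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(xs, workers=4):
    size = max(1, len(xs) // workers)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the independent partial results.
        return sum(pool.map(partial_sum, chunks))
```

The key point is that the decomposition only works because the combining step doesn't care about order; not every equation decomposes this way.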
That's where concurrent programming excels, though as long as you're still thinking in imperative or object-oriented terms, you'll never achieve the full capabilities of concurrency.
The full power of concurrency is achieved through functional paradigms.
Real Life Application
http://labs.google.com/papers/mapreduce.html
Though the operations are relayed across clusters, the concept remains the same when dealing with multiple cores.
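To make the paper's idea concrete, here is a hypothetical single-machine sketch of the MapReduce pattern (function names are mine, not from the paper): a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase folds each group.

```python
# Toy word count in the MapReduce style. In the real system the map and
# reduce calls run in parallel across machines; the data flow is the same.
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs: one ("word", 1) per occurrence.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Fold each group of values down to one result per key.
    return {key: sum(values) for key, values in groups.items()}

def word_count(documents):
    pairs = [p for doc in documents for p in map_phase(doc)]
    return reduce_phase(shuffle(pairs))
```

Because each map call and each reduce call is independent, the framework is free to schedule them on any core or machine.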
For less theoretical and more practical reading, take a look at Microsoft's TPL (Task Parallel Library) extensions for .NET 4.0:
http://blogs.msdn.com/pfxteam/
I've tested TPL and it looks quite promising: as long as you have a multi-core processor and you set the processor affinity so that tasks run on multiple cores, you benefit from TPL.
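TPL itself is C#/.NET, but the core pattern it offers, a data-parallel loop where each iteration is independent and the runtime decides how work maps onto cores, can be sketched in Python (my rough analogue, not TPL's API):

```python
# Rough analogue of a Parallel.For-style loop: each index is processed
# independently and the pool schedules iterations across workers.
from concurrent.futures import ThreadPoolExecutor

def process(i):
    # Hypothetical stand-in for real per-iteration work.
    return i * i

def parallel_for(n, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order even though execution order may vary.
        return list(pool.map(process, range(n)))
```

The design point mirrors TPL's: you declare that iterations are independent, and the library, not your code, owns the thread management.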
Moreover, you benefit even more from concurrency when you're dealing with pure languages such as Haskell. You can achieve similar results in other programming models as long as you factor out anything that depends on the "state of the world" at the time of invocation (think IO).
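To illustrate the purity point in any language (a sketch of mine, not tied to Haskell): a pure function's result depends only on its arguments, so calls can run in any order, on any core, with no coordination, while a function that touches shared state cannot be parallelized naively.

```python
counter = 0  # shared "state of the world"

def impure_next():
    # Reads and mutates shared state: two cores calling this at once
    # would race, so it can't be safely parallelized as-is.
    global counter
    counter += 1
    return counter

def pure_scale(x, factor):
    # No hidden inputs, no side effects: trivially parallelizable.
    return x * factor

# Order of evaluation is irrelevant for the pure calls.
results = [pure_scale(x, 3) for x in range(4)]
```

Factoring code into the `pure_scale` shape is exactly the "remove the state of the world" step mentioned above.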
Of course, not every task can, or more importantly *should*, be parallelized, but time-consuming activities should be. If you'd like, I can go deeper than what was said, but I honestly do not have time right now.