I have been thinking about this a lot. A few ideas relevant to the discussion at hand:
1- Have we reached the limit?
Chip manufacturers seem to be defeated. No tangible improvement has been made in clock frequency in the past 10 years or so. It's like they hit a wall and cannot push further.
In order to make up for it, they started selling multi-core processors. I'm not a gamer, I don't use any rendering tool, and, last time I checked, over 90% of users are like me. I have been monitoring my CPU usage for months now and here's the crazy thing I realized:
My machine almost never has more than one core busy at a time.
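For the curious, here is roughly the kind of check I mean. This is only a sketch in Python using the psutil library (one of many ways to watch per-core load), not the exact script I run:

```python
# Rough sketch: sample per-core CPU usage and count how often more than
# one core is actually busy. Requires the third-party psutil package.
import psutil

SAMPLES = 60          # one-second samples; a minute of observation
BUSY_THRESHOLD = 25.0 # % utilization we count as "busy" (arbitrary)

multi_core_samples = 0
for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for pct in per_core if pct > BUSY_THRESHOLD)
    if busy > 1:
        multi_core_samples += 1

print(f"{psutil.cpu_count()} cores, "
      f"{multi_core_samples}/{SAMPLES} samples used more than one core")
```

The threshold is arbitrary; the point is just to count how often more than one core is doing real work.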
As I mentioned in an earlier post, modern software is to blame. I disagree with rolf when he says sequential programming is "yesterday". I think sequential programming is still (unfortunately) "today".
The bottom line is that I don't have the software to optimize the use of this super hardware.
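To make that gap concrete, here is a toy sketch (Python again, with a made-up work() function standing in for real computation): the sequential version is what most of my software still looks like, and the parallel version is what it would have to look like before extra cores buy me anything.

```python
# Toy illustration: the same CPU-bound work done the "yesterday" way
# (one core at a time) versus the way a many-core chip actually needs.
from multiprocessing import Pool

def work(n):
    # Stand-in for a real CPU-bound task.
    return sum(i * i for i in range(n))

jobs = [2_000_000] * 16

def sequential():
    # What most desktop software still does: one job after another, one core.
    return [work(n) for n in jobs]

def parallel():
    # What it takes to use the other cores: explicit parallel structure.
    with Pool() as pool:
        return pool.map(work, jobs)

if __name__ == "__main__":
    assert sequential() == parallel()
```

Nothing exotic, but somebody has to write the second version, and for most of the software I run, nobody has.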
Today, a theoretical 160-core CPU would not give me any tangible speed upgrade. It would, on the other hand (as Tarek mentioned), introduce a whole new set of annoying problems: heat is obviously one of them, and monitoring is another (though Microsoft seems to be working on that).
2- Unlimited cores?
On the other hand, I think rolf is absolutely right to say that parallelism is the future. However, is talking about a 160-core processor really going far enough? My main argument against this invention is yet another marketing term:
The Cloud.
More than ever, the industry is heading (back) toward the era of thin, dumb clients and remote server-side processing. The PC as we know it is dying, and therefore the need for super-powerful microchips won't be dominant for long. The solution will come from grid computing. How many cores do you need on a single chip when you can use a virtually infinite number of machines to do the job? The article I linked to is a great intro and should give a clear idea of why I think multiple cores on a single chip is a somewhat limited idea.
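Purely as an illustration of the shape of the idea (the worker hosts and their /compute endpoint below are invented, not a real API), this is what "more machines instead of more cores" looks like from the client side:

```python
# Hypothetical sketch: instead of 160 cores on one chip, fan the work
# out to many machines. The worker hosts and their /compute endpoint
# are invented for illustration only.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

WORKERS = [f"http://worker{i}.example.net/compute" for i in range(16)]

def submit(url, payload):
    # POST one job to one remote worker and return its answer.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

jobs = [{"n": 2_000_000} for _ in WORKERS]

if __name__ == "__main__":
    # The client stays thin; the heavy lifting happens server-side.
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        results = list(pool.map(submit, WORKERS, jobs))
```

From the client's point of view, whether each worker has 2 cores or 160 hardly matters.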
3- Limited display?
My argument for the 160-core CPU goes beyond simple parallelism. Despite all that's said about dumb clients and server-side grid computing, the fact remains that today we are limited to 2D graphics with a lot of text. The way we interact with computers is going to get far more graphics-intensive with holographic technologies and augmented reality. If, as I said in point 1, manufacturers cannot push CPU clocks beyond their current limits, then maybe massive parallelism on embedded systems and gadgets should become the norm.
My question
In light of the current discussion, do you think it is the hardware that drags the software forward? Or do you think hardware merely follows the software that optimizes its use? (Note to self: I realize I'm a software guy asking this in the hardware section. They're gonna eat me alive :-/ )