Why have CPUs been limited to 3.5GHz for so long?

A while ago, I complained that no real advance in CPU clock frequency has been made in ten years. Adding more cores (virtual ones, no less!) will not drastically improve a machine's computing performance. It only enables parallel computing, which, if used correctly, can improve overall performance, but it won't change the order of magnitude of computing speed.
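To put rough numbers on that, here is a minimal Python sketch of Amdahl's law; the 90% parallel fraction is just an illustrative value, not a measurement:

    # Amdahl's law: the speedup from n cores when a fraction p of the
    # work can run in parallel. The serial part caps the total speedup.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even a well-parallelized program (p = 0.9) tops out below 10x,
    # no matter how many cores you throw at it.
    for cores in (2, 4, 8, 64):
        print(cores, "cores:", round(amdahl_speedup(0.9, cores), 2), "x")

With 10% of the work stuck in serial code, even 64 cores give under 9x, which is why extra cores don't change the order of magnitude the way a clock increase would.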

It comes as little surprise that the GPU is showing steadier progress, especially in the fields of scientific computing and cryptography.

I came across this page, which explains why CPU clock speeds have been limited for so long. I hope you'll find it as interesting as I did.

Finally, a question: are we close to seeing some real-world applications of memristors as computer components? Bonus points if your answer comes with a link to a reliable source of info.
Heating.

The challenge isn't finding a technology to replace the traditional concept of the transistor; the challenge is making such a technology mainstream and available at low cost.
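For the curious, the usual back-of-the-envelope reason heat wins: a chip's dynamic power scales roughly as P ≈ C·V²·f, and raising the frequency usually means raising the voltage too. A quick Python sketch with made-up but plausible numbers:

    # Dynamic power of a CPU scales roughly as P ~ C * V^2 * f.
    # Pushing the clock usually requires more voltage, so heat grows
    # much faster than frequency does.
    def dynamic_power(c, v, f):
        return c * v * v * f

    base = dynamic_power(1.0, 1.2, 3.5)  # 3.5 GHz at 1.2 V (illustrative)
    oc = dynamic_power(1.0, 1.4, 5.0)    # 5.0 GHz at 1.4 V (illustrative)
    print(round(oc / base, 2))           # ~1.94x the heat for ~1.43x the clock

That quadratic voltage term is a big part of why clocks stalled around 3.5-4GHz while core counts kept climbing.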

moderator edit: Please avoid double posts and/or simple one-liners. Thanks. You can use the 'Edit' button to modify earlier posts.
I didn't realize there was a theoretical limit. I thought the main reason they stopped increasing speeds was the heat issue, and that adding more cores to a CPU made it more efficient. I guess I was mistaken on both counts. I suppose octo-cores will be next on the list.

As for your second point, according to this article (Optical fiber microresonators show promise as optical memory), there may be some specialized applications using this in the next few years. The next stop is the optical computer, followed by the holy grail of computing: the quantum computer.
It's the way we designed PCs and their accompanying instruction sets. CPUs are getting smaller and we can fit more transistors on them, thus increasing the number of cores. We have indeed reached a plateau in IPC (instructions per cycle): we keep adding new instructions to speed things up, but they only speed things up slightly.
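To make the IPC point concrete: peak throughput is roughly cores × clock × IPC, so once IPC stops improving, cores and clock are the only levers left. A tiny Python sketch with hypothetical chips:

    # Rough peak throughput: cores * clock (GHz) * instructions per cycle,
    # giving billions of instructions per second.
    def peak_gips(cores, ghz, ipc):
        return cores * ghz * ipc

    old = peak_gips(1, 3.5, 2.0)  # hypothetical single-core part
    new = peak_gips(4, 3.5, 2.2)  # quad-core with slightly better IPC
    print(round(new / old, 2))    # ~4.4x on paper, but only for parallel code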

We've reached the end of Moore's law. More transistors can help, but they won't double performance; they won't even give 20% more, that's for sure. Only time will tell. We have to wait for a breakthrough in CPUs, and I'm hoping it happens soon, though I doubt it will be. I think NVIDIA will push things forward, as they're really able to do so many things with their GPUs, if other companies follow them and leave Intel for a second :P. But the monopoly game is being played, and people feel comfortable with what they know and have. GPGPU is the next big thing, and it won't take long for it to flex its muscles, 2013 or so. Intel is decreasing its TDP and power-consumption limits to compete with ARM microarchitectures, while NVIDIA will use ARM to compete on the x86 stage.
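As a taste of why GPGPU matters, here's a CPU-side NumPy sketch of the embarrassingly parallel, same-operation-on-every-element workload that GPUs chew through; on a GPU, each element could map to its own thread:

    import numpy as np

    # A data-parallel workload: one independent multiply-add per element.
    # This is exactly the shape of problem GPGPU accelerates well.
    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    c = a * b + 1.0  # every element computed independently of the others
    print(c[:3])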

Intel has dominated computation and is now thinking small (handheld devices/tablets/GPS, etc.), while ARM has dominated the latter and is looking at that big piece of the cake (could be a lie).

P.S. I don't know if I'm making sense or whether I've strayed from the topic, but I'm too tired from work and sleep-deprived, so take it easy on me :).

Edit: Yeah, also, to answer your question about 3.5GHz: both companies (Intel and AMD) now use turbo technologies. They give you 3.5GHz stock and 3.8-3.9GHz in turbo, and people will be like ZOMG ~4GHz. To an overclocker it's worthless, but to regular users it's a marketing slam dunk.
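For perspective, the turbo bump is easy to quantify:

    # "ZOMG ~4GHz": the turbo gain over stock, in percent.
    stock, turbo = 3.5, 3.9
    print(round((turbo / stock - 1) * 100))  # ~11% -- nice, but not a new era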
rahmu wrote: Finally, a question: are we close to seeing some real-world applications of memristors as computer components? Bonus points if your answer comes with a link to a reliable source of info.
Apparently the HP Labs team is working on memristor technology and plans to release a new type of memory in 2013.
article: Memristor Memory Readied for Production