I wrote this a while ago on Guru3D forums:
March 2012. Let's suppose Nvidia's yields are high enough to release both GK104 and GK110. They release GK104 because it's enough to compete with AMD, and at a lower price: the 7970 costs $550, the 680 costs $500. "680 beats 7970, uses less power, costs less." Nvidia have a powerful chip gathering dust. They don't release it because, supposedly, why would they sell a card that only a few people would buy at such a premium price? It would be a loss? What about people who run multi-monitor setups and can afford it? Weird.
Fast forward to February 2013. The 7970 has gotten a GHz Edition and such driver improvements that it's practically a refresh of the card released about a year earlier. The 7970 is $400, the 680 is $500. Nvidia release Titan at $1000.
How is 2013 any different from 2012? They have a more expensive card, the 680, competing with the 7970, and a $1000 card for premium buyers.
Why not release a hideously expensive Titan at $2000 back in 2012, sell their 680s like hotcakes, and offer the halo card to those who could afford it, all while maintaining their brand image?
If GK110 had been ready back then, they'd have been selling it, plain and simple.
As for the claim that they held GK110 back to avoid competing with themselves: any company should know how to segment its lineup to avoid exactly that. It's an invalid assumption.
Also, when have Nvidia and AMD ever been that far apart? They've been competing for more than a decade now. GF110 to GK110 would have been more than a one-generation jump; that's more than double the performance, and the last time that almost happened was the transition from the 7900 GTX to the 8800 GTX, which introduced a whole new architecture and marked the start of the DX10 era.
Nvidia have had the fastest single GPU of each generation since then, up through the 5xx series, but this time things have been different.
Would Nvidia have been content with their GTX 680 being slower AND more expensive than the Radeon HD 7970 GHz Edition, a point many buyers had noticed before choosing AMD this time (myself included), while a powerful GK110-based card sat gathering dust?
I'll add to this now: why would a 512-bit memory interface be the real deal? Look at the compute cards: which of them is currently 512-bit and based on GK110? None. Titan is merely an adaptation of the much more expensive Tesla K20X, with the same 384-bit bus.
The same logic applies to waiting for a "GTX 980" with a 2048-bit memory interface and XDR2 memory.
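To put rough numbers on why bus width by itself isn't the headline spec, here's a back-of-the-envelope calculation I'm adding, using the standard GDDR5 bandwidth formula and Titan's published 384-bit, 6 Gbps memory specs:

\[
\text{bandwidth} = \frac{\text{bus width (bits)}}{8} \times \text{data rate (GT/s)}
\quad\Rightarrow\quad
\frac{384}{8} \times 6 \approx 288~\text{GB/s}
\]

A hypothetical 512-bit bus at the same 6 Gbps would reach 384 GB/s, but the same gain could just as well come from faster memory on the existing 384-bit bus; the width alone tells you very little.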