AvoK95 wrote:
> yasamoka wrote:
>> AvoK95 wrote:
>>> @yasamoka
>>> The card doesn't prove the bottleneck; it depends on how big an image the card is rendering.
>> What do you mean?
> The speed determines how fast the card can render an image. The card will not bottleneck if the image is being rendered on a small 24" screen, but render it on a 32" or 40" screen and you will notice the difference between having the card run at x8 rather than x16.
What??!
Screen size never mattered. Resolution does. Articles show that unless you're running 5760 x 1200, it will not bottleneck, and even then the bottleneck is small.
Even if I projected each pixel, LEGO-sized, onto a football stadium: if it doesn't bottleneck on a 10" 1920 x 1080 monitor, it will not bottleneck on a 500" monitor (unless the graphics card fills the pixels with ink and has to rely on the PCI-E bus to feed it the colors :D).
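To put rough numbers on that, here's a quick Python sketch. The per-lane rate is the nominal PCI-E 2.0 spec (500 MB/s per lane per direction), and the "push every frame over the bus at 60 FPS" scenario is a deliberately pessimistic assumption of mine, since the card normally renders into its own VRAM anyway:

```python
# Back-of-envelope: PCI-E load scales with resolution, never with panel size.
# Per-lane figure is the nominal PCI-E 2.0 rate (500 MB/s per lane, per direction).

PCIE2_PER_LANE_GBS = 0.5

def framebuffer_mb(width, height, bytes_per_pixel=4):
    """One 32-bit frame at the given resolution, in MB."""
    return width * height * bytes_per_pixel / 1024**2

panels = [
    ('24" 1920x1080', 1920, 1080),
    ('40" 1920x1080', 1920, 1080),   # bigger panel, same resolution, same data
    ('Eyefinity 5760x1200', 5760, 1200),
]
for name, w, h in panels:
    fb = framebuffer_mb(w, h)
    print(f"{name}: {fb:.1f} MB/frame -> {fb * 60 / 1024:.2f} GB/s at 60 FPS")

print(f"PCI-E 2.0 x8 : {8 * PCIE2_PER_LANE_GBS:.1f} GB/s")
print(f"PCI-E 2.0 x16: {16 * PCIE2_PER_LANE_GBS:.1f} GB/s")
```

Even in that worst case, 5760 x 1200 at 60 FPS comes to about 1.5 GB/s, well under the 4 GB/s of x8, and the 24" and 40" panels at the same resolution produce exactly the same numbers.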
The only things that get more noticeable as screen size grows are tearing, lag, and stuttering (of different types). Smoothness of gameplay depends directly on how FPS fluctuates, and theoretically, probably practically, and as shown by reviews, those fluctuations won't change as long as the minimum FPS (the critical point) stays almost the same, the average is the same, and the maximum is the same, with PCI-E bandwidth the only variable in the benchmark and everything else held constant. So no, same experience, unless you're squeezing out every last FPS.
The thing is, with each and every card I've seen, PCI-E x16 performs only marginally (<5%) better than PCI-E x8 in most cases. It's implausible that every single card is oversaturating x8 by a few percent every time; more likely, the extra bandwidth lets short bursts of data arrive faster, which cuts latency. It's probably the same thing you see when comparing native x16/x16 to x16/x16 through an nForce 200 chip: a MARGINAL difference (i.e. not worth consideration).
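Here's the kind of comparison I mean, as a quick sketch. The frame-time logs below are numbers I made up purely to illustrate, not real benchmark data:

```python
# Illustrative only: the frame-time logs (ms) are invented, not measured.
# The point: if min/avg/max FPS barely move between x16 and x8, the experience is the same.

def fps_stats(frame_times_ms):
    fps = [1000.0 / t for t in frame_times_ms]
    return min(fps), sum(fps) / len(fps), max(fps)

x16_log = [12.5, 13.1, 14.0, 12.8, 16.2, 13.3]  # hypothetical x16 run
x8_log  = [12.9, 13.4, 14.3, 13.1, 16.5, 13.7]  # hypothetical x8 run

for label, log in (("x16", x16_log), ("x8 ", x8_log)):
    lo, avg, hi = fps_stats(log)
    print(f"{label}: min {lo:.0f} / avg {avg:.0f} / max {hi:.0f} FPS")

delta = 100 * (fps_stats(x16_log)[1] - fps_stats(x8_log)[1]) / fps_stats(x8_log)[1]
print(f"x16 over x8: +{delta:.1f}% average -- marginal")
```

With numbers like these the gap works out to roughly 2-3% on average, exactly the sort of delta reviews keep finding.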
And to clear up a misconception that may arise: dual-GPU cards do not saturate the PCI-E lanes any more than their single-GPU counterparts do. A dual-GPU card is essentially two GPUs/PCBs with a PLX chip between them that duplicates the data received from the PCI-E bus and sends it to both GPUs.
And since the two GPUs run in tandem, rendering alternate frames or split frames, each one stores and uses exactly the same data in its video memory. Hence the usable video memory does NOT double.
So all those 4GB 6990s, 3GB 590s, and the coming-soon (hopefully) 6GB 7990s offer nothing more than 2GB, 1.5GB, and 3GB of USABLE video memory, respectively. If a game uses more than 1.5GB of VRAM, the GTX 590 will be memory-limited. So beware.
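The rule of thumb boils down to one line (the function name is just mine, for illustration):

```python
# Each GPU in AFR/SFR mirrors the same working set, so usable VRAM = total / GPU count.

def usable_vram_gb(total_gb, num_gpus=2):
    return total_gb / num_gpus

for card, total in (("HD 6990, 4GB", 4.0), ("GTX 590, 3GB", 3.0), ("HD 7990, 6GB", 6.0)):
    print(f"{card}: {usable_vram_gb(total):.1f} GB usable")
```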
And honestly, this is why I respect AMD's new strategy of fitting the 7970 with 3GB of VRAM. Multi-monitor, CrossFire, or BOTH can finally flex their muscles.
@Shant: if you can afford to blow $1000 on GPUs and can guarantee a minimum above 60 FPS to avoid any possible microstuttering, would you buy a single 580 or 2 x 570s? (Or the AMD equivalents once the 7950 gets released.)