First of all, I have 2 x 7970s. Khaled and I both made the decision to go CrossFire together. Our plan is to maintain 60FPS in any game that comes out, at maximum graphics detail, @1080p.
Here's some stuff you need to know. It might induce a headache, but it should help you with your decision.
Nvidia's 670/680 Kepler GPUs have hardware frame metering. This spaces frames out evenly in multi-GPU setups, so there is effectively no microstutter when running two of these cards in SLI, whether above or below 60FPS, or above or below your refresh rate @120Hz.
AMD cards do not. Nor do they seem to have anything in their drivers to reduce / eliminate microstutter. Nvidia - AMD, 1 - 0 in that regard.
Check this Wikipedia article:
http://en.wikipedia.org/wiki/Micro_stuttering
"As of May 2012, with the latest release of hardware and drivers from nVidia and AMD, AMD's Radeon HD 7000 series is severely more affected by micro stuttering than nVidia's GeForce 600 Series. In tests performed in Battlefield 3, a configuration with two GeForce GTX 680 in SLi-mode showed a 7% variation in frame delays, compared to 5% for a single GTX 680, indicating virtually no micro stuttering at all. A configuration with two Radeon HD 7970 in CrossFireX-mode, on the other hand, showed an 85% variation in frame delays, compared to 7% for a single card, indicating large amounts of micro stuttering. These results are reflected in the perceptual experience when looking at the outputted video.[6]"
I've seen this with my own eyes. With 2 7970s overclocked to 1125MHz, running Heaven Benchmark 3.0, maxed out, with 8xAA and Tessellation at Extreme, the framerate runs slightly above 60 at the start of the benchmark, soars through most of it, and dips into the 50s in (very) few heavy spots. Yet it feels like 30s-40s because of the stutter. Microstutter works like this: if you don't notice it, you're living in bliss. If you do, you can't "unsee" it. However, after long exposure to microstutter, shifting back to a single, more powerful (future) card would show you what smoothness you've been missing all along.
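To get a feel for what that "variation in frame delays" number from the Wikipedia quote measures, here's a toy calculation. The exact formula those tests used isn't specified, so this is my own sketch of one plausible metric, with made-up frame times:

```python
def frame_delay_variation(frame_times_ms):
    """Mean absolute change between consecutive frame times,
    as a percentage of the average frame time."""
    diffs = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_time

# Smooth single-card output: ~16.7 ms per frame (60FPS), tiny jitter
smooth = [16.7, 16.6, 16.8, 16.7, 16.6, 16.8]
# AFR microstutter: short/long frame pairs with the SAME average FPS
stutter = [10.0, 23.4, 10.0, 23.4, 10.0, 23.4]

print(round(frame_delay_variation(smooth), 1))   # 0.8  -> looks smooth
print(round(frame_delay_variation(stutter), 1))  # 80.2 -> "60FPS" that feels much worse
```

Both lists average 16.7 ms per frame, i.e. a counter would report 60FPS for both, which is exactly why microstutter hides behind a healthy-looking FPS number.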
Drivers have also improved for both AMD and Nvidia, especially AMD: the 12.11 Betas reclaimed the single-fastest-GPU title for AMD. As for CrossFire vs. SLI, CrossFire generally scales better, in almost all cases I've seen, and the performance gap widens with it.
This is an example of a 7950CF setup soundly beating a GTX680SLI setup in almost all cases tested, @5760 x 1080 (triple monitor):
http://forums.overclockers.co.uk/showthread.php?t=18455827
Now granted, we aren't running triple monitors here, but the idea is this: current games that show the 79xx cards' muscle at higher resolutions resemble future games running at lower resolutions. Higher resolution puts more stress on the shaders, memory bandwidth, memory size, etc., and future games will stress those same components.
In certain games, like DiRT Showdown and Sniper Elite V2, the difference between the 7970 and the 680 is huge:
http://www.techpowerup.com/reviews/AMD/Catalyst_12.11_Performance/5.html
I've broken down the difference from that review here (around the middle of the page):
http://forums.guru3d.com/showthread.php?t=374341&page=3
Now, back to that Wikipedia article. One last statement in the article is this:
"The software program RadeonPro can be used to significantly reduce or eliminate the effects of micro-stuttering when using AMD graphics cards in CrossFire.[7][8]"
This review should show what this statement means:
http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-8.html
For some history, check out this older article, though it doesn't bear much on the present:
http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,2995.html
Now this is the deal:
1) If you're running without Vsync, or without Dynamic Frame Rate Control (or whatever the reviewers use) in RadeonPro, you'll get microstutter. So forget about running the regular way, without Vsync or RadeonPro's options.
2) One problem with Vsync on a single card: if your framerate dips below your monitor's refresh rate (let's assume 60Hz), to 59 for example, you drop straight to 30, then down to 20, etc. (divisors of the refresh rate). This is because Vsync (vertical synchronization) outputs frames from the graphics card only when the monitor is ready to refresh. If the card can't render a frame in time for the next refresh, Vsync forces it to wait for the refresh after that, and that's why your FPS halves when dipping from 60FPS. Vsync's advantages are, clearly, tear-free, smooth gaming.
3) Triple buffering is a feature that lets the card, when a frame isn't finished in time, keep rendering into a spare buffer instead of stalling, keeping the framerate up. With triple buffering, when you dip to 59, you don't drop to 30. You stay at 59 (sliiiightly less, if anything, due to certain overheads).
4) Two cards get triple buffering by default, as there is one front buffer and two back buffers (one per card). This means that enabling Vsync with two cards does not make your FPS drop sharply when you dip below your refresh rate.
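Points 2) to 4) boil down to simple arithmetic. Here's a toy model of both buffering schemes (my own simplification; real driver behavior is messier than this):

```python
def vsync_fps_double_buffered(render_fps, refresh_hz=60):
    """With Vsync and plain double buffering, the card stalls until the
    next refresh, so FPS snaps down to the biggest divisor of the
    refresh rate it can still keep up with: 60, 30, 20, 15, ..."""
    n = 1
    while refresh_hz / n > render_fps:
        n += 1
    return refresh_hz / n

def vsync_fps_triple_buffered(render_fps, refresh_hz=60):
    """With a spare back buffer the card keeps rendering instead of
    stalling, so FPS is only clamped at the refresh rate."""
    return min(render_fps, refresh_hz)

print(vsync_fps_double_buffered(59))  # 30.0 -- one missed refresh halves the rate
print(vsync_fps_triple_buffered(59))  # 59   -- you keep (nearly) all your frames
```

So a card rendering at 59FPS loses almost half its output under double-buffered Vsync, but essentially nothing under triple buffering.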
5) Vsync has been shown (tested) to eliminate microstuttering, but the results have only been validated with the framerate ABOVE 60FPS. I haven't tested much yet, but Vsync seems to reduce / eliminate microstuttering below 60 as well; it's just hard to test because of how variable the framerate is below 60FPS.
6) It's also been shown (tested) that framerate caps eliminate microstutter, as does the RadeonPro feature mentioned above in that review. A cap at, say, 60 eliminates microstutter, but in doing so induces slight occasional tearing at the top or bottom of the frame. That's because the cap's timing isn't as accurate as Vsync itself, since the refresh rate is not exactly 60 (it's CLOSE to 60).
7) You might be asking: why use a framerate cap if Vsync already does the job? Well, Vsync introduces input lag. With Vsync on and no framerate cap, you get quite high (relative to me) input lag from Vsync alone. With a 60FPS cap you get MUCH less input lag with Vsync on, and with a 59FPS cap you get almost no input lag at all. So far, this method, a 59FPS cap + Vsync on, is my favorite, and I have not yet tested whether dipping below 60 with these on introduces any microstutter.
8) I have not yet tested RadeonPro's features. If they do indeed provide proper frame timing (like a framerate cap plus Vsync) to eliminate microstutter, with timing accurate enough not to introduce tearing, while magically avoiding Vsync's input lag issues, it would be golden. Dynamic FRC / Vsync also sounds very nice: it seems to cap at 60 / Vsync while you're running at 60, and if you dip below 60, it re-caps at a lower framerate to AGAIN eliminate microstutter. GENIUS!
9) When GPU usage is below 99%, microstutter seems to be gone COMPLETELY! So say you have a game that didn't scale well with multi-GPU and dipped below 60, to 45 for example: you don't need to worry about microstutter, as it's not going to be there. In this case, to ensure a constant 60FPS with a dual-GPU setup, make sure a single card gets 30FPS at 99% GPU usage (or less usage, of course!). That means two cards will get either 60FPS @99% usage, or <60FPS at <99% usage, which means no microstutter below 60. You ONLY need to worry if one card can't maintain 30FPS at 99% usage, and even then, only if it's confirmed that Vsync / DFC with the framerate dipping below 60FPS causes microstutter.
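The arithmetic in 9) can be boiled down to a quick rule of thumb. This assumes idealized 2x AFR scaling, which is my own simplification, not a measured number:

```python
def dual_gpu_worry(single_fps_full_load, target_fps=60):
    """If one card manages target/2 FPS at 99% usage, two cards either
    hold the target or run below 99% usage -- and below full usage the
    microstutter disappears. Returns True only in the risky case."""
    dual_fps = 2 * single_fps_full_load  # idealized AFR scaling
    return dual_fps < target_fps

print(dual_gpu_worry(35))  # False: 70FPS of headroom, cap at 60, no worry
print(dual_gpu_worry(25))  # True: only 50FPS at full load, the one risky case
```

In other words, the only configurations to worry about are the ones where even perfect scaling can't reach the target.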
To recap now: multi-card introduces microstutter. FPS caps eliminate it but introduce tearing. Vsync eliminates it but introduces input lag. An FPS cap reduces (almost to the point of eliminating) the input lag caused by Vsync. HOWEVER, it is untested whether dipping below 60 without RadeonPro, with Vsync on and an FPS cap set, introduces microstutter. Also, FRC has not yet been tested, but I'm hopeful it's a silver (not golden) solution: eliminating microstutter while still providing a good enough FPS (which you set according to the game you play). The disadvantage is that you'd never get an FPS between your refresh rate and that cap. Either refresh-rate FPS, or the capped FPS.
Maintaining 30FPS on a single card means you'd get 60FPS on dual-card without microstutter, OR <60FPS on dual-card, also WITHOUT microstutter.
I'll hopefully be testing RadeonPro on Crossfire 7950s (Khaled's) or Crossfire 7970s in the coming few days.
I'm also hoping AMD comes up with a solution, as the hardware frame metering on Nvidia's side seems to do the job well. It appears to time each frame accurately so that no microstutter is present. But what does it cost? Is SLI's worse scaling, compared to CrossFire, the trade-off for providing a smooth experience with no microstutter?
Of course, it also makes sense to run at 120Hz for games that run above 60FPS on a single card.
If there's anything you didn't understand in what I said, I'll be glad to recap.
Peace!