CPUs, GPUs, Motherboards, and Memory

For an article like this, getting a range of CPUs that includes the most common and popular models is very important. I have been at AnandTech for just over two years now, and in that time we have had Sandy Bridge, Llano, Bulldozer, Sandy Bridge-E, Ivy Bridge, Trinity and Vishera; for each generation I am typically supplied with the top-end processors for testing. (As a motherboard reviewer, it is important to make the motherboard the limiting factor.) A lot of users have jumped to one of these platforms, although a large number are still on Wolfdale (Core2), Nehalem, Westmere, Phenom II (Thuban/Zosma/Deneb) or Athlon II.

I have attempted to pool my AnandTech resources, contacts, and personal resources to get a good spread of the current ecosystem, with more focus on the modern end of the spectrum. It is worth noting that a multi-GPU user is more likely to have a top-line Ivy Bridge, Vishera or Sandy Bridge-E CPU, as well as a top-range motherboard, rather than an old Wolfdale. Nevertheless, we will see how they all perform. There are a few obvious CPU omissions that I could not obtain for this first review, which will hopefully be remedied over time in our next update.

The CPUs

My criteria for obtaining CPUs were to cover at least one from each of the most recent architectures, as well as a range of cores/modules/threads/speeds. The basic list as it stands is:

AMD

| Name | Platform / Architecture | Socket | Cores / Modules (Threads) | Speed (MHz) | Turbo (MHz) | L2 / L3 Cache |
|------|-------------------------|--------|---------------------------|-------------|-------------|---------------|
| A6-3650 | Llano | FM1 | 4 (4) | 2600 | N/A | 4 MB / None |
| A8-3850 | Llano | FM1 | 4 (4) | 2900 | N/A | 4 MB / None |
| A8-5600K | Trinity | FM2 | 2 (4) | 3600 | 3900 | 4 MB / None |
| A10-5800K | Trinity | FM2 | 2 (4) | 3800 | 4200 | 4 MB / None |
| Phenom II X2-555 BE | Callisto (K10) | AM3 | 2 (2) | 3200 | N/A | 1 MB / 6 MB |
| Phenom II X4-960T | Zosma (K10) | AM3 | 4 (4) | 3200 | N/A | 2 MB / 6 MB |
| Phenom II X6-1100T | Thuban (K10) | AM3 | 6 (6) | 3300 | 3700 | 3 MB / 6 MB |
| FX-8150 | Bulldozer | AM3+ | 4 (8) | 3600 | 4200 | 8 MB / 8 MB |
| FX-8350 | Piledriver | AM3+ | 4 (8) | 4000 | 4200 | 8 MB / 8 MB |

Intel

| Name | Architecture | Socket | Cores (Threads) | Speed (MHz) | Turbo (MHz) | L2 / L3 Cache |
|------|--------------|--------|-----------------|-------------|-------------|---------------|
| E6400 | Conroe | 775 | 2 (2) | 2133 | N/A | 2 MB / None |
| E6700 | Conroe | 775 | 2 (2) | 2667 | N/A | 4 MB / None |
| Celeron G465 | Sandy Bridge | 1155 | 1 (2) | 1900 | N/A | 0.25 MB / 1.5 MB |
| Core i5-2500K | Sandy Bridge | 1155 | 4 (4) | 3300 | 3700 | 1 MB / 6 MB |
| Core i7-2600K | Sandy Bridge | 1155 | 4 (8) | 3400 | 3800 | 1 MB / 8 MB |
| Core i3-3225 | Ivy Bridge | 1155 | 2 (4) | 3300 | N/A | 0.5 MB / 3 MB |
| Core i7-3770K | Ivy Bridge | 1155 | 4 (8) | 3500 | 3900 | 1 MB / 8 MB |
| Core i7-3930K | Sandy Bridge-E | 2011 | 6 (12) | 3200 | 3800 | 1.5 MB / 12 MB |
| Core i7-3960X | Sandy Bridge-E | 2011 | 6 (12) | 3300 | 3900 | 1.5 MB / 15 MB |
| Xeon X5690 | Westmere | 1366 | 6 (12) | 3467 | 3733 | 1.5 MB / 12 MB |

A small selection

The omissions are clear to see, such as the i5-3570K, a dual-core Llano/Trinity, a dual or tri-module Bulldozer/Piledriver, the i7-920, the i7-3820, or anything Nehalem. These will hopefully be coming up in another review.

The GPUs

My first and foremost thanks go to both ASUS and ECS for supplying me with these GPUs for my test beds. They have been in and out of 60+ motherboards without any issue, and will hopefully continue to do so. My usual scenario for updating GPUs is to flip between AMD and NVIDIA every couple of generations – last time it was HD 5850 to HD 7970, and as such in the future we will move to a 7-series NVIDIA card or a set of Titans (which might outlive a generation or two).

ASUS HD 7970 (HD7970-3GD5)

The ASUS HD 7970 is the reference model from the 7970 launch, using the GCN architecture with 2048 SPs at 925 MHz and 3 GB of 5.5 GHz GDDR5 memory. We have four cards, to be used in 1x, 2x, 3x and 4x configurations where possible, also using PCIe 3.0 when it is enabled by default.

ECS GTX 580 (NGTX580-1536PI-F)

ECS is both a motherboard manufacturer and an NVIDIA card manufacturer, and while most of their VGA models are sold outside of the US, some do make it onto e-tailers like Newegg. This GTX 580 is also a reference model, with 512 CUDA cores at 772 MHz and 1.5 GB of 4 GHz GDDR5 memory. We have two cards, to be used in 1x and 2x configurations at PCIe 2.0.

The Motherboards

The CPU is not always the main part of the picture for this sort of review – the motherboard is equally important, as it dictates how the CPU and the GPUs communicate with each other and what the lane allocation will be. As mentioned on the previous page, there are 20+ PCIe configurations for Z77 alone when you consider that some boards are native, some use a PLX 8747 chip, others use two PLX 8747 chips, and about half of the Z77 motherboards on the market enable four PCIe 2.0 lanes from the chipset for CrossFireX use (at high latency).
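
As a rough sense of scale for these lane allocations, per-direction PCIe bandwidth can be estimated from the transfer rate and encoding overhead. The short sketch below is a back-of-the-envelope calculation, not a measurement from our test beds:

```python
# Back-of-the-envelope PCIe bandwidth per direction (illustrative, not measured).
# Gen1/Gen2 use 8b/10b encoding; Gen3 uses 128b/130b.

def pcie_gbs(gen, lanes):
    rate = {1: 2.5, 2: 5.0, 3: 8.0}[gen]                # GT/s per lane
    efficiency = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}[gen]
    return rate * efficiency * lanes / 8                 # GB/s per direction

print(round(pcie_gbs(3, 16), 1))   # ~15.8 GB/s - a full PCIe 3.0 x16 slot
print(round(pcie_gbs(3, 8), 1))    # ~7.9 GB/s  - x8 in a PCIe 3.0 dual-card split
print(round(pcie_gbs(2, 4), 1))    # ~2.0 GB/s  - the x4 PCIe 2.0 from the chipset
```

Those chipset-fed x4 lanes also have to travel over the DMI link back to the CPU, which is a large part of why they come with the latency penalty mentioned above.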

We have tried to be fair and take motherboards that may carry a small premium but are equipped to deal with the job. As a result, some motherboards also use MultiCore Turbo (MCT), which, as we have detailed in the past, gives the CPU's top turbo speed regardless of how many cores are loaded.

As a result of this lane allocation business, each value in our review will be attributed to a CPU, whether it uses MCT, and a lane allocation. This means that something such as i7-3770K+ (3 - x16/x8/x8) would represent an i7-3770K with MCT in a PCIe 3.0 tri-GPU configuration. More on this below.
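
As a concrete illustration of that labeling scheme, here is a minimal sketch of how such a label could be assembled; the helper function and its arguments are hypothetical, not part of our actual test harness:

```python
# Illustrative sketch of the data-point label convention used in the charts.
# make_label() and its parameters are hypothetical, not AnandTech tooling.

def make_label(cpu, mct, pcie_gen, lanes):
    """Build a label such as 'i7-3770K+ (3 - x16/x8/x8)'.

    cpu      -- CPU name, e.g. 'i7-3770K'
    mct      -- True if MultiCore Turbo is enabled (adds the '+' suffix)
    pcie_gen -- PCIe generation of the GPU slots (2 or 3)
    lanes    -- lane width per GPU, e.g. [16, 8, 8] for a tri-GPU setup
    """
    suffix = "+" if mct else ""
    allocation = "/".join(f"x{width}" for width in lanes)
    return f"{cpu}{suffix} ({pcie_gen} - {allocation})"

print(make_label("i7-3770K", True, 3, [16, 8, 8]))   # i7-3770K+ (3 - x16/x8/x8)
print(make_label("i7-3930K", False, 2, [16, 16]))    # i7-3930K (2 - x16/x16)
```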

For Sandy Bridge and Ivy Bridge: ASUS Maximus V Formula, Gigabyte Z77X-UP7 and Gigabyte G1.Sniper M3.

The ASUS Maximus V Formula has a three-way lane allocation of x8/x4/x4 for Ivy Bridge (x8/x8 for Sandy Bridge), and enables MCT.

The Gigabyte Z77X-UP7 offers up to four-way lane allocations of x16/x16, x16/x8/x8 and x8/x8/x8/x8, all via a PLX 8747 chip. It also has a single x16 slot that bypasses the PLX chip and is thus native, and all configurations enable MCT.

The Gigabyte G1.Sniper M3 is a little different, offering x16, x8/x8, or if you accidentally put the cards in the wrong slots, x16 + x4 from the chipset. This additional configuration is seen on a number of cheaper Z77 ATX motherboards, as well as a few mATX models. The G1.Sniper M3 also implements MCT as standard.
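
Purely as a way of keeping the Z77 combinations straight, the lane allocations described above could be tabulated along these lines. This is an organizational sketch of the information in the three paragraphs above, not our actual test scripts:

```python
# Z77 lane allocations per GPU count with an Ivy Bridge CPU, as described above.
# The dictionary layout itself is illustrative, not taken from our setup files.

Z77_LANE_CONFIGS = {
    "ASUS Maximus V Formula": {          # MCT enabled
        1: ["x16"],
        2: ["x8/x8"],                    # Sandy Bridge tops out here
        3: ["x8/x4/x4"],                 # Ivy Bridge only
    },
    "Gigabyte Z77X-UP7": {               # MCT enabled
        1: ["x16 (native, bypasses the PLX 8747)"],
        2: ["x16/x16"],                  # via PLX 8747
        3: ["x16/x8/x8"],                # via PLX 8747
        4: ["x8/x8/x8/x8"],              # via PLX 8747
    },
    "Gigabyte G1.Sniper M3": {           # MCT enabled
        1: ["x16"],
        2: ["x8/x8", "x16 + x4 (chipset lanes, wrong-slot configuration)"],
    },
}
```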

For Sandy Bridge-E: ASRock X79 Professional and ASUS Rampage IV Extreme

The ASRock X79 Professional is a PCIe 2.0 enabled board offering x16/x16, x16/x16/x8 and x16/x8/x8/x8.

The ASUS Rampage IV Extreme is a PCIe 3.0 enabled board offering the same PCIe layout as the ASRock, except it enables MCT by default.

For Westmere Xeons: The EVGA SR-2

Due to the timing of the first roundup, I was able to use an EVGA SR-2 with a pair of Xeons on loan from Gigabyte for our server testing. The SR-2 forms the basis of our beast machine below, and uses two Westmere-EP Xeons to give PCIe 2.0 x16/x16/x16/x16 via NF200 chips.

For Core 2 Duo: The MSI i975X Platinum PowerUp and ASUS Commando (P965)

The MSI is the motherboard I used for our quick Core 2 Duo comparison pipeline post a few months ago – I still have it sitting on my desk, and it seemed apt to include it in this test. The MSI i975X Platinum PowerUp offers two PCIe 1.1 slots, capable of CrossFire up to x8/x8. I also rummaged through my pile of old motherboards and found the ASUS Commando with a CPU installed, and as it offered x16 + x4, this was also tested.

For Llano: The Gigabyte A75-UD4H and ASRock A75 Extreme6

Llano throws a bit of an oddball into the mix, being a true quad core unlike Trinity. The A75-UD4H from Gigabyte was the first one to hand, and offers two PCIe slots at x8/x8. Like the Core 2 Duo setups, we are not SLI enabled here.

After finding an A8-3850 CPU as another comparison point for the A6-3650, I pulled out the A75 Extreme6, which offers three-way CFX as x8/x8 + x4 from the chipset as well as the configurations offered by the A75-UD4H.

For Trinity: The Gigabyte F2A85X-UP4

Technically, A85X motherboards for Trinity support up to x8/x8 in CrossFire, but the F2A85X-UP4, like other high-end A85X motherboards, implements four lanes from the chipset for three-way AMD configurations. Our initial showing of three-way via that chipset link was not that great, and this review will help quantify it.

For AM3: The ASUS Crosshair V Formula

As the 990FX covers a lot of processor families, the safest place to sit is on one of the top motherboards available. Technically the Formula-Z is newer and supports Vishera more easily, but we have not had the Formula-Z in to test, and the basic Formula was still able to run an FX-8350 as long as we kept the VRMs cool as a cucumber. The CVF offers up to three-way CFX and SLI testing (x16/x8/x8).

The Memory

Our good friends at G.Skill are putting their best foot forward in supplying us with high-end kits to test. The issue with memory is more dependent on what the motherboard will support – in order to keep testing consistent, no overclocks were performed. This meant that boards and BIOSes limited to a certain DRAM multiplier were set at the maximum multiplier possible. In order to keep things fairer overall, the modules were then adjusted for tighter timings. All of this is noted in our final setup lists.
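
As a rough illustration of the trade-off being made here, absolute CAS latency in nanoseconds is CL × 2000 / data rate (in MT/s), so dropping a DDR3-2400 kit to a board's DDR3-1866 multiplier cap while tightening the timings keeps the true latency in a similar ballpark. The numbers below are a worked example with assumed timings, not our exact settings:

```python
# Worked example (assumed timings): absolute CAS latency in nanoseconds.
# DDR3 transfers twice per clock, so one cycle lasts 2000 / data_rate_mts ns.

def cas_latency_ns(cl, data_rate_mts):
    return cl * 2000.0 / data_rate_mts

print(round(cas_latency_ns(10, 2400), 2))  # 8.33 ns  - kit at its rated DDR3-2400 CL10
print(round(cas_latency_ns(10, 1866), 2))  # 10.72 ns - same kit capped at DDR3-1866, stock CL
print(round(cas_latency_ns(9, 1866), 2))   # 9.65 ns  - DDR3-1866 with timings tightened to CL9
```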

Our main memory testing kit is our trusty G.Skill 4x4 GB DDR3-2400 RipjawsX kit, which has been part of our motherboard testing for over twelve months. For times when two systems were being tested side by side, a G.Skill 4x4 GB DDR3-2400 Trident X kit was also used.

For The Beast, which is one of the systems affected by the issue with higher memory dividers, we pulled in a pair of tri-channel kits from X58 testing. These are high-end kits as well, now discontinued, as they tended to stop working when given too much voltage. We have sets of 3x2 GB OCZ Blade DDR3-2133 8-9-8 and 3x1 GB Dominator GT DDR3-2000 7-8-7 for this purpose, which we ran at 1333 6-7-6 due to motherboard limitations at stock settings.

To end, our Core 2 Duo CPUs get their own DDR2 memory for completeness. This is a 2x2 GB kit of OCZ DDR2-1033 5-6-6.

 

 

Comments

  • Dribble - Wednesday, May 8, 2013 - link

    Mmm, not done by a true gamer as it doesn't address a number of things:

    1) Not everyone wants to run the game at max settings getting 30fps. Many want 60, or in my case 120fps as that's what my monitor can do. To do this we turn down graphics a bit, but this makes us much more likely to be cpu bound. Remember generally you can turn down the graphics settings to ease strain on the gpu for higher fps, but cpu settings are much more fixed - you can't lower the resolution or turn off AA to fix cpu bottlenecks!

    2) Min fps is key, not average fps. This I learned years ago playing UT2004. That game might return 60fps most of the time while admiring the scenery, but when you were in the middle of an intense fight with multiple players fps could halve or even quarter. It's obviously in the middle of a firefight that you most need the high fps to win.

    3) There's a huge difference between single player games and online. Basically most single player games also run on consoles, so they run like a dream on most PC CPUs as even the slower ones are more powerful. However go onto a 64 player server (which a console can't do) and watch the fps tank - suddenly the cpu is being worked much harder. BF3 and UT-engined games all do this when you get on a large server.

    Hence your conclusions are wrong imo. You want an o/c intel quad core - an i5 750 o/c to about 4GHz+ or better really. Why that - because basically it's still not far off as fast as you'll get - the latest intel CPUs still have 4 cores, ipc isn't much better and they only clock a little higher than that.
  • maximumGPU - Wednesday, May 8, 2013 - link

    i'm pretty sure there's a sizeable jump moving from an i5 750 to 3570K, in both ipc and potential for overclock.
  • Dribble - Wednesday, May 8, 2013 - link

    I suppose it depends on what you define "sizable" as? Perhaps an i5-2500K would be better, but even with an i5 750 @ 4GHz vs an i5-3570K @ 4.5GHz we aren't talking huge increases in cpu power - 25-30% maybe (hyperthreading aside, which generally isn't much help in games).
  • IanCutress - Wednesday, May 8, 2013 - link

    I very much played a lot of clan-based BF2/BF2142 for a long while. 'True Gamer' is often a misnomer anyway, perpetuated by those who want to categorize others or want to announce their own true nature.

    1) The push will always be towards the highest settings at which you can hit that 60-120 FPS ideal. If some of the games we see today can't hit 60 on a single GPU at 1440p, at 4K it's all going to tank. Many games tested in this review hit 60+ above two GPUs which was the point of this article to begin with.

    2) Min FPS falls under the issue of statistical reporting. If you run a game benchmark (Dirt3) and in one scene of genuine gameplay there is a 6-car pileup, it would show the min FPS of that one scene. So if that happened on an FX-8350 and its min FPS was down to 20 FPS, while others that didn't have this scene were around 90 FPS for their minimum, how is that easily reported and conveyed in a reasonable way to the public? A certain amount of acknowledgement is made of the fact that we're taking overall average numbers, and that users would apply brain matter with regard to an 'average minimum'.

    3) This is a bit obvious, but try doing 1400 tests on 64 player servers and keeping any level of consistency. If this is your usage scenario, then you'll know what concessions you will have to make.

    An i5-750 on an older chipset also misses out on some of the newer features - native SATA 6 Gbps, for example, for an awesome RAID-0 setup. This could be the limiting factor in your gaming PC. We will be testing that generation for the next update of this testing :)

    As written in the review, the numbers we have taken are but a small subset of everything that is possible, and we can only draw conclusions from the numbers we have taken. There are other numbers available online which may be more relevant to you, but these are the ones from our test-bed situations. Your setup is different from someone else's, which is a different usage scenario from others - testing them all would require a few years in Narnia. But suggestions are more than welcome!

    Ian
  • darckhart - Wednesday, May 8, 2013 - link

    I agree with Dribble's post above, but your reply was also well thought and written, just like your article. Keep up the good work. Thanks!
  • Dribble - Wednesday, May 8, 2013 - link

    I suppose "true gamer" does sound a bit elitist, by that I really meant someone who plays not benchmarks. I agree it's hard to test min fps in 64 player BF3 matches, but that's the sort of moment when your choice of cpu matters, not in for example in a canned off-line BF3 benchmark. As you are advising on cpu buying choices for gaming it is pretty important.

    My personal experience is that offline canned benchmarks giving average fps say you require a cpu a lot less powerful than you really do when you take your fancy new rig online in the latest super popular multiplayer game. Particularly as in that game you pretty quickly start playing to win and are willing to sacrifice some fancy settings to get the fps up so you don't lose again as you try to hit that annoying fast-moving 15 year old while your fps is tanking :)

    Therefore while it's fine to advise people who only want to play offline console ports using benchmarking as you did, it just doesn't work for the rest of us.
  • JarredWalton - Wednesday, May 8, 2013 - link

    It sounds more than a bit elitist: it is elitist. For every gamer that spends 10-20 hours of time each week in multiplayer gaming (MMORPG, or whatever FPS you want to name, or World of Tanks, etc.), there are likely at least ten times as many gamers that generally stick to single player games. What's more, that sort of definition of "true gamer" may as well just say "high school or early 20s with little life outside of the digital realm." Yes, that's a relatively big demographic, but there are many 20, 30, 40, and even 50-somethings that still play a fair amount of games, but never bother with the multiplayer stuff. In fact, I'd say that of the 30+ year old people I know well, less than 1% would meet your "true gamer" requirement, while 5% would still be "gamers".

    Says the 39 year old fuddy duddy.
  • Spunjji - Wednesday, May 8, 2013 - link

    The purpose of this article is to give a scientific basis for comparison within the boundaries of realistic testing deadlines. I would be interested to see you produce something as statistically rigorous based on performance numbers taken from online gaming. If you managed to do it before said numbers became irrelevant due to changes to the game code I would be utterly flabbergasted.
  • Dribble - Thursday, May 9, 2013 - link

    No, the purpose of this article is to recommend cpu's for gaming.
  • frozen ox - Thursday, May 9, 2013 - link

    There is no way to recreate or capture all the variables/scenarios to repeatedly benchmark a firefight in BF3 across multiple systems. The results from this hardware review are relevant, because they are easily repeatable by others and provide a fair baseline to compare systems. The point of this study is not what CPU do I need to play BF3 or Crysis at max settings, it's how much bandwidth bottleneck is going on with a single GPU setup? What happens in reality with multi-GPU setups? How well does the new AMD architecture (because "true gamers" want to save $$ to buy games) compare to Intel?

    What you have to do, as a "true gamer" and someone who has enough wits about them, is extrapolate the results to your scenario because everyone's will be different. And honestly, anyone who plays FPS...the "true gamers", will know what you pointed out. It's insanely obvious even the first time you play a demanding FPS MMPOG like BF3.

    I, however, play single player 99% of the time. The only online FPS I'll play now is CS.
