Fable Legends Early Preview: DirectX 12 Benchmark Analysis
by Ryan Smith, Ian Cutress & Daniel Williams on September 24, 2015 9:00 AM EST

Update 2016/03/07: Well, so much for that. Fable Legends has been canceled, so another game will ultimately get to claim the title of the first Unreal Engine 4 based DX12 game.
DirectX 12 is now out in the wild as part of Windows 10 and the updated driver model, WDDM 2.0, that comes with it. Unlike DX11, there were no major gaming titles at launch - we are now waiting for games to take advantage of DX12 and to see what difference it makes to the game playing experience. One of the main focal points of DX12 is draw calls: leveraging multiple processor cores to dispatch GPU workloads, rather than the previous model of a single core doing most of the work. DX12 brings about a lot of changes with the goal of increasing performance and offering an even more immersive experience, but it does shift some support requirements, such as SLI or CrossFire, onto the engine developers. We tackled two synthetic tests earlier this year, Star Swarm and 3DMark, but due to timing and other industry events we are holding off on the Ashes of the Singularity benchmark until that game nears completion. In the meantime, a PR team got in contact with us regarding the upcoming Fable Legends title, which uses Unreal Engine 4, and the early access preview benchmark that came with it. Here are our results so far.
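To illustrate the multi-core draw call point in code, here is a minimal, hypothetical sketch (not taken from Fable Legends or Unreal Engine 4) of the DirectX 12 pattern involved: each worker thread records its own command list, and the main thread submits them all to the GPU in one call. The function and variable names are ours, and real engine code would also manage fences, allocator reuse and pipeline state.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical sketch: each worker thread records its own command list,
// then the main thread submits them all in one ExecuteCommandLists call.
void RecordSceneChunk(ID3D12Device* device,
                      ID3D12PipelineState* pso,
                      ComPtr<ID3D12CommandAllocator>& allocator,
                      ComPtr<ID3D12GraphicsCommandList>& cmdList)
{
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), pso,
                              IID_PPV_ARGS(&cmdList));

    // ... record this thread's slice of the scene's draw calls here ...

    cmdList->Close();
}

void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* queue,
                 ID3D12PipelineState* pso,
                 unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    // Under DX11 most of this work funnels through a single driver thread;
    // under DX12 each core can build its own command list in parallel.
    for (unsigned i = 0; i < workerCount; ++i)
        workers.emplace_back(RecordSceneChunk, device, pso,
                             std::ref(allocators[i]), std::ref(lists[i]));
    for (auto& t : workers)
        t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    // A real engine would then signal a fence and wait before reusing allocators.
}
```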
Fable Legends
Fable Legends is an Xbox One/Windows 10 exclusive free to play title built by Lionhead Studios in Unreal Engine 4. The game, styled as a ‘cooperative action RPG’, consists of asymmetrical multiplayer matches with attackers trying to raid a base and the defender playing more of a tower defense position.
The benchmark provided is more of a graphics showpiece than a representation of the gameplay, designed to show off the capabilities of the engine and the DX12 implementation. As a result we unfortunately did not get to see any of the actual gameplay, which would seem to focus more on combat. This is one of the first DirectX 12 benchmarks available - Ashes of the Singularity by Stardock was released just before IDF, but due to scheduling we have not had a chance to dig into that one yet - so this will be our first look at a DirectX 12 game engine with a game attached.
Official Trailer
This benchmark pans through several outdoor scenes in a fashion similar to the Unigine Valley benchmark, focusing more on landscapes, distance drawing and tessellation than on an up-front first-person perspective. Graphical effects such as dynamic global illumination are computed on the fly, producing subtle differences in the lighting, and the benchmark shows the day/night cycle being accelerated, similar to the large Grand Theft Auto benchmark. The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’, which allow the application to better take advantage of the available hardware and give developers finer control over multi-threaded submission and GPU memory resources. The updated engine has had several additions to implement these visual effects, and the developers have promised that the use of DirectX 12 will help to improve both the experience and performance.
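As a rough, hypothetical illustration of two of those explicit features (this is not Unreal Engine 4's actual code), the snippet below creates a separate compute queue for asynchronous work and issues a manual resource barrier before a compute-written buffer is sampled by a pixel shader; the resource name and surrounding structure are placeholders of ours.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical sketch of two DX12 explicit features mentioned above:
// a separate compute queue for async work, and a manually tracked
// resource barrier (something the DX11 driver used to handle for you).
void SetupAsyncComputeAndBarrier(ID3D12Device* device,
                                 ID3D12GraphicsCommandList* gfxList,
                                 ID3D12Resource* giBuffer /* placeholder */)
{
    // 1) Asynchronous compute: a COMPUTE-type queue can run lighting or
    //    post-processing work alongside the graphics queue.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // 2) Manual resource barrier tracking: the application, not the driver,
    //    declares that giBuffer transitions from a compute-written UAV to a
    //    shader-readable resource before the graphics pass samples it.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = giBuffer;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    gfxList->ResourceBarrier(1, &barrier);

    // Synchronisation between the two queues would be handled with
    // ID3D12Fence objects, omitted here for brevity.
}
```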
The Test
The software provided to us is a prerelease version of Fable Legends with early drivers, so the performance at this point is most likely not representative of the game at launch and should improve before release. What we will see here is more of a broad picture of how different GPUs scale when DX12 features are thrown into the mix. In fact, AMD sent us a note that there is a new driver available specifically for this benchmark which should improve the scores on the Fury X, although it arrived too late for this pre-release look at Fable Legends (Ryan did the testing but is covering Samsung’s 950 Pro launch in Korea at this time). If anything, this underscores just how early in the game and driver development cycle DirectX 12 is for everyone involved. But as with most important titles, we expect drivers and software updates to continue to drive performance forward as developers and engineers come to understand how the new version of DirectX works.
With that being said, there do not appear to be any stability issues with the benchmark as it stands, and we have had time to test graphics cards going back a few generations from both AMD and NVIDIA. Our pre-release package came with three test standards: 1280x720, 1920x1080 and 4K. We also tested a number of these combinations with multiple CPU core and thread count simulations in order to emulate a number of popular CPUs on the market.
CPU: | Intel Core i7-4960X in 3 modes: 'Core i7' - 6 Cores, 12 Threads at 4.2 GHz; 'Core i5' - 4 Cores, 4 Threads at 3.8 GHz; 'Core i3' - 2 Cores, 4 Threads at 3.8 GHz |
Motherboard: | ASRock Fatal1ty X79 Professional |
Power Supply: | Corsair AX1200i |
Hard Disk: | Samsung SSD 840 EVO (750GB) |
Memory: | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26) |
Case: | NZXT Phantom 630 Windowed Edition |
Monitor: | Asus PQ321 |
Video Cards: | AMD Radeon R9 Fury X, AMD Radeon R9 290X, AMD Radeon R9 285, AMD Radeon HD 7970, NVIDIA GeForce GTX 980 Ti, NVIDIA GeForce GTX 970 (EVGA), NVIDIA GeForce GTX 960, NVIDIA GeForce GTX 680, NVIDIA GeForce GTX 750 Ti |
Video Drivers: | NVIDIA Release 355.82; AMD Catalyst 15.201.1102 |
OS: | Windows 10 |
This Test
All the results in this piece are from discrete GPUs. The benchmark outputs a score, which is merely the average frame rate multiplied by a hundred, but it also dumps an extensive data log in which it tracks over 186 different elements of the system every frame, such as the compute time for various effects in each frame. Our testing takes on three roles: a direct GPU comparison of average frame rates at 1080p and 720p with our i7-4960X, CPU scaling at each resolution with the GTX 980 Ti and AMD Fury X, and then a deeper analysis of the percentile data for these two graphics cards at each resolution and each CPU configuration.
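To make the score and percentile terminology concrete, here is a small, hypothetical helper (the benchmark's actual log format is not reproduced here) that turns a list of per-frame render times into an average frame rate, a score in the benchmark's style (average FPS multiplied by a hundred), and a chosen percentile frame time.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical helpers: we simply assume a vector of per-frame render
// times in milliseconds, not the benchmark's real log format.
double AverageFps(const std::vector<double>& frameTimesMs)
{
    double totalMs = 0.0;
    for (double t : frameTimesMs) totalMs += t;
    return 1000.0 * frameTimesMs.size() / totalMs;
}

double PercentileFrameTimeMs(std::vector<double> frameTimesMs, double pct)
{
    // e.g. pct = 0.99 gives the frame time that 99% of frames come in under.
    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    size_t idx = static_cast<size_t>(pct * (frameTimesMs.size() - 1));
    return frameTimesMs[idx];
}

int main()
{
    std::vector<double> frameTimesMs = {16.1, 17.0, 15.8, 33.5, 16.4}; // sample data
    double avgFps = AverageFps(frameTimesMs);
    double score  = avgFps * 100.0;          // score = average frame rate x 100
    double p99    = PercentileFrameTimeMs(frameTimesMs, 0.99);
    std::printf("avg %.1f fps, score %.0f, 99th percentile %.1f ms\n",
                avgFps, score, p99);
    return 0;
}
```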
141 Comments
TheJian - Saturday, September 26, 2015 - link
"There is a big caveat to remember, though. In power consumption tests, our GPU test rig pulled 449W at the wall socket when equipped with an R9 390X, versus 282W with a GTX 980. The delta between the R9 390 and GTX 970 was similar, at 121W. "You seem to see through rose colored glasses. At these kinds of watt differences you SHOULD dominate everything...LOL. Meanwhile NV guys have plenty of watts to OC and laugh. Your completely ignoring the cost of watts these days when talking a 100w bulb for hours on end for 3-7yrs many of us have our cards. You're also forgetting that most cards can hit strix speeds anyway right? NOBODY buys stock when you can buy an OC version from all vendors for not much more.
"Early tests have shown that the scheduling hardware in AMD's graphics chips tends to handle async compute much more gracefully than Nvidia's chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it's just not enabled yet. We'll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support."
You also seem to ignore that your own link (TechReport) even states NV has async turned off for now. I'm guessing they're just waiting for all the DX12 stuff to hit, seeing if AMD can catch them, and then boom, hello more perf...LOL.
https://techreport.com/review/28685/geforce-gtx-98...
"Thanks in part to that humongous cooler, the Strix has easily the highest default clock speeds of any card in this group, with a 1216MHz base and 1317MHz boost"
A little less than you say, but yes, NV gives you free room to run at WHATEVER your card can do within the allowed limit. Unlike AMD's UP TO crap, with NV you get GUARANTEED X, and more if available. I prefer the latter. $669 at Amazon for the STRIX, so for $20 I'll take the massive gain in perf (cheapest at Newegg is $650 for a 980 Ti). I'll get it back in watts saved on electricity in no time. You completely ignore Total Cost of Ownership, not to mention DRIVERS and how RARE AMD driver drops are. NV puts out a WHQL driver monthly or more.
https://techreport.com/review/28685/geforce-gtx-98...
Any time you offer me ~15% more perf for 3% more cost, I'll take it. If you tell me electricity costs mean nothing, then in the same sentence I'll tell you $20 means nothing on the price of a card most people live with for years.
Frostbite is NOT brand agnostic. Cough, Mantle, 8 mil in funding, cough... The fact that MANY games run better in DX11 for NV is just DRIVERS and time spent with DEVS (Witcher 3, Project Cars etc., the devs said this). This should be no surprise when R&D has been down for 4yrs at AMD while the reverse is true at NV (which now spends more on R&D than AMD, which has a larger product line).
Shocker: ASHES looks good for AMD when it was a MANTLE engine game...ROFL. Jeez, guy... Even funnier that once NV optimized for Star Swarm they had massive DX12 improvements and BEAT AMD in it, not to mention the massive DX11 improvement too (which AMD ignored). Gamers should look at who has the funding to keep up in DX11 for a while too, correct? AMD seems to have moved on to DX12 (not good for those poor gamers who can't afford new stuff, right?). You seem to only see the arguments for YOUR side. Near as I can tell, NV looks good until you concentrate on settings where I will not play (1280x720, or crap CPUs). Also, you're basing all your conclusions on BETA games and the current state of drivers before any of this stuff is real...LOL. You can call the Unreal 4 engine unrepresentative, but I'll remind you the Unreal engine has been used in TONS of games over the last two decades, so AMD had better be good here at some point. You can't repeatedly lose in one of the most prolific engines on the planet, right? You can't just claim "that engine is biased" and ignore the fact that it is REALITY that it will be used a LOT. If all engines were BIASED towards AMD, I would buy AMD no matter what NV put out, if AMD wins everything...ROFL. I don't care about the engine, I care about the results of the cards running the games I play. IF NV pays off every engine designer, I'll buy NV because...well, DUH. You can whine all you want, but GAMERS are buying 82% NV for a reason. I bought an INTEL i7 for a REASON. I don't care if they cheat, pay someone off, use proprietary tech etc, as long as they win, I'll buy it. I might complain about the cheating, but if it wins, I'll buy it anyway...LOL.
IE, I don't have to LIKE Donald Trump to understand he knows how to MAKE money, unlike most of Congress/POTUS. He's pretty famous for FIRING people too, which again, Congress/POTUS apparently have no idea how to get done. They also have no idea how to manage a budget, which again, TRUMP does. They have no idea how to protect the border, despite claiming they'll do it for a decade or two. I'll take that WALL please, Trump (which works in Israel, China, etc), no matter how much it costs compared to decades of welfare losses, education dropping, medical going to illegals etc. The wall is CHEAP (like an NV card over 3-7yrs of usage at 120W+ savings, as your link shows). I can hate Trump (or Intel, or NV) and still recognize the value of his business skills, negotiation skills, firing skills, budget skills etc. Get it? If ZEN doesn't BURY Intel in perf, I'll buy another i7 for my dad...LOL.
http://www.anandtech.com/show/9306/the-nvidia-gefo...
Even AnandTech hit Strix speeds with a reference card. Core clocks of 250MHz free on top of 1000MHz? OK, sign me up. 4 months later likely everything does this or more, as manufacturing only improves over time. All of NV's cards OC well except for the bottom rungs. Call me when AMD wins where most gamers play (above 720p and with good CPUs). Yes, DX12 bodes well for poor people and AMD's crap CPUs. But I'm neither. Hopefully ZEN fixes the CPU side so I can buy AMD again. They still have a shot at my die-shrunk GPU purchase next year too, but not if they completely ignore DX11, keep failing to put out game-ready drivers, lose the watt war etc. ZEN's success (or not) will probably influence my GPU purchase too. If ZEN benchmarks suck there will probably be no profits to make their GPU drivers better etc. Think BIGGER.
anubis44 - Friday, October 30, 2015 - link
As already mentioned, nVidia pulled out the seats, the parachutes and anything else they could unscrew and threw them out of the airplane to lighten the load. Maxwell's low power usage comes at a price, like no hardware-based scheduler, and DX12 games will frequently make use of hardware scheduling for context switching and dynamic reallocation of shaders between rendering and compute. Why? Because the Xbox One and the PS4, having AMD Radeon GCN graphics cores, can do this. So in the interest of getting the power usage down, nVidia left out a hardware feature even the PS4 and Xbox One GPUs have. Does that sound smart? It's called 'marketing': "Hey look! Our card uses LESS POWER than the Radeon! It's because we're using super-duper, secret technologies!" No, you're leaving stuff off the die. No wonder it uses less power.

RussianSensation - Thursday, September 24, 2015 - link
The 925MHz HD 7970 is beating the GTX 960 by 32%. The R9 280X currently sells for $190 on Newegg and has another 13.5% increase in GPU clocks, which implies it would beat the 960 by a whopping 40-45%!

The R9 290X beating the 970 by 13% in a UE4 engine is extremely uncharacteristic. I can't recall this ever happening. Also, other sites are showing the $280 R9 390 on the heels of the $450 GTX 980.
http://www.pcgameshardware.de/DirectX-12-Software-...
That's an extremely bad showing for NV in each competing pricing segment, except for the 980 Ti. And because UE4 has significantly favoured NV's cards under DX11, this is actually a game engine that should have favoured NV's Maxwell as much as possible. Now imagine DX12 in a brand-agnostic game engine like CryEngine or Frostbite.
In the end it's not going to matter to gamers who upgrade every 2 years, but budget gamers who cannot afford to do so should pay attention.
CiccioB - Friday, September 25, 2015 - link
Ahahahah... and that should prove what? That a chip twice as big and consuming twice the energy can perform 32% better than another?
Oh, sorry, you were speaking about prices... yes... so you are just claiming that that power-sucking beast has a hard time selling like the winning little hero that is filling nvidia's pockets, and can only be obtained at that price when a stock-clearing operation is going on?
I can't really understand these kinds of comparisons. The GTX 960 runs against the Radeon 285, or now the 380. It performs fantastically for the size of its die and the power it draws, and it has pretty much cornered AMD's margins on boards that mount a beefy GPU like Tahiti or Tonga.
The only hope for AMD to come out of this pitiful situation is that with the next generation and a new PP (production process), its performance/die space ratios are closer to the competition's; otherwise they won't gain a single cent out of the graphics division for a few years yet again.
The_Countess - Friday, September 25, 2015 - link
Ya, you seem to have forgotten that the HD 7970 is 3+ years old while the GTX 960 was released this year, and that it has only ~43% more transistors (~4.3 billion vs ~3 billion). And the only reason nvidia's power consumption is better is that they cut double precision performance on all their cards down to nothing.
MapRef41N93W - Saturday, September 26, 2015 - link
So wrong it's not even funny. Maybe you aren't aware of this, but small-die Kepler already had DP cut. Only GK100/GK110 had full DP with Kepler. That has nothing to do with why GM204/GM206 have such low power draw. The Maxwell architecture is the main reason.

Azix - Saturday, September 26, 2015 - link
cut hardware scheduler?Asomething - Sunday, September 27, 2015 - link
Sorry to burst your bubble, but nvidia didn't cut DP completely on small Kepler; they cut it down some from Fermi but disabled the rest so they could keep DP on their Quadro series, and there were softmods to unlock that DP. For Maxwell they did actually cut DP completely to save on die space and power consumption. AMD did the same for GCN 1.2's Fiji in order to get it on 28nm.

CiccioB - Monday, September 28, 2015 - link
I don't really care how old Tahiti is. I know it was used as a comparison against a chip that is half its size and power consumption ON THE SAME PP. So how old it is doesn't really matter. Same PP, so what should be important is how good both architectures are.

What counts is that AMD has not done anything radical to improve its architecture. It replaced Tahiti with a similarly beefy GPU, Tonga, which didn't really stand a chance against Maxwell. They were the new proposals of both companies: Maxwell vs GCN 1.2. See the results.
So again, go and look at how big GM206 is and how much power it draws. Then compare with Tonga, and the only thing you can see as similar is the price. nvidia's solution beats AMD's from every point of view, bringing AMD's margins to nothing, even though nvidia is still selling its GPU at a higher price than it really deserves.
In reality one should compare Tahiti/Tonga with GM204 on size and power consumption. The results would simply put AMD's GCN architecture in the toilet. The only reasonable move was to lower the price so much that they could sell a higher-tier GPU into a lower series of boards.
Performance relative to die space and power consumption doesn't make GCN a hero in anything except having worsened AMD's position even further compared to the old VLIW architecture, where AMD fought with similar performance but smaller dies (and lower power consumption).
CiccioB - Monday, September 28, 2015 - link
I forgot... about double precision... I still don't care about it. Do you use it in your everyday life? How many professional boards is AMD selling that justify putting DP units into such GPUs? Just for numbers on the nicely painted box? So DP is not a necessity for 99% of users.
And apart from that silly point, nvidia's DP units were not present on GK104/GK106 either, so the big efficiency gain was made by improving their architecture (from Kepler to Maxwell), while AMD just moved from GCN 1.0 to GCN 1.2 with almost no efficiency gains.
The problem is not whether DP units are present or not. It is that AMD could not make its already struggling architecture better in absolute terms compared with the old version. And with Fiji they demonstrated that they could do even worse, if anyone had any doubts.