AMD Shows Off Dual-GPU Fiji Card At PC Gaming Show
by Ryan Smith on June 17, 2015 8:00 AM EST
Briefly announced and discussed during AMD’s 2015 GPU product presentation yesterday morning was AMD’s forthcoming dual Fiji video card. The near-obligatory counterpart to the just-announced Radeon R9 Fury X, the unnamed dual-GPU card will be taking things one step further with a pair of Fiji GPUs on a single card.
Meanwhile, as part of yesterday evening’s AMD-sponsored PC Gaming Show, CEO Dr. Lisa Su took the stage for a few minutes to show off AMD’s recently announced Fury products. At the end, this included the first public showcase of the still-in-development dual-GPU card.
There’s not too much to say right now since we don’t know its specifications, but of course for the moment AMD is focusing on size. With 4GB of VRAM for each GPU on-package via HBM technology, AMD has been able to design a dual-GPU card that’s shorter and simpler than their previous dual-GPU cards like the R9 295X2 and HD 7990, saving space that would have otherwise been occupied by GDDR5 memory modules and the associated VRMs.
Meanwhile, on the card itself we can see a PLX PEX 8747 bridge chip providing PCIe switching between the two GPUs and the shared PCIe bus. On the power delivery side, the card uses a pair of 8-pin PCIe power connectors. No further details are being released at this time, so we’ll have to see what AMD is up to once they’re ready to reveal more about the video card.
133 Comments
Urizane - Wednesday, June 17, 2015 - link
Saying 4GB of VRAM on a card like this would be fast while the other 4GB would be slow is like saying NUMA doesn't work. There's plenty of existing application code that has dealt with utilizing separate pools of memory attached to different processors to complete the same task successfully. Essentially, all we need is a framework similar to NUMA that works on DX12 and we'll have/eat all of the 8GB cake.
extide - Wednesday, June 17, 2015 - link
It doesn't work at all like NUMA -- the cards won't be accessing each other's memory -- the PCIe bus is just not fast enough. They will need all textures local just like they do now, so yeah, while each card's memory is individually addressable, most of the data will still have to be duplicated anyways.
sabrewings - Saturday, June 20, 2015 - link
The caveat to this is that it requires developer implementation. DX12 is putting developers much closer to the actual silicon, and features like asymmetric multi-GPU will require developers to make use of them. That's why I've always felt dual-GPU cards are gimmicky. It's a lot of money and power to run the risk of a lack of software support leaving you with effectively a single GPU.
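As an aside on what "developer implementation" means here, the sketch below (our illustration, not from any commenter) shows the DX12 explicit multi-adapter path, where each GPU enumerates as its own adapter with its own local memory pool. Whether AMD's shipping driver will expose this card as two adapters or as one linked-node adapter is an assumption we can't confirm yet.

```cpp
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Enumerate every GPU the OS exposes; under explicit multi-adapter each one
    // gets its own D3D12 device and its own, separate pool of video memory.
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            // Each device sees only its own local VRAM (4GB of HBM per Fiji here).
            // Anything the other GPU needs must be copied to it explicitly, or
            // shared via a cross-adapter heap that travels over the PCIe switch.
            std::printf("Adapter %u: %ls, %zu MB dedicated VRAM\n",
                        i, desc.Description,
                        (size_t)(desc.DedicatedVideoMemory >> 20));
        }
    }
    return 0;
}
```

In the linked-node case the application would instead query ID3D12Device::GetNodeCount() on a single device, but either way nothing pools the two 4GB stacks automatically; placing and copying resources across GPUs is the developer's job.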
JMC2000 - Wednesday, June 17, 2015 - link
With my Antec 300, I can mount the radiator either at the back fan emplacement near the CPU, on the side panel fan area, above the CPU in the 140mm fan space, or at the front of the case with a bit of modding. Unless you have a smaller case, or just don't want to change fans, I don't see how placing the rad is a problem.
extide - Wednesday, June 17, 2015 - link
Then buy an air-cooled Fury X. DONE. Why are you bitching about something that is a NON ISSUE?
YazX_ - Wednesday, June 17, 2015 - link
There is no such thing as future proof; things are moving extremely fast, and these GPUs will run out of steam well before 4K becomes mainstream and playable on mid-range cards. For now 4GB of VRAM is enough; 6GB is the sweet spot, but 8GB is useless. And we still don't know how game devs are moving with DX12 -- if they use it properly, then 2GB of VRAM will be plenty to load very high resolution textures with Tiled Resources.
Jtaylor1986 - Wednesday, June 17, 2015 - link
And yet they thought 8GB of RAM was important enough to put on the 390 (X), even though it adds cost when margins are thin and power draw when they are already at the limit on both.
SonicKrunch - Wednesday, June 17, 2015 - link
DX12 will allow this card to use all 8GB on board...
CPUGPUGURU - Wednesday, June 17, 2015 - link
The very definition of making the best future-proof buying decision with the tech available now is more than 4GB of VRAM for 4K gaming, which AMD missed the boat on with their HBM1. NO 28nm GPU in the world needs all the bandwidth HBM1 provides; HBM1 is a total waste of bandwidth on 28nm GPUs. BUT what 4K gaming does need is more than 4GB -- high-end cards need to have more than 4GB of VRAM to even be considered future proof.
extide - Wednesday, June 17, 2015 - link
This GPU needs the b/w, so your theory is incorrect -- and anyways the manufacturing process doesn't really have much to do with it. I mean, clearly the engineers who did the math figured that 512GB/sec is required for 4096 shaders, and 512GB/sec would be pretty tough to get on GDDR5 -- you would need a wide bus running really fast, and that would be expensive, require complex PCBs with more layers, and use more power. HBM is a very elegant solution to all of that.
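To put rough numbers behind that, here is a quick back-of-the-envelope comparison (our illustration; the GDDR5 configuration is hypothetical, while the 4096-bit, ~1 Gbps-per-pin HBM1 figures match AMD's published Fiji spec):

```cpp
#include <cstdio>

// Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps.
static double peak_gbps(double bus_bits, double gbps_per_pin) {
    return (bus_bits / 8.0) * gbps_per_pin;
}

int main() {
    // Hypothetical GDDR5 setup: a 512-bit bus (already very wide and PCB-hungry)
    // at 7 Gbps per pin still falls short of Fiji's quoted 512 GB/s.
    std::printf("GDDR5 512-bit @ 7 Gbps:    %.0f GB/s\n", peak_gbps(512, 7.0));

    // Fiji's HBM1: four stacks, each with a 1024-bit interface at ~1 Gbps per pin.
    std::printf("HBM1 4x1024-bit @ 1 Gbps: %.0f GB/s\n", peak_gbps(4 * 1024, 1.0));
    return 0;
}
```

Hitting the full 512GB/sec on GDDR5 would mean pushing a 512-bit bus to 8 Gbps per pin, which is exactly where the extra power, signaling complexity, and PCB layers come in.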