A Quick Note on Architecture & Features

With pages upon pages of architectural documents still to get through in only a few hours, for today’s launch news I’m not going to have the time to go in depth on new features or the architecture. So I want to very briefly hit the high points on what the major features are, and also provide answers to what are likely to be some common questions.

Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
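
To put rough numbers on that, here’s a minimal sketch (a hypothetical helper for illustration only, not anything from AMD’s documentation) of how the wavefront-to-SIMD ratio translates into issue latency:

```python
# A rough, hypothetical sketch of issue latency: how many cycles it takes
# to push one wavefront through one SIMD, assuming one thread per lane per cycle.
def issue_cycles(wavefront_width: int, simd_width: int) -> int:
    return wavefront_width // simd_width

print("GCN  (Wave64 on SIMD16):", issue_cycles(64, 16), "cycles")  # 4 cycles
print("RDNA (Wave32 on SIMD32):", issue_cycles(32, 32), "cycle")   # 1 cycle
```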

In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
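
For a sense of what Rapid Packed Math buys on paper, here’s a hedged back-of-the-envelope sketch; the shader count and clock speed below are illustrative placeholders rather than official 5700 series specifications:

```python
# Hedged sketch of theoretical FMA throughput with and without packed FP16.
# The shader count and clock are placeholder values, not official specs.
def peak_tflops(stream_processors: int, clock_ghz: float, ops_per_lane: int) -> float:
    # 2 FLOPs per FMA, times the number of packed operations per lane per clock
    return stream_processors * clock_ghz * 2 * ops_per_lane / 1000

sp, clk = 2560, 1.9  # placeholder shader count and boost clock (GHz)
print(f"FP32: {peak_tflops(sp, clk, 1):.1f} TFLOPS")
print(f"FP16 (Rapid Packed Math): {peak_tflops(sp, clk, 2):.1f} TFLOPS")
```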

With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these features, and hardware support for ray tracing is on their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.

The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still, it’s actually enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes that make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, while functional in hardware, was difficult to extract a real-world performance boost from, and as a result AMD never exposed it on Vega.

Unique among consumer video cards, the new 5700 series supports PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the amount of bus bandwidth available to the card, rising from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
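
For reference, those ~16GB/sec and ~32GB/sec figures fall straight out of the per-lane transfer rates; a quick sketch of the math (per direction, ignoring protocol overhead beyond line encoding):

```python
# Back-of-the-envelope per-direction bandwidth of a PCIe x16 link.
# PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with
# 128b/130b line encoding; other protocol overhead is ignored here.
def x16_bandwidth_gbps(transfer_rate_gt_per_s: float) -> float:
    lanes = 16
    encoding_efficiency = 128 / 130
    return transfer_rate_gt_per_s * lanes * encoding_efficiency / 8  # bits -> bytes

print(f"PCIe 3.0 x16: ~{x16_bandwidth_gbps(8):.1f} GB/sec")   # ~15.8 GB/sec
print(f"PCIe 4.0 x16: ~{x16_bandwidth_gbps(16):.1f} GB/sec")  # ~31.5 GB/sec
```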

Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present – nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary FreeSync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.

The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to other workarounds such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to really get standardized and brought to a point where it’s viable in a consumer product.
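
As a rough sanity check on that bandwidth claim (pixel data only – blanking intervals and other link overhead are ignored, so the real requirement is somewhat higher):

```python
# Rough check of why 4K@144Hz pushes past DisplayPort 1.4 without DSC.
# Pixel data only; blanking and other link overhead are ignored.
def pixel_rate_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int) -> float:
    return width * height * refresh_hz * bits_per_pixel / 1e9

dp14_payload = 4 * 8.1 * (8 / 10)                  # HBR3: 4 lanes x 8.1 Gbps, 8b/10b -> ~25.9 Gbps
need_4k144 = pixel_rate_gbps(3840, 2160, 144, 30)  # 10-bit color -> ~35.8 Gbps
print(f"DP 1.4 payload: {dp14_payload:.1f} Gbps")
print(f"4K@144Hz, 10bpc: {need_4k144:.1f} Gbps (uncompressed)")
print(f"With ~3:1 DSC:   {need_4k144 / 3:.1f} Gbps")
```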


326 Comments

  • Dribble - Tuesday, June 11, 2019 - link

    Well that's wrong - the features have only been out for months and there are already several games using them extensively, with significantly better visuals as a result. That's a way better adoption rate than most DX versions get. In addition, the early adoption has pushed ray tracing into the next gen consoles in some form - you can bet that if Nvidia hadn't released RTX they would have none. Now AMD is scrambling to put something in Navi 2 (or whatever the consoles get), as the console makers are both demanding it.

    You can argue the performance isn't there yet (same for pretty well every major new graphics feature) but you can't really argue that RTX hasn't hit the ground running and had a pretty big impact.
  • Korguz - Tuesday, June 11, 2019 - link

    too bad.. that impact.. is on everyone's wallet Dribble :-) :-)
  • Beaver M. - Tuesday, June 11, 2019 - link

    Would still pick the 1080 Ti over any Turing, because it makes much more sense, even now still.
    But Nvidia was smart enough to axe it as the only Pascal one. They knew their 2080 was too crappy to leave it on the market.
  • jabbadap - Tuesday, June 11, 2019 - link

    How about VirtualLink? Does it have that, and is it included in the TDP, or is it card power only like Nvidia?
  • akyp - Tuesday, June 11, 2019 - link

    The regression in perf/$ is simply disgusting. When my 970 gives up I might as well go APU and hope Google Stadia is actually good. Too bad the Ryzen APUs are a full generation behind.
  • zodiacfml - Tuesday, June 11, 2019 - link

    About to say the same thing, but remember these cards are comparable to the Vega cards, not the RX 480/580/590.
  • RavenRampkin - Tuesday, June 11, 2019 - link

    I liked what I saw. In contrast with the $749 Ryzen part (even though that's revolutionary stuff right there, our wallets are still doomed -_-). Don't take the hype train bait and it'll be twice as difficult to disappoint you. Call me an AMDtard fanboy, I don't mind ¯\_(ツ)_/¯

    (on the topic of all the megafeaturez: not believing in wide future adoption of all those DLSSes doesn't make a person a fanboy. That's some Elon Muscus level shtick imo. Same way, you'd probably call me a fanboy for bemoaning the low popularity of numerous other -- open-source -- RTG goodies. Looks like AMD decided not to bemoan any longer and go freestyle mode. No matter the performance and competition. Meh? Meh. But product quality isn't always measured in ad banner space and use in pre-builts, winkity wink to the "yarr Vega sux!!!111" gang and RTX's Witnesses. Vega was never bad and is certainly no worse than on launch at its current prices: 56 starting at $200 used, $270 new, 64 $270 used, $330 new. (Lithuania used market, U.K. stores for new units) Don't want Vega or Pascal or Polaris? The world of RTXes, where the 2060 and 2070 just barely, and with a lot of effort, reached 1060 and 1070 MSRP, in select stores only -- while the 2080 (Ti) is still cosmic -- is waiting for you. /offtop)
  • nils_ - Tuesday, June 11, 2019 - link

    Disappointed to see that they still can't keep up with high end NVidia cards. I'm planning to get a new workstation / gaming rig and I would have liked to go full AMD, but for the gaming part I would still want an NVidia card. This leaves me with few options since I mostly use Linux and only boot into Windows for gaming, and there is no good Linux driver for NVidia (only their binary release, which is a pain in the ass to use).

    These are my options:
    1) Go full Intel (i9-9900KS) + NVidia (RTX 2080 Ti or successor) - excellent graphics support under Linux with the iGPU, excellent graphics performance
    2) Go mixed: High End Ryzen (3950X), get an RTX 2080 Ti and a low-end GPU for Linux, possibly sacrificing a few PCIe lanes in the process

    Price-wise the latter option is probably more expensive. Or I could wait for Intel to release a new desktop CPU...
  • scineram - Tuesday, June 11, 2019 - link

    What about Radeon VII?
  • nils_ - Tuesday, June 11, 2019 - link

    Great suggestion, that might work as well; though not as fast as the RTX 2080 Ti, it's probably fast enough. I'm wondering about the power use in a desktop scenario though - that's usually better with the iGPU (and the dGPU disabled).
