This is something that caught me off-guard when I first realized it, but AMD historically hasn't liked to talk about their GPU plans much in advance. On the CPU side we've heard about Carrizo and Zen years in advance, and AMD's competitor in the world of GPUs, NVIDIA, releases basic architectural information over a year ahead as well. With AMD's GPU technology, however, we typically don't hear about it until the first products implementing the new technology launch.

With AMD's GPU assets having been reorganized under the Radeon Technologies Group (RTG) and led by Raja Koduri, RTG has recognized this as well. As a result, the new group is looking to chart a somewhat different course, becoming more transparent and more forthcoming than AMD's GPU division has been in the past. The end result isn't quite like what AMD has done with their CPU division or what their competition has done with GPU architectures – RTG will say more or less depending on the subject – but among the several major shifts in appearance, development, and branding we've seen since the formation of RTG, this is another way the group is trying to set itself apart from AMD's earlier GPU efforts.

As part of AMD's RTG technology summit, I had the chance to sit down and hear about RTG's plans for their visual technologies (displays) group for 2016. Though RTG isn't announcing any new architecture or chips at this time, the company has put together a roadmap for what they want to do with both hardware and software for the rest of 2015 and into 2016. Much of what follows isn't likely to surprise regular observers of the GPU world, but it nonetheless sets some clear expectations for what is in RTG's future over much of the next year.

DisplayPort 1.3 & HDMI 2.0a: Support Coming In 2016

First and foremost then, let’s start with RTG’s hardware plans. As I mentioned before RTG isn’t announcing any new architectures, but they are announcing some of the features that the 2016 Radeon GPUs will support. Among these changes is a new display controller block, upgrading the display I/O functionality we’ve seen as the cornerstone of AMD’s GPU designs since GCN 1.1 was first launched in 2013.

The first addition here is that RTG's 2016 GPUs will include support for DisplayPort 1.3. We've covered the announcement of DisplayPort 1.3 separately in the past, when the VESA released the 1.3 standard in 2014. DisplayPort 1.3 introduces a faster signaling mode – High Bit Rate 3 (HBR3) – which in turn allows DisplayPort 1.3 to offer 50% more bandwidth than the current DisplayPort 1.2 and HBR2, boosting DisplayPort's bandwidth to 32.4 Gbps before overhead.
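The bandwidth arithmetic behind those figures is straightforward. As a quick illustrative sketch (using the published per-lane link rates, not vendor data):

```python
# Back-of-the-envelope DisplayPort link math. Figures are the published
# per-lane rates; treat this as an illustrative sketch rather than vendor data.
LANES = 4                  # a full DisplayPort connection runs 4 lanes
HBR2_PER_LANE = 5.4        # Gbps/lane, DisplayPort 1.2
HBR3_PER_LANE = 8.1        # Gbps/lane, DisplayPort 1.3
CODING_EFFICIENCY = 0.8    # 8b/10b line coding: 8 data bits per 10 transmitted

hbr2_raw = LANES * HBR2_PER_LANE   # 21.6 Gbps before overhead
hbr3_raw = LANES * HBR3_PER_LANE   # 32.4 Gbps before overhead

print(round(hbr3_raw, 1))                      # 32.4
print(round(hbr3_raw / hbr2_raw - 1, 2))       # 0.5 -> the "50% more" figure
print(round(hbr3_raw * CODING_EFFICIENCY, 2))  # 25.92 Gbps usable for pixels
```

Note that the 32.4 Gbps headline number is before the 8b/10b coding overhead; the payload available for actual pixel data is closer to 25.92 Gbps.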

DisplayPort Supported Resolutions
Standard                 Max Resolution            Max Resolution
                         (RGB/4:4:4, 60Hz)         (4:2:0, 60Hz)
DisplayPort 1.1 (HBR1)   2560x1600                 N/A
DisplayPort 1.2 (HBR2)   3840x2160                 N/A
DisplayPort 1.3 (HBR3)   5120x2880                 7680x4320

The purpose of DisplayPort 1.3 is to offer the additional bandwidth necessary to support higher resolution and higher refresh rate monitors than the 4K@60Hz limit of DP1.2. This includes supporting higher refresh rate 4K monitors (120Hz), 5K@60Hz monitors, and 4K@60Hz with color depths greater than 8 bits per channel (necessary for a good HDR implementation). DisplayPort's scalability via tiling has meant that some of these monitor configurations were already possible on DP1.2 by using MST over multiple cables; with DP1.3, however, it will now be possible to drive those configurations in a simpler SST setup over a single cable.
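To make the bandwidth argument concrete, here is a rough sketch of what each mode demands, counting active pixels only (real links also carry blanking intervals, so true figures run somewhat higher):

```python
# Rough uncompressed bandwidth needs per display mode, active pixels only.
def mode_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """24 bits/pixel = RGB at 8 bits per channel."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

DP12_USABLE = 21.6 * 0.8   # 17.28 Gbps after 8b/10b coding
DP13_USABLE = 32.4 * 0.8   # 25.92 Gbps

for name, mode in [("4K@60", (3840, 2160, 60)),
                   ("4K@120", (3840, 2160, 120)),
                   ("5K@60", (5120, 2880, 60))]:
    need = mode_gbps(*mode)
    print(name, round(need, 2), "Gbps",
          "fits DP1.2:", need <= DP12_USABLE,
          "fits DP1.3:", need <= DP13_USABLE)
```

4K@60 (about 11.9 Gbps) fits comfortably within DP1.2's usable payload, while 4K@120 and 5K@60 both exceed it and need the headroom HBR3 provides.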

For RTG this is important on several levels. The first is very much pride – the company has always been the first GPU vendor to implement new DisplayPort standards. But at the same time DP1.3 is the cornerstone of multiple other efforts for the company. The additional bandwidth is necessary for the company's HDR plans, and it's also necessary to support the wider range of refresh rates at 4K required by RTG's FreeSync Low Framerate Compensation (LFC) tech, which needs a maximum refresh rate at least 2.5x the minimum to function. That in turn has meant that while RTG has been able to apply LFC to 1080p and 1440p monitors today, they won't be able to do so with 4K monitors until DP1.3 gives them the bandwidth necessary to support 75Hz+ operation.
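The 2.5x requirement exists because LFC works by repeating frames when the game's framerate drops below the panel's minimum refresh, and it needs enough headroom at the top of the range to do so smoothly. A minimal sketch of the eligibility check, using hypothetical panel ranges:

```python
# LFC eligibility: the panel's max refresh must be at least 2.5x its minimum.
# The panel ranges below are illustrative examples, not specific products.
def supports_lfc(min_hz, max_hz, required_ratio=2.5):
    return max_hz / min_hz >= required_ratio

print(supports_lfc(30, 60))    # False: a 30-60Hz 4K panel falls short
print(supports_lfc(30, 75))    # True: hence the need for 75Hz+ 4K over DP1.3
print(supports_lfc(48, 144))   # True: a typical 1440p gaming monitor range
```

This is why a DP1.2-limited 4K monitor topping out at 60Hz can't qualify: with a 30Hz floor, the ratio is only 2.0.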

Meanwhile DisplayPort 1.3 isn’t the only I/O standard planned for RTG’s 2016 GPUs. Also scheduled for 2016 is support for the HDMI 2.0a standard, the latest generation HDMI standard. HDMI 2.0 was launched in 2013 as an update to the HDMI standard, significantly increasing HDMI’s bandwidth to support 4Kp60 TVs, bringing it roughly on par with DisplayPort 1.2 in terms of total bandwidth. Along with the increase in bandwidth, HDMI 2.0/2.0a also introduced support for other new features in the HDMI specification such as the next-generation BT.2020 color space, 4:2:0 chroma sampling, and HDR video.
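The 4:2:0 chroma sampling mentioned above matters for bandwidth because chroma is stored at quarter resolution, halving the average bits per pixel versus full 4:4:4. A quick sketch of the per-pixel math:

```python
# Average bits per pixel under common chroma subsampling schemes.
# 4:4:4 -> 3 full samples per pixel; 4:2:2 -> 2 on average;
# 4:2:0 -> 1 luma sample plus 2 chroma samples shared across 4 pixels (1.5).
def avg_bits_per_pixel(bit_depth, chroma):
    samples = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
    return bit_depth * samples

print(avg_bits_per_pixel(8, "4:4:4"))   # 24.0
print(avg_bits_per_pixel(8, "4:2:0"))   # 12.0 -- half the data per frame
```

This halving is how early "HDMI 2.0" TVs managed 4Kp60 within HDMI 1.4-class bandwidth, at the cost of color resolution.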

That HDMI has only recently caught up to DisplayPort 1.2 in bandwidth at a time when DisplayPort 1.3 is right around the corner is one of those consistent oddities in how the two standards are developed, but nonetheless this is important for RTG. HDMI is not only the outright standard for TVs, but a de facto standard for PC monitors as well; while you can find DisplayPort on many monitors, you would be hard pressed not to find HDMI. So as 4K monitors become increasingly cheap – and likely start dropping DisplayPort in the process – supporting HDMI 2.0 will be just as important for RTG on monitors as it is on TVs.

Unfortunately for RTG, they're playing a bit of catch-up here, as the HDMI 2.0 standard is already more than 2 years old and has been supported by NVIDIA since the Maxwell 2 architecture in 2014. Though they didn't go into detail, I was told that AMD/RTG's plans for HDMI 2.0 support were impacted by the cancellation of the company's 20nm planar GPUs, and as a result HDMI 2.0 support was pushed back to the company's 2016 GPUs. The one bit of good news here for RTG is that HDMI 2.0 is still a bit of a mess – not all HDMI 2.0 TVs actually support 4Kp60 with full chroma sampling (4:4:4) – but that is quickly changing.

FreeSync Over HDMI to Hit Retail In Q1’16
99 Comments

  • BurntMyBacon - Thursday, December 10, 2015 - link

    @Samus: "GCN scales well, but not for performance. Fury is their future."

    Fury is GCN. Their issue isn't GCN as GCN is actually a relatively loose specification that allows for plenty of architectural leeway in its implementation. Also note that GCN 1.0, GCN 1.1, and GCN 1.2 are significantly different from each other and should not be considered a single architecture as you seem to take it.

    ATi's current issue is the fact that they are putting out a third generation of products on the same manufacturing node. My guess is that many of the architectural improvements they were working on for the 20nm chips can't effectively be brought to the 28nm node. You see a bunch of rebadges because they decided they would rather wait for the next node than spend cash that they probably didn't have on new top to bottom architecture updates to a node that they can't wait to get off of and probably won't recoup the expense for. They opted to update the high end where the expenses could be better covered and they needed a test vehicle for HBM anyways.

    On the other hand, nVidia, with deeper pockets and greater marketshare decided that it was worth the cost. Though, even they took their sweet time in bringing the maxwell 2.0 chips down to the lower end.
  • slickr - Friday, December 11, 2015 - link

    Nvidia's products are based on pretty much slight improvements over their 600 series graphics architecture. They haven't had any significant architectural improvements since basically their 500 series. This is because both companies have been stuck on 28nm for the past 5 years!

    Maxwell is pretty much a small update in the same technology that Nvidia has already been using before since the 600 series.
  • Budburnicus - Wednesday, November 16, 2016 - link

    That is TOTALLY INCORRECT! Maxwell is a MASSIVE departure from Kepler! Not only does it achieve FAR higher clock speeds, but it does more with less!

    A GTX 780 Ti is effectively SLOWER than a GTX 970, even at 1080p where the extra memory makes no difference, and where the 780 Ti has 2880 CUDA cores, the 970 has just 1664!

    There are FAR too many differences to list, and that is WHY Kepler has not been seeing ANY performance gains with newer drivers! Because the programming for Kepler is totally different from Maxwell or Pascal!

    Also, now that Polaris and Pascal is released: LMFAO! The RX 480 cannot even GET CLOSE to the 1503 MHZ I have my 980 Ti running on air! And if you DO get it to 1400 it throws insane amounts of heat!

    GCN is largely THE SAME ARCHITECTURE IT HAS ALWAYS BEEN! It has seen incremental updates such as memory compression, better branch prediction, and stuff like the Primitive Discard Accelerator - but otherwise is TOTALLY unchanged on a functional level!

    Kind of like how Pascal is an incremental update to Maxwell, adding further memory compression, Simultaneous Multi Projection, better branch prediction and so on. Simultaneous Multi Projection adds an extra 40% to 60% performance for VR and surround monitor setups, when Maxwell - particularly the GTX 980 and 980 Ti are already FAR better at VR than even the Fury X! Don't take my word for it, go check the Steam Benchmark results on LTT forums! https://linustechtips.com/main/topic/558807-post-y...

    See UNLIKE Kepler to Maxwell, Pascal is BASICALLY just Maxwell on Speed, a higher clocked Maxwell chip! And it sucks FAR less power, creates FAR less heat and provides FAR more performance, as the RX 480 is basically tied with a GTX 970 running 1400 core! And FAR behind a 980 at the same or higher!

    Meanwhile the GTX 1060 beats it with ease, while the GTX 1070 (which at even 2100 MHZ is just a LITTLE less powerful than the 980 Ti at 1500 MHZ) 1080, and Pascal Titan SHIT ALL OVER THE FURY X!

    Hell the GTX 980 regular at 1500 MHZ kicks the ASS of the Fury X in almost every game at almost every resolution!

    Oh and Maxwell as well as Pascal are both HDR capable.
  • Furzeydown - Tuesday, December 8, 2015 - link

    Both companies have been rather limited by the same manufacturing node for the past four years as well though. It limits things to tweaks, efficiency improvements, and minor features. As far as performance goes, both companies are neck and neck with monster 600mm² dies.
  • ImSpartacus - Tuesday, December 8, 2015 - link

    But Nvidia's monster die is generally considered superior to amd's monster die despite using older memory tech. Furthermore, amd's monster die only maintains efficiency because it's being kept very chilly with a special water cooler.

    It's not neck and neck.
  • Asomething - Wednesday, December 9, 2015 - link

    That is down to transistor density, amd are putting more into the same space which drives minimum requirements for the cooler up.
  • Dirk_Funk - Wednesday, December 9, 2015 - link

    Neck and neck as in there's hardly a difference in how many frames are rendered per second. It's not like either one has any big advantages over the other, and they are made almost exclusively for gaming so if fps is the same then yes it is neck and neck as far as most people are concerned.
  • OrphanageExplosion - Thursday, December 10, 2015 - link

    Not at 1080p and 1440p they aren't...
  • RussianSensation - Wednesday, December 23, 2015 - link

    The reference cards are very close.

    1080p - 980Ti leads by 6.5%
    1440p - 980Ti leads by just 1.1%
    4k - Fury X leads by 4.5%

    Neither card is fast enough for 4K, while both are a waste of money for 1080p without using VSR/DSR/Super-sampling. That leaves 1440p resolution where they are practically tied.
    http://www.techpowerup.com/reviews/ASUS/R9_380X_St...

    The only reason 980Ti is better is due to its overclocking headroom. As far as reference performance goes, they are practically neck and neck as users above noted.
  • zodiacsoulmate - Tuesday, December 8, 2015 - link

    what do u mean nvidia is moving to a gcn-like architecture?
