Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumer. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK Hynix is launching today, or at least the chips that go into these modules.

DDR5 is the next stage of platform memory for use in the majority of major compute platforms. The specification (released in July 2020) brings the main voltage down from 1.2 V to 1.1 V, increases the maximum silicon die density by a factor of 4, doubles the maximum data rate, doubles the burst length, and doubles the number of bank groups. Simply put, the JEDEC DDR5 specification allows for a 128 GB unbuffered module running at DDR5-6400. RDIMMs and LRDIMMs should be able to go much higher, power permitting.
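As a quick sanity check on where that 128 GB figure comes from, here is a minimal sketch of the arithmetic, assuming a conventional dual-rank non-ECC UDIMM built from x8 devices (eight chips per rank to cover 64 bits of data width); the device organization is an assumption for illustration, not something defined by the announcement.

```python
# Sketch: how a 64 Gbit max die density gets to a 128 GB unbuffered module.
# Assumed organization: dual-rank, non-ECC UDIMM using x8 devices,
# i.e. eight chips per rank across the module's 64 bits of data width.

die_density_gbit = 64              # max per-die density in the DDR5 spec
gb_per_die = die_density_gbit / 8  # 64 Gbit = 8 GB per die
chips_per_rank = 64 // 8           # 64-bit module width built from x8 devices
ranks = 2                          # dual-rank module

print(gb_per_die * chips_per_rank * ranks)  # 128.0 GB
```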

JEDEC DDR Generations

                         DDR5          DDR4       DDR3       LPDDR5
Max Die Density          64 Gbit       16 Gbit    4 Gbit     32 Gbit
Max UDIMM Size (DSDR)    128 GB        32 GB      8 GB       N/A
Max Data Rate            6.4 Gbps      3.2 Gbps   1.6 Gbps   6.4 Gbps
Channels per Module      2             1          1          1
Total Width (Non-ECC)    64-bit        64-bit     64-bit     16-bit
                         (2x 32-bit)
Banks (per Group)        4             4          8          16
Bank Groups              8/4           4/2        1          4
Burst Length             BL16          BL8        BL8        BL16
Voltage (Vdd)            1.1 V         1.2 V      1.5 V      1.05 V
Vddq                     1.1 V         1.2 V      1.5 V      0.5 V

There are four angles in the world of DDR that everyone involved in the specification wants to iterate on. Capacity is the obvious one, but memory bandwidth also plays a key role in the performance scaling of common multi-core workloads on the high core-count servers we are now seeing. The other two are power (an obvious goal) and latency, another key metric for performance.

With DDR5, one of the major changes to help drive this is the way the memory is seen by the system. Rather than presenting a single 64-bit data channel per module, DDR5 presents two independent 32-bit data channels per module (or two 40-bit channels with ECC). The burst length has doubled, meaning that each 32-bit channel still delivers 64 bytes per operation, but can do so in a more interleaved fashion. That means the standard 'two 64-bit channel' DDR4 system morphs into a 'quad 32-bit channel' DDR5 arrangement: each memory stick still provides a total of 64 bits of data width, but in a more controllable way. This also makes doubling the data rate, a key element in increasing peak bandwidth, easier, and it is paired with a finer-grained bank refresh feature that allows some banks to be refreshed while others remain in use, reducing effective latency.
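To illustrate why a narrower channel with a longer burst still delivers the same payload, here is a minimal sketch of the arithmetic; the 64-byte result conveniently matches a typical CPU cache line.

```python
# Sketch: bytes delivered per burst for a DDR4 channel vs a DDR5 sub-channel.
def bytes_per_burst(channel_width_bits: int, burst_length: int) -> int:
    """One burst transfers burst_length beats, each channel_width_bits wide."""
    return channel_width_bits * burst_length // 8

ddr4 = bytes_per_burst(channel_width_bits=64, burst_length=8)   # one 64-bit channel, BL8
ddr5 = bytes_per_burst(channel_width_bits=32, burst_length=16)  # one 32-bit sub-channel, BL16
print(ddr4, ddr5)  # 64 64 -> both deliver a 64-byte payload per burst
```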

Voltage regulation is also being moved from the motherboard to the memory module, allowing the module to regulate its own needs. We already saw DDR4 adopt per-chip Vdroop control, but this takes the idea a stage further for tighter power control and management. It also puts power management in the hands of the module vendor rather than the motherboard manufacturer, allowing the module maker to size up what is required for faster memory – it will be interesting to see how different firmware implementations cope with the non-JEDEC-standard gaming memory that will undoubtedly go above specification.

SK Hynix’s announcement today is that it is ready to start shipping DDR5 ECC memory to module manufacturers – specifically 16 gigabit dies built on its 1Ynm process that support DDR5-4800 to DDR5-5600 at 1.1 volts. With the right packaging technology (such as 3D TSV), SK Hynix says that partners can build 256 GB LRDIMMs. Additional binning of the chips for better-than-JEDEC speeds will have to be done by the module manufacturers themselves. SK Hynix also appears to have its own modules, specifically 32 GB and 64 GB RDIMMs at DDR5-4800, and has previously promised to offer memory up to DDR5-8400.

SK Hynix has not provided information on the sub-timings of these modules. The JEDEC specification defines three different speed bins for DDR5-4800:

  • DDR5-4800A: 34-34-34
  • DDR5-4800B: 40-40-40
  • DDR5-4800C: 42-42-42

It is unclear which of these SK Hynix is using. The module is labeled '4800E', however that appears to simply be part of the module naming, as the JEDEC specification doesn't go beyond a CL value of 42 for DDR5-4800.
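For a rough sense of what those CAS latencies mean in absolute terms, first-word latency in nanoseconds is the CL value divided by the I/O clock, which runs at half the data rate. A minimal sketch, using JEDEC DDR4-3200 CL22 as a reference point (our choice of comparison, not part of the announcement):

```python
# Sketch: convert CAS latency (clock cycles) into nanoseconds.
# The memory clock runs at half the data rate (two transfers per clock),
# so latency_ns = CL / (data_rate / 2) * 1000.
def cas_latency_ns(data_rate_mtps: int, cl: int) -> float:
    return cl / (data_rate_mtps / 2) * 1000

for name, rate, cl in [("DDR5-4800A", 4800, 34),
                       ("DDR5-4800B", 4800, 40),
                       ("DDR5-4800C", 4800, 42),
                       ("DDR4-3200 CL22", 3200, 22)]:  # DDR4 bin chosen for comparison
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# DDR5-4800A: 14.17 ns, 4800B: 16.67 ns, 4800C: 17.50 ns, DDR4-3200 CL22: 13.75 ns
```

On those assumptions, even the fastest JEDEC DDR5-4800 bin is slightly slower than JEDEC DDR4-3200 in absolute first-word latency, which is why sub-timings will matter once consumer modules appear.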

On bandwidth, other memory manufacturers have quoted that against the theoretical 38.4 GB/s each DDR5-4800 module can bring, they are already seeing effective numbers in the 32 GB/s range. This is above the effective 20-25 GB/s per channel that we see on DDR4-3200 today. Other memory manufacturers have also announced that they have been sampling DDR5 with customers since the beginning of the year.
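For reference, the 38.4 GB/s theoretical figure is simply the data rate multiplied by the module's 64-bit width; a minimal sketch of that calculation, with DDR4-3200 for comparison:

```python
# Sketch: theoretical peak bandwidth of a 64-bit wide module.
def peak_bandwidth_gbps(data_rate_mtps: int, bus_width_bits: int = 64) -> float:
    """Peak GB/s = transfers per second * bytes per transfer."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbps(4800))  # DDR5-4800: 38.4 GB/s (across its two 32-bit channels)
print(peak_bandwidth_gbps(3200))  # DDR4-3200: 25.6 GB/s
```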

As part of the announcement, it was interesting to see Intel named as one of the lead partners for these modules. Intel has committed to enabling DDR5 on its Sapphire Rapids Xeon processor platform, due for initial launch in late 2021/2022. AMD was not mentioned in the announcement, and neither were any Arm partners.

SK Hynix quotes that DDR5 is expected to make up 10% of the global memory market in 2022, increasing to 43% in 2024. The intersection point for consumer platforms is somewhat blurred at this stage, as we are probably only half-way through (or less than half-way through) the DDR4 cycle. Traditionally we expect a cost crossover between old and new technology when they are equal in market share, however the additional cost of the voltage regulation that DDR5 requires is likely to drive up module prices – scaling from standard power delivery on JEDEC modules up to beefier solutions on overclocked modules. It should, however, make motherboards cheaper in that regard.

Source: SK hynix

Comments

  • Infy2 - Tuesday, October 6, 2020 - link

    I wonder if effective memory latency will go up or down compared to DDR4. Aside from theoretical memory bandwidth tests, real-world workloads (such as games) seem to benefit more from lower memory latency than higher bandwidth.
  • Ian Cutress - Tuesday, October 6, 2020 - link

    These are JEDEC specification server modules. So sub-timings are higher than a consumer binned chip. We won't know what they'll get until the consumer module makers start playing around with binning. JEDEC is conservative to help with power, manufacturing, and compliance. The slower you go, the easier it is to stay in the lines :)
  • PeachNCream - Tuesday, October 6, 2020 - link

    Bandwidth improvements alone are going to be very helpful on the consumer side for people that rely on system memory for a computer's iGPU, but latency improvements are also important. Lots of iGPUs have the raw computational power to handle fairly demanding graphical needs, but have always been held back by the lack of bandwidth and responsiveness of RAM. I'm excited by the potential DDR5 will offer in that regard.
  • poorAPUguy - Tuesday, October 6, 2020 - link

    The article doesn't seem to paint a very hopeful picture in that regard, however... It was said that for DDR4 3200MHz vs DDR5 3200MHz, the latter would have 30% more bandwidth simply from the new design. So why is it that now DDR5 4800MHz should have only a 30% bandwidth uplift from DDR4 3200MHz when it has an additional 50% clock speed?
  • benedict - Tuesday, October 6, 2020 - link

    You don't understand how memory works. There is no such thing as DDR4 3200MHz.
    DDR4 3200 works at 200MHz. DDR5 4800 works at 150MHz. The number signifies maximum theoretical bandwidth which is rarely used in real-time applications.
  • poorAPUguy - Tuesday, October 6, 2020 - link

    So you're saying that this generation we should expect 30% real world bandwidth improvements and not the 2x that has been plastered everywhere. Got it. Thanks.
  • Wilco1 - Sunday, October 18, 2020 - link

    No. Because it is more efficient, DDR5 gives 35% higher achieved bandwidth at the same data rate as DDR4. According to https://cdn.mos.cms.futurecdn.net/XixTkXjLK2pmiu47... DDR5-4800 has 1.87 times the bandwidth of DDR4-3200. So we can expect huge improvements in performance, particularly in servers with lots of cores.
  • proflogic - Wednesday, October 21, 2020 - link

    Heh, DDR4-3200 operates at 1600 MHz, not 200 MHz. You would have to go back a good number of years to be back at 200 MHz. The 3200 here represents 3200 MT/s (i.e. a single pin transfers 3200 Mbps), and since this is dual data rate, with 2 bits per clock, the clock frequency is half.

    There's a different marketing nomenclature that sometimes gives total bandwidth figures, but DDR4 really is doing GHz frequencies on the motherboard. And now, for DDR5, we're looking at single-ended buses operating at up to 4.2 GHz. Those must be some super tight layout guidelines...
  • TeXWiller - Tuesday, October 6, 2020 - link

    The future legendary DDR5 frame-rate smoothness on multi-core systems shall be a key selling point for the module manufacturers. The more mellow RGB lighting effects of those modules will portray the quality of life, free from the stresses of memory walls and unnecessary queuing while allowing for more relaxed, unconstrained experience and the free breathing of a plurality of memory controllers. I'd expect nothing less. ;)
  • godrilla - Wednesday, October 7, 2020 - link

    Currently we are at a plateau with DDR4 ram and the Gskill neo 3800 cl14 is the best money can buy.
