SK Hynix today announced that it has begun sampling its first-ever PCIe 4.0 enterprise SSDs in the form of the new 96-layer 3D NAND U.2/U.3 form-factor PE8010 and PE8030 eSSDs, and also announced plans to sample the new PE8111 EDSFF E1.L SSD, based on its 128-layer "4D NAND" flash modules, later in the year.

We had expected the new PE8111 eSSD for some time now, as we reported on SK Hynix's plans to introduce such a product last November. The biggest change here is the use of new 128-layer 3D NAND modules, which the company dubs "4D NAND" on account of a denser cell structure design and higher per-die I/O speeds.


16TB Enterprise EDSFF E1.L SSD

The PE8111 still retains a PCIe 3.0 interface, and its performance characteristics correspondingly plateau at 3400MB/s sequential reads and 3000MB/s sequential writes, whilst supporting random reads and writes of up to 700K and 100K IOPS respectively. Thanks to the long EDSFF E1.L form factor, the unit's storage capacity comes in at 16TB, and SK Hynix reports that it is working on a 32TB solution for the future.

The new PE8010 and PE8030 come in a U.2/U.3 form factor and are the company's first SSDs to support PCIe 4.0. These SSDs still rely on the company's 96-layer NAND modules, but use an in-house controller chip. Bandwidth here is naturally higher, reaching up to 6500MB/s sequential reads and 3700MB/s sequential writes, with random performance falling in at 1100K IOPS for reads and 320K IOPS for writes.

Power consumption for the new U.2/U.3 drives is actually extremely competitive given the jump to PCIe 4.0, rising only to 17W compared to the company's previous-generation PCIe 3.0 products at 14W. This is likely attributable to the new-generation custom controller, which might be better optimised for low power than some of the early third-party PCIe 4.0 controllers out there.

The PE8010 and PE8030 are sampling with customers right now, while the PE8111 is planned to sample in the second half of the year.

Source: SK Hynix

Comments

  • sa666666 - Thursday, April 9, 2020 - link

    It's obvious you have a huge chip on your shoulder. Looks like another HStewart in the making.
  • schujj07 - Thursday, April 9, 2020 - link

    There is absolutely no way that SSDs can put competitive pressure on DRAM. I stated earlier that even Optane, the fastest SSD technology and one that can be used as NVDIMMs, is still only usable as a caching layer between RAM and normal storage when deployed as an NVDIMM. Even at 60 DWPD endurance, NVDIMM Optane would burn through its R/W cycles very quickly if it were the RAM in a system. Yes, the greater the amount of RAM the longer it would take, but that is only slowing the inevitable. Not to mention that if it were your only RAM it would TANK your performance. Think about putting SDRAM into a modern system and see what would happen to your performance. Bandwidth is important, but latency is the biggest issue and SDRAM has far lower latency than Optane.

    Let's get into your PCIe 4/5 vs DDR4/5 argument. The fastest SSDs will typically use an x4 bus; some, like the Samsung PM1725a, use PCIe 3 x8 in a HHHL card for enterprise usage, however that is still only the bandwidth of PCIe 4 x4. PCIe 4 x4 has 8GB/s max theoretical bandwidth and PCIe 5 will double that to 16GB/s. Even a (so far non-existent) PCIe 5 x4 SSD has lower max bandwidth than the slowest DDR4 RAM, DDR4-2133: 16GB/s vs 17GB/s. Modern CPUs all use dual-channel RAM, so that makes the minimum bandwidth for the system 34GB/s, and if we figure that most computers now use at least dual-channel 2666MHz, that makes it 42.5GB/s. Again, these are sheer bandwidth numbers. Also, all those 5.5GB/s numbers you see on SSDs are sequential transfers at a queue depth of 64 or 128.
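
    For illustration, the theoretical numbers quoted above work out roughly as follows. This is a minimal sketch assuming 128b/130b PCIe encoding and a 64-bit (8-byte) DDR4 channel; it's an approximation, not a benchmark:

```python
# Rough theoretical bandwidth figures behind the comparison above.
# Assumptions: PCIe 3.0/4.0/5.0 use 128b/130b encoding; one DDR4
# channel is 64 bits (8 bytes) wide.

def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
    """One-direction theoretical PCIe bandwidth in GB/s."""
    return gt_per_s * lanes * encoding / 8

def ddr_gbps(mt_per_s, channels=1, bus_bytes=8):
    """Theoretical DDR bandwidth in GB/s."""
    return mt_per_s * bus_bytes * channels / 1000

print(f"PCIe 3.0 x8 : {pcie_gbps(8, 8):.1f} GB/s")     # ~7.9 GB/s
print(f"PCIe 4.0 x4 : {pcie_gbps(16, 4):.1f} GB/s")    # ~7.9 GB/s (quoted as ~8)
print(f"PCIe 5.0 x4 : {pcie_gbps(32, 4):.1f} GB/s")    # ~15.8 GB/s (quoted as ~16)
print(f"DDR4-2133 x1: {ddr_gbps(2133):.1f} GB/s")      # ~17.1 GB/s
print(f"DDR4-2133 x2: {ddr_gbps(2133, 2):.1f} GB/s")   # ~34.1 GB/s
print(f"DDR4-2666 x2: {ddr_gbps(2666, 2):.1f} GB/s")   # ~42.7 GB/s
```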

    I did read and think about your "if SSD performance keeps improving like this, it can provide competitive pressure to DRAM which is still controlled by a price fixing triad" comment. I dismissed the price-fixing triad part and focused on the SSD performance and competitive pressure part. Your statement about it being able to provide competitive pressure is so blatantly wrong it is comical. I already went over the numbers earlier in the post so I won't rehash them here. With the PS5/XBX we will see better loading times from the faster storage, and that's about it. If the storage were running at the speed of RAM, then instead of a 0.1 sec loading time it would be 0.0001 sec. However, if we were to replace the RAM with something equal to SSD performance, we would see the PS4/XBX perform like the PS2/Xbox. The only applications that are sped up by faster storage are applications that are already storage-speed limited, i.e. loading times in games. The reason is that the data is stored on the drive, and having a faster drive means the data gets sent to the CPU faster. I know this advantage very well. When the data center I manage upgraded from an 8Gb Fibre Channel SAN with 10k spinning disks to a dual-port 25GbE iSCSI software-defined storage (SDS) array with NVMe disks, loading times for SAP HANA DBs decreased dramatically. On the old SAN it would take 20 minutes to start a 300GB RAM DB; FYI, HANA is an in-memory DB, and for every GB of disk needed for the DB you need the same amount of RAM. Now with the SSD-backed SDS array that same DB loads in 2 minutes.

    You berate me on the network claim, but your entire post on that was nonsensical. I assumed you were talking about a home network, but it was hard to follow the post. "have u heard of this thing called "network", is it possible that some1 might not care to use RAM or SSD that is faster than his network (in bw or latency) but does care not use storage that is slower than his network." The only thing someone can figure from that is that you are talking about a home network and not a SAN. As I said, even a slow HDD will be plenty for a 1GbE network. Say you are a video editor: are you going to do the editing over your network onto some NAS, or will you be editing the file locally? The answer is locally, and then transferring the file to the NAS for long-term storage/backup. Since most homes have a 1GbE network and the transfers will be sequential in nature, HDD storage will be fast enough for that network speed. You are so severely network limited at 1Gb that using an SSD on a 1GbE network is just a stupid waste of money. If you are lucky enough to have the money for a 5GbE or 10GbE network, then yes, an SSD would work, or having a RAID 50 array would make sense.
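
    A quick back-of-the-envelope sketch of why a HDD is enough for 1GbE. The ~150 MB/s HDD figure below is an assumed ballpark sequential rate for a typical modern drive, not a number from the discussion above:

```python
# Theoretical network line rates versus ballpark sequential drive throughput.
# Assumed figure: ~150 MB/s sequential throughput for a modern HDD.

def link_mb_per_s(gbit_per_s):
    """Theoretical network throughput in MB/s (ignoring protocol overhead)."""
    return gbit_per_s * 1000 / 8

for name, gbit in [("1GbE", 1), ("5GbE", 5), ("10GbE", 10), ("25GbE", 25)]:
    print(f"{name:>5}: {link_mb_per_s(gbit):6.0f} MB/s line rate")

# 1GbE tops out at ~125 MB/s, below a single HDD's ~150 MB/s sequential rate,
# so the drive isn't the bottleneck; 5GbE (~625 MB/s) and up is where SSDs or
# striped arrays start to matter for network transfers.
```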

    I think at this point I have covered almost all of your incorrect talking points. FYI, just because you say "he is dead wrong, I am right" doesn't make you right. You might not have liked my reply about how much slower SSDs are than DRAM, but it was a counter-argument. I didn't think I needed to provide you with hard numbers; however, from your comment "just an education reply is what he gave. and w/- facts BTW" it is obvious I should have. I figured the performance difference between DRAM and SSDs was common knowledge. However, I should have known that common knowledge isn't very common.

    In case you didn't get it from the middle part of my post, I actually do know what I am talking about as this is my profession. Please think before you post something so utterly ridiculous as "if SSD performance keeps improving like this, it can provide competitive pressure to DRAM" and then dismiss a legitimate counter post. FYI there are tech experts who visit this site to learn about the new things coming out and see reviews of products to help us make our decisions for the stuff we manage.
  • azfacea - Friday, April 10, 2020 - link

    you just said the same thing as before only using more words. "dram is faster therefore nand cant compete" completely missing the point that SSDs are becoming fast enough for many applications. and missing the fact that SSDs have been improving faster than dram in all of size, price, bandwidth and latency in the past 5 years. I'll wait 4 your mea culpa in a few years
  • schujj07 - Friday, April 10, 2020 - link

    Wow, you are such a hypocrite. You complain that I post without facts, then you complain when I list in-depth facts. Your belief in SSDs is so ingrained in your being that there is NOTHING anyone here could say, write, prove, etc... for you to realize you are wrong. For us IT professionals, talking to you is like talking to a wall, or just a PEBKAC error.
  • rhysiam - Saturday, April 11, 2020 - link

    @azfacea - the main hole in your argument is latency. When you say "SSDs are improving faster than DRAM in... latency" you are **sort of** correct. Absolutely, more sophisticated controllers as well as the NVMe protocol and PCIe interfaces have demonstrably reduced the latency of high end SSDs, particularly when they are subjected to intensive workloads. What you are referencing here is the overall latency - or "response time" of SSDs, which includes interface, protocol and controller overheads. At that level, you are absolutely correct when you say latency is improving (on high end/enterprise devices).

    However, here's the problem: As Kristian Vättö pointed out to you above, the fundamental design of NAND flash means that it takes substantial time to read, program and (critically!) erase the cell. This latency, inherent to NAND flash cells, hasn't really changed. In fact if anything, the move from SLC -> MLC -> TLC -> QLC, which is a significant driver in the capacity and price improvements you are touting, has seen a substantial increase in latency at the NAND flash layer. Even ignoring this, according to Kristian's numbers above, we're still at the level of 10s of microseconds for NAND vs 10s of nanoseconds for DRAM. That's in the region of 1000 times higher latency.

    Here are two articles by Kristian Vättö which explore how NAND flash works (planar and 3D-NAND) and are useful in understanding the fundamental physics at play here. Kristian used to be the SSD editor here at Anandtech and now works for Samsung.
    - https://www.anandtech.com/show/5067/understanding-...
    - https://www.anandtech.com/show/8216/samsung-ssd-85...
    They're old articles, but that doesn't matter. The fundamental design hasn't changed and (obviously!) fundamental physics don't change.

    3D-XPoint marketing hype claimed it would approach DRAM latency, but it's still nowhere near responsive enough to challenge DRAM, even ~3 years (?) after release. Maybe RE-RAM or MRAM or some other technology comes along and challenges the DRAM "triad", but that absolutely will require new technology.

    We're not going to get anywhere near DRAM latency by iterating on existing NAND flash technology. Ryzen CPUs have shown us that we can get measurable system-wide performance gains from dropping particular RAM latency settings from 18, to 16, to 14 clock cycles. Remember, NAND flash at the cell level is on the order of 1000 times slower. It's simply not fit for purpose as a challenger to DRAM.
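
    To put those cycle counts in perspective, here is a small sketch converting CAS latency to nanoseconds and comparing a rough end-to-end DRAM access time against a ballpark NAND page-read time. DDR4-3200 and the ~75 ns / ~50 µs figures are illustrative assumptions, not numbers from the posts above:

```python
# DDR4 CAS latency in nanoseconds, plus a rough DRAM-vs-NAND latency ratio.
# DDR4-3200, ~75 ns full DRAM access, and ~50 us NAND page read are all
# assumed ballpark figures for illustration.

def cas_latency_ns(cl_cycles, mt_per_s):
    """CAS latency in ns; the memory clock runs at half the transfer rate."""
    return cl_cycles / ((mt_per_s / 2) / 1000)

for cl in (18, 16, 14):
    print(f"DDR4-3200 CL{cl}: {cas_latency_ns(cl, 3200):5.2f} ns CAS latency")
# -> 11.25 ns, 10.00 ns, 8.75 ns: the range the Ryzen tuning example refers to.

dram_access_ns = 75        # rough end-to-end random access latency for DDR4
nand_read_ns = 50_000      # rough TLC NAND page-read time (~50 us)
print(f"NAND vs DRAM: ~{nand_read_ns / dram_access_ns:.0f}x higher latency")
# -> ~667x, i.e. the "order of 1000 times" gap discussed above.
```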

    TL;DR: NAND flash cells, because of physics, have vastly higher latency than DRAM. While it's good to be optimistic and excited about the future of tech, NAND latency simply isn't going to get anywhere near DRAM through gradual iteration, which seems to be what you are suggesting.
  • schujj07 - Saturday, April 11, 2020 - link

    Thank you @rhysiam for going even further in explaining this. If I remember correctly MRAM is already as fast as DRAM but its capacity is so much smaller that it isn't feasible for general computing. I do believe it is used in satellites though.

    @azfacea if you still don't believe us build a computer with the fastest PCIe 4 SSD you can find and only 1GB RAM. Install Win 10 onto it and try using the computer. You will see the effect of much higher latency and the protocol overhead first hand. That computer will be absolutely impossible to use, especially if you try having multiple browser tabs open. The lack of RAM is the problem. I have a laptop from 2013 with 4GB RAM and Win 10 on it with a Samsung 850 Evo SSD. Even having 2GB "free" on bootup this laptop is slow. It is having to disk swap so much that it affects the performance quite a lot. Remember that is with 4x more RAM than I am saying you try.
  • rhysiam - Saturday, April 11, 2020 - link

    I wonder if we'll get a response?
  • schujj07 - Monday, April 13, 2020 - link

    Doesn't look like it. I think @azfacea has crawled back under his/her bridge to sulk at the beating he took from experts.
  • rhysiam - Tuesday, April 14, 2020 - link

    I don't want to beat anyone! I'd just like to debate things based on a healthy mixture of knowledge/understanding as well as curiosity/humility.

    Never mind. At least if we see similar perspectives being touted in future we can just point him/her back to this discussion.
  • back2future - Saturday, April 11, 2020 - link

    If there is a need for a very large amount of fast swap space, then today's storage bandwidth and latencies can almost compete with DRAM from a generation before (or in some cases even less than a decade old).
    With DDR4 there is enough bandwidth for common applications, and on widespread mainboards most users won't experience limiting performance in daily usage patterns.
    If DDR4 (DDR3?) stays that much cheaper than the later DDR5, one would need ~32 32GB RAM modules (each at today's price level of around $0.50 per GB) at a cost of ~$500 for 1TB.
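
    For what it's worth, that cost estimate works out roughly as follows; the $0.50/GB street price is the comment's assumption, not a quoted figure:

```python
# Sanity check of the DRAM-for-swap cost estimate above.
price_per_gb = 0.50      # USD per GB, assumed street price
module_size_gb = 32
target_gb = 1024         # 1TB

modules_needed = target_gb / module_size_gb
total_cost = target_gb * price_per_gb
print(f"{modules_needed:.0f} x {module_size_gb}GB modules, ~${total_cost:.0f} for 1TB")
# -> 32 x 32GB modules, ~$512 for 1TB
```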
