JMicron JMF667H Reference Design (128GB & 256GB) Review
by Kristian Vättö on May 29, 2014 9:00 AM EST
Performance Consistency
Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
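As a rough illustration of the workload (not the actual test harness, which runs at QD=32 against the raw device), a single-threaded Python sketch of 4KB random writes with incompressible data might look like this; the file path, span, and duration are placeholders:

```python
import os
import time

def random_write_iops(path, span_bytes, seconds, block=4096):
    """Issue 4KiB random writes across [0, span_bytes) on an existing
    file and record completed IOs per one-second interval. This is a
    simplified, single-threaded stand-in for the QD=32 test."""
    results = []
    with open(path, "r+b") as f:
        end = time.monotonic() + seconds
        while time.monotonic() < end:
            count = 0
            tick = time.monotonic() + 1.0
            while time.monotonic() < tick and time.monotonic() < end:
                # Pick a random block-aligned offset within the span.
                off = (int.from_bytes(os.urandom(8), "big")
                       % (span_bytes // block)) * block
                f.seek(off)
                f.write(os.urandom(block))  # incompressible payload
                count += 1
            f.flush()
            os.fsync(f.fileno())
            results.append(count)  # instantaneous IOPS for this second
    return results
```

Run against a real drive, the per-second counts in `results` are exactly the data points plotted in the consistency graphs below.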
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
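The effect of limiting the LBA range can be quantified with simple arithmetic. A sketch, assuming roughly 7% factory over-provisioning (the usual GB vs GiB gap; the exact ratio varies by drive) and the workload restricted to 75% of the user LBAs:

```python
def effective_op(physical_ratio: float, used_fraction: float) -> float:
    """Spare area relative to the actively written LBA range.

    physical_ratio: raw NAND capacity / user capacity (~1.07 for a
                    typical 7% factory-OP drive -- an assumption here).
    used_fraction:  fraction of user LBAs the workload touches.
    """
    return physical_ratio / used_fraction - 1.0

# Restricting writes to 75% of the LBAs turns ~7% factory OP
# into roughly 43% effective spare area for the controller.
print(f"{effective_op(1.07, 0.75):.0%}")
```

This is why the "25% OP" runs below look so much healthier: the controller always has a large pool of pre-erasable blocks to draw from.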
Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
For a more detailed description of the test and an explanation of why performance consistency matters, read our original Intel SSD DC S3700 article.
[Graph: IO consistency over the full test duration, log scale. Drives: JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550. Views: Default / 25% OP]
Compared to the WD Black2, there has certainly been improvement in IO consistency. Looking at the model equipped with IMFT's NAND (similar to the Black2), the line at ~5,000 IOPS is now much thicker, meaning that more IOs complete in the 5K range instead of below it. Another big plus is that the IOPS no longer drops to zero, which the Black2 did constantly. The minimum performance is still on the order of a few hundred IOPS, which is not exactly great compared to many competitors, but it is a start.
The model with Toshiba NAND performs substantially better. Even the 128GB model, with the same 7% over-provisioning, keeps the IOPS at around 3,000 at its lowest. The NAND can certainly play a part in steady-state performance and IO consistency, as IMFT's 20nm 128Gbit NAND has twice the pages per block (512 vs 256) compared to Toshiba's A19nm 64Gbit NAND. More pages per block means the erase time for each block is longer, so the controller may have to wait longer for an empty block. Remember that the drives are effectively doing read-modify-write during steady-state, and the longer it takes to erase a block, the lower the performance will be.
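The block-size difference is easy to put into numbers. A sketch, assuming 16KB pages for both parts (the page size is my assumption; the article only states the pages-per-block counts):

```python
def block_bytes(pages_per_block: int, page_bytes: int) -> int:
    """Physical erase-block size: pages per block times page size."""
    return pages_per_block * page_bytes

PAGE = 16 * 1024  # 16KB page -- assumed, not stated in the article

imft = block_bytes(512, PAGE)     # IMFT 20nm 128Gbit die
toshiba = block_bytes(256, PAGE)  # Toshiba A19nm 64Gbit die

# The IMFT part erases twice as much data per block, so each erase
# takes longer and ties up the die for longer during read-modify-write.
print(imft // (1024 * 1024), "MiB vs", toshiba // (1024 * 1024), "MiB")
```

Whatever the real page sizes, the 2x pages-per-block gap alone means the IMFT configuration pays roughly double the erase latency per freed block.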
It is likely that the drops in performance are caused by this, but I wonder whether it is simply a matter of poor optimization or something else. Technically it should be possible to achieve a steady line regardless of the NAND, as long as the firmware is optimized for the specific NAND. Block erase times can be predicted fairly well, so with the right algorithms it should be possible to sacrifice a bit of maximum IOPS for more consistent performance. For a low-end client drive this is not that significant, as the drives are very unlikely to face such heavy workloads, but I think there is still room to improve IO consistency, especially in the model with IMFT NAND.
Fortunately there is one easy way to increase IO consistency: over-provisioning. IO consistency scales quite well with over-provisioning, although there are still dips in performance that I would not like to see. The 240GB Toshiba model in particular provides excellent consistency with 25% over-provisioning and beats, for instance, Intel's SSD 530 and SanDisk's Extreme II, which is something I certainly did not expect.
[Graph: steady-state zoom from t=1400s, log scale. Drives: JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550. Views: Default / 25% OP]
[Graph: steady-state zoom from t=1400s, linear scale. Drives: JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550. Views: Default / 25% OP]
TRIM Validation
To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM was functional.
And it is.
28 Comments
mflood - Thursday, May 29, 2014 - link
Looks like a great product for last year. This might help JMicron capture some of the OEM market - maybe even some budget enthusiast SSDs. What JMicron didn't do was swoop in with an M.2 x4 PCI Express controller. I'm done with 6Gbps SATA.
romrunning - Thursday, May 29, 2014 - link
Ahhhh.... JMicron - like a phoenix from the ashes. Fool me once, shame on you. Fool me twice... Oh well, someone has to be at the bottom of the barrel.
romrunning - Thursday, May 29, 2014 - link
I guess their selling point would be that they're cheaper than the Crucial M500. So if pricing is all-important to you (and why are you buying an SSD if price is all-important?), then they would be a contender.
The_Assimilator - Thursday, May 29, 2014 - link
Please stop insulting barrels.
moridinga - Thursday, May 29, 2014 - link
"The JMF667H is not perfect and there are a couple of things I would like to see. The first one is support for TCG Opal 2.0 and IEEE-1667 encryption standards."
IEEE-1667 is not an encryption standard. Perhaps you meant IEEE-1619.
Kristian Vättö - Thursday, May 29, 2014 - link
Nope. IEEE-1667 is for storage devices, at least the version I'm looking at.
http://www.ieee1667.com/download/informational-doc...
moridinga - Friday, May 30, 2014 - link
Yes, it is authentication and discovery for storage devices (particularly removable/portable ones). It says nothing about encryption.
Gigaplex - Saturday, May 31, 2014 - link
IEEE-1667 is a requirement for Microsoft's BitLocker eDrive, which is encryption.
KAlmquist - Thursday, May 29, 2014 - link
Contrary to the history given in the introduction, it was the Indilinx "Barefoot" controller that was the game changer. Once Indilinx entered the market, SSDs based on the JMicron controller could only be sold to consumers who didn't understand what they were buying. The name JMicron became toxic because if you wanted to advise somebody on buying an SSD, your first and last words would be, "whatever you do, don't buy an SSD with a JMicron controller."
In contrast, SandForce's first generation controller was an incremental improvement over what came before it. There's nothing wrong with that, but the term "game changer" doesn't apply.
HisDivineOrder - Thursday, May 29, 2014 - link
Seems like that early history explanation really misses the point that Indilinx was the first real competition to Intel back then. SandForce came along and was the first company to put Intel down. But Indilinx was the first company that convinced people they could live with non-Intel SSDs and not... be JMicron'ed.