The SandForce Roundup: Corsair, Kingston, Patriot, OCZ, OWC & MemoRight SSDs Compared
by Anand Lal Shimpi on August 11, 2011 12:01 AM EST

It's a depressing time to be covering the consumer SSD market. Although performance is higher than it has ever been, we're still seeing far too many compatibility and reliability issues from all of the major players. Intel used to be our safe haven, but even the extra-reliable Intel SSD 320 is plagued by a firmware bug that may crop up unexpectedly, limiting your drive's capacity to only 8MB. Then there are the infamous BSOD issues that affect SandForce SF-2281 drives like the OCZ Vertex 3 or the Corsair Force 3. Despite OCZ and SandForce believing they were on to the root cause of the problem several weeks ago, there are still reports of issues. I've even been able to duplicate the problem internally.
It's been three years since the introduction of the X25-M and SSD reliability is still an issue, but why?
For the consumer market it ultimately boils down to margins. If you're a typical SSD maker, you don't make the NAND and you don't make the controller; you buy both, so the bulk of your cost is set by your suppliers.
A 120GB SF-2281 SSD uses 128GB of 25nm MLC NAND. The NAND market is volatile, but a 64Gb (8GB) 25nm NAND die will set you back somewhere between $10 and $20. If we assume the best case scenario, that's $160 for the NAND alone (sixteen 64Gb dies at $10 each). Add another $25 for the controller and you're up to $185 without the cost of the other components, the PCB, the chassis, packaging and vendor overhead. Let's figure another 15% of the retail price for everything else needed for the drive, bringing us up to roughly $222. You can buy a 120GB SF-2281 drive in e-tail for $250, putting the gross profit on a single SF-2281 drive at $28, or 11%.
Even if we assume I'm off in my calculations and the profit margin is 20%, that's still not a lot to work with.
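For those who want to check the arithmetic, here's a quick Python sketch of the estimate above. All of the prices are the assumptions from the paragraph, not quoted vendor figures, and the 15% overhead is figured against the retail price:

# Back-of-the-envelope BOM sketch for a 120GB SF-2281 drive,
# using the assumed figures above (not quoted vendor pricing).
die_price  = 10.0   # $ per 64Gb (8GB) 25nm MLC die, best case
dies       = 16     # 16 x 8GB = 128GB of raw NAND
controller = 25.0   # assumed SF-2281 controller cost
retail     = 250.0  # typical e-tail price for a 120GB drive
overhead   = 0.15 * retail  # PCB, chassis, packaging, vendor overhead

nand   = die_price * dies            # $160
cost   = nand + controller + overhead  # ~$222.50
profit = retail - cost               # ~$27.50
print(f"cost ${cost:.2f}, profit ${profit:.2f} ({profit / retail:.0%})")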
Things aren't that much easier for the bigger companies either. Intel has the luxury of (sometimes) making both the controller and the NAND. But the amount of NAND you need for a single 120GB drive is huge. Let's do the math.
8GB IMFT 25nm MLC NAND die - 167mm²
The largest 25nm MLC NAND die you can get is an 8GB capacity. A single 8GB 25nm IMFT die measures 167mm². That's bigger than a dual-core Sandy Bridge die and 77% the size of a quad-core SNB. And that's just for 8GB.
A 120GB drive needs sixteen of these dies for a total area of 2672mm². Now we're at over 12 times the die area of a single quad-core Sandy Bridge CPU. And that's just for a single 120GB drive.
This 25nm NAND is built on 300mm wafers just like modern microprocessors, giving us 70,685mm² of area per wafer. Assuming you can use every single square mm of the wafer (which you can't), that works out to 26 120GB SSDs' worth of NAND per 300mm wafer. Wafer costs are somewhere in the four-digit range - let's assume $3000. That's $115 worth of NAND for a drive that will sell for $230, and we're not including controller costs, the other components on the PCB, the PCB itself, the drive enclosure, shipping or profit margins. Intel, as an example, likes to maintain gross margins north of 60%. For its consumer SSD business to not be a drain on the bottom line, sacrifices have to be made. While Intel's SSD validation is believed to be the best in the industry, it's likely not as good as it could be as a result of pure economics. So mistakes are made and bugs slip through.
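To make the wafer math concrete, here's a matching Python sketch using the same figures; the $3000 wafer cost is the assumption from above, not a published number:

import math

# Wafer-level NAND math from the figures above. This is idealized:
# it assumes every mm² of the wafer is usable, which real yields never allow.
die_area   = 167.0  # mm² per 8GB 25nm IMFT MLC die
dies       = 16     # 128GB of raw NAND per 120GB drive
wafer_cost = 3000.0 # assumed; "somewhere in the four-digit range"

wafer_area = math.pi * (300 / 2) ** 2  # ~70,686 mm² for a 300mm wafer
per_drive  = die_area * dies           # 2,672 mm² of silicon per drive
drives     = wafer_area // per_drive   # 26 drives' worth per wafer
print(f"{drives:.0f} drives/wafer, NAND ${wafer_cost / drives:.2f}/drive")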
I hate to say it, but it's just not that attractive to be in the consumer SSD business. When these drives were selling for $600+ things were different, but it's not too surprising that we're still having issues today. What makes it even worse is that these issues are usually caught by end users. Intel's microprocessor division would never stand for the sort of track record its consumer SSD group has delivered in terms of show-stopping bugs in the field, and Intel has one of the best track records in the industry!
It's not all about money though. Experience plays a role here as well. If you look at the performance leaders in the SSD space, none of them had any prior experience in the HDD market. Three years ago I would've predicted that Intel, Seagate and Western Digital would be duking it out for control of the SSD market. That obviously didn't happen, and as a result you have a lot of players that are still fairly new to this game. It wasn't too long ago that we were hearing about premature HDD failures due to firmware problems; I suspect it'll be a few more years before the current players get to where they need to be. Samsung may be one to watch here going forward as it has done very well in the OEM space. Apple has had no issues adopting Samsung controllers, but it won't go anywhere near Marvell or SandForce at this point.
90 Comments
V3ctorPT - Thursday, August 11, 2011 - link
Exactly what I think. I have an X25-M 160GB and that thing is still working flawlessly at the advertised speeds; every week it gets the Intel Optimizer and it's good... Even my G.Skill Falcon 1 64GB is doing great, no BSODs, no unexpected problems. The only "bad" thing that I saw was in SSD Life Free, when it says my SSD is at 80% of NAND wear n' tear; my Intel is at 100%.
CrystalDiskInfo confirms those conditions (that SSD Life reports). Anand, do you think these "tools" are trustworthy? Or are they some sort of scam?
SjarbaDarba - Sunday, August 14, 2011 - link
Where I work - we have had 265 Vertex II drives come back since June 2010. That's one every day or two since then for our one store; hardly reliable tech.
Ikefu - Thursday, August 11, 2011 - link
"a 64Gb 25nm NAND die will set you back somewhere from $10 - $20. If we assume the best case scenario that's $160 for the NAND alone"I think you meant to say an 8Gb Nand die will set you back $10-$20. Not 64Gb
Yay math typos. Those are always hard to catch.
bobbozzo - Thursday, August 11, 2011 - link
No, 64Gb = 8GB. Note the capitalization/case.
Ryan Smith - Thursday, August 11, 2011 - link
We're using gigaBITs (little b), not gigaBYTEs (big B). 64Gb x 16 modules / 8 bits-to-bytes = 128GBytes.
Ikefu - Thursday, August 11, 2011 - link
Ah, capitalization for the loss. I see my error now. Thank you =) Later in the article they refer to 8GB, so the switch from gigabits to gigabytes threw me.
philosofool - Thursday, August 11, 2011 - link
I made the same mistake at first. Can I request that, in the future, we write either in terms of bytes or bits for the same type of part? There's no need to switch from bits to bytes when talking about storage capacity, and you just confuse a reader or two when you do.
nbrenner - Thursday, August 11, 2011 - link
I understand the GB vs Gb argument, but even if it takes 8 modules to make up 64Gb, it was stated that a 64Gb die would set you back $10-$20, so saying 128GB of NAND would cost $160 didn't make any sense until 3 paragraphs later, when it said the largest die you could get is 8GB. I think most of us read that if 64Gb is $10-$20, then why in the world would it cost $160 to get to 128GB?
Death666Angel - Friday, August 12, 2011 - link
Unless he edited it, it clearly states "128GB". I think the b=bit and B=byte distinction is quite clear, though I would not complain if they stuck with one unit and didn't change in between. :-)

Mathieu Bourgie - Thursday, August 11, 2011 - link
Once again, a fantastic article from you Anand on SSDs. I couldn't agree more on the state of consumer SSDs and their reliability (or lack thereof).
The problem, as you mentioned, is the small margins that manufacturers are getting (if they are actually manufacturing it...), which results in less QA than required and products that launch with too many bugs. The issue is, this won't go away, because many customers want the price per GB to come down before they'll buy. They're probably waiting for that psychological $1 per GB, the same $1 per GB that HDDs reached many years ago.
With prices per GiB (actual capacity in Windows) dropping below $1.50, reliability is one of the last barriers for SSDs to actually become mainstream. Most power users now have one or are considering one, but SSDs are still very rare in most desktops/laptops sold by HP, Dell and the like. Sometimes they're offered as an option (at additional cost), but rarely as a standard drive (only a handful of exceptions come to mind for laptops).
I can only hope that the reliability situation improves, because I do wish to see a major computing breakthrough: SSDs replacing HDDs entirely one day. As you said years ago in an early SSD article, once you've had an SSD, you can't go without one.
My desktop used to have two Samsung F3 1TB drives in RAID 0. Switching to it from my laptop (which had an Intel 120GB X25-M G2) was almost painful. Being accustomed to the speed of the SSD, the HDDs felt awfully slow. And I'm talking about two top-of-the-line HDDs (Raptors aside) in RAID 0 here, not a five-year-old IDE HDD.
It's always a pleasure to read your articles, Anand. Keep up the outstanding work!