A few weeks ago a very smart friend of mine sent me an email asking why we haven’t seen more PCIe SSDs by now. While you can make the argument for keeping SATA around as an interface for traditional hard drives, it ends up being a bottleneck when it comes to SSDs. The move to 6Gbps SATA should alleviate that bottleneck for a short period, but NAND is easy enough to put in parallel that you could quickly saturate that interface as well. So why not a higher-bandwidth interface like PCIe?

The primary reason appears to be cost. While PCIe can offer much more bandwidth than SATA, the amount of NAND you’d need to saturate it and the controllers necessary to manage it would be cost prohibitive. The unfortunate reality is that good SSDs launched at the worst possible time. The market would’ve been ripe in 2006 - 2007, but in the post-recession period getting companies to spend even more money on PCs wasn’t very easy. A slower-than-expected SSD ramp put the brakes on a lot of development of exotic PCIe SSDs.

We have seen a turnaround, however. At last year’s IDF Intel showed off a proof-of-concept PCIe SSD that could push 1 million IOPS. And with the consumer SSD market dominated by a few companies, the smaller players turned to building their own PCIe SSDs to go after the higher-margin enterprise market. Enterprise customers had the budget and the desire to push even more bandwidth. Throw a handful of Indilinx controllers on a PCB, give it a good warranty, and you had something you could sell to customers for over a thousand dollars.

OCZ was one of the most eager in this space. We first met their Z-Drive last year:

The PCIe x8 card was made up of four Indilinx Barefoot controllers configured in RAID-0, delivering up to four times the performance of a single Indilinx SSD on a single card. That card would set you back anywhere between $900 and $3500 depending on capacity and configuration.
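That scaling claim can be sketched numerically. RAID-0 striping adds sequential throughput roughly linearly per controller; the per-controller figure below is a hypothetical number for illustration, not a measured Barefoot result:

```python
# Rough model of RAID-0 sequential throughput scaling across SSD controllers.
# SINGLE_CONTROLLER_MBPS is a hypothetical figure for illustration only.
SINGLE_CONTROLLER_MBPS = 230.0

def raid0_throughput(n_controllers, per_drive=SINGLE_CONTROLLER_MBPS,
                     overhead=0.05):
    """RAID-0 scales roughly linearly with controller count, minus a
    small striping/controller overhead (assumed 5% here)."""
    return n_controllers * per_drive * (1 - overhead)

for n in (1, 2, 4):
    print(f"{n} controller(s): ~{raid0_throughput(n):.0f} MB/s")
```

With a flat overhead assumption the model is purely linear: four controllers deliver exactly four times one controller, which matches the "up to four times" framing rather than anything exponential.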

With the SSD controllers behind a LSI Logic RAID controller there was no way to pass TRIM commands through to the drives. OCZ instead relied on idle garbage collection to keep Z-Drive owners happy. Even today the company is still working on bringing a TRIM driver to Z-Drive owners.

The Z-Drive apparently sold reasonably well. Well enough, in fact, for OCZ to create a follow-on drive: the Z-Drive R2. This card uses custom NAND cards, SO-DIMMs populated with NAND and available only through OCZ, that allow users to upgrade their drive capacity down the line. The new Z-Drive still carries the hefty price tag of the original.

Ryan Petersen, OCZ’s CEO, hopes to change that with a new PCIe SSD: the OCZ RevoDrive. Announced at Computex 2010, the RevoDrive uses SandForce controllers instead of the Indilinx controllers of the Z-Drives. The first incarnation uses two SandForce controllers in RAID-0 on a PCIe x4 card. As far as attacking price: how does $369 for 120GB sound? And it is of course bootable.

OCZ sent us the more expensive $699.99 240GB version but the sort of performance scaling we'll show here today should apply to the smaller, more affordable card as well. Below is a shot of our RevoDrive sample:

The genius isn’t in the product, but in how OCZ made it affordable. Looking at the RevoDrive you’ll see the two SandForce SF-1200 controllers that drive the NAND, but you’ll also see a Silicon Image RAID controller and a Pericom PI7C9X130 bridge chip.

The Silicon Image chip is a SiI3124 PCI-X to 4-port 3Gbps SATA controller. Since it supports up to four SATA devices, OCZ could make an even faster version of the RevoDrive with four SF-1200 controllers in RAID.

Astute readers will note that I said the SiI3124 is a PCI-X to SATA controller. The Pericom bridge converts PCI-X to the PCIe x4 interface you see at the bottom of the card.

The Pericom PCI-X to PCIe Bridge
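A quick back-of-envelope check shows why this bridged topology holds up for two controllers. The figures below are theoretical bus maxima; real-world throughput is lower after protocol overhead:

```python
# Theoretical ceilings of each link in the RevoDrive's chain.
PCI_X_MBPS = 64 / 8 * 133      # 64-bit PCI-X at 133MHz: ~1064 MB/s
PCIE_X4_MBPS = 4 * 250         # PCIe 1.x x4: 250 MB/s per lane, per direction
SATA_3G_MBPS = 300             # 3Gbps SATA after 8b/10b encoding

def sata_aggregate(ports):
    """Combined ceiling of `ports` SATA 3Gbps links."""
    return ports * SATA_3G_MBPS

# Two SF-1200s (600 MB/s max) fit under both the PCI-X and PCIe x4 ceilings,
# but a four-controller version (1200 MB/s) would start to run into them.
print(sata_aggregate(2), sata_aggregate(4), PCI_X_MBPS, PCIE_X4_MBPS)
```

In other words, the cheap PCI-X part doesn't bottleneck the two-controller card, though a four-controller Revo would begin to test both the PCI-X and PCIe x4 links.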

Why go from SATA to PCI-X and then to PCIe? Cost. These Silicon Image PCI-X controllers are dirt cheap compared to native PCIe SATA controllers, and the Pericom bridge chip doesn’t add much either. Bottom line? OCZ is able to offer a single card at very little premium over a standalone drive. A standard OCZ Vertex 2 E 120GB (13% spare area instead of 22%) will set you back $349.99. A 120GB RevoDrive will sell for $369.99 ($389.99 MSRP), but deliver much higher performance thanks to its two SF-1200 controllers in RAID.

You’ll also notice that at $369.99 a 120GB RevoDrive is barely any more expensive than a single SF-1200 SSD, and it’s actually cheaper than two smaller-capacity drives in RAID. If OCZ can actually deliver the RevoDrive at these prices then the market is going to have a brand new force to reckon with. Do you get a standard SATA SSD or pay a little more for a much faster PCIe SSD? I suspect many will choose the latter, especially because, unlike the Z-Drive, the RevoDrive is stupidly fast in desktop workloads.
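The pricing argument reduces to simple arithmetic on the street prices quoted above:

```python
# Street prices as quoted in the article.
vertex2e_120gb = 349.99    # standalone OCZ Vertex 2 E 120GB
revodrive_120gb = 369.99   # 120GB RevoDrive (two SF-1200s in RAID-0)

premium = revodrive_120gb - vertex2e_120gb
print(f"PCIe premium: ${premium:.2f} "
      f"({premium / vertex2e_120gb * 100:.1f}% over the standalone drive)")
```

Roughly a six percent premium for what amounts to two striped controllers on one card.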

If you’re wondering how this is any different from a pair of SF-1200 based SSDs in RAID-0 on your motherboard’s RAID controller, for the most part it isn’t. The RevoDrive will offer lower CPU utilization than an on-board software RAID solution thanks to its Silicon Image RAID controller, but the advantage isn’t huge. The only reasons you’d opt for this over a standard RAID setup are cost and, to a lesser extent, simplicity.

What’s that Connector?

When I first published photos of the Revo a number of readers wondered what the little connector next to the Silicon Image RAID controller was. Those who guessed it was for expansion were right.

Unfortunately that connector won’t be present on the final RevoDrive shipped for mass production, though at some point we may see another version of the Revo with it. The idea is to add a daughterboard with another pair of SF-1200 controllers and NAND to increase the capacity and performance of the Revo down the line. Remember, the Silicon Image controller has four native SATA ports stemming off of it; only two are currently in use.

Installation and Early Issues


Comments

  • nurd - Saturday, June 26, 2010

    The SiI 3124 is just a standard SATA controller; the RAID is software.

    And not everybody uses drivers written by Silicon Image, or for Windows :)
  • Nomgle - Monday, July 5, 2010

    Erm, that's completely wrong - I suggest you read this review again, and pay careful attention to the RAID-setup screenshots...

    The Silicon Image 3124 used on this card IS a RAID controller, and does require drivers.
  • vol7ron - Friday, June 25, 2010

    "The PCIe x8 card was made up of four Indilinx barefoot controllers configured in RAID-0, delivering up to four times the performance of a single Indilinx SSD but on a single card."

    Is this something that you witnessed?

    When you have 4 channels of RAID-0, I thought the performance was more exponential. 2 drives/memory chips in parallel may be twice the performance, but 3 drives would be more like 4+ times the performance.

    I think having the daughter board would really change things.

    Also, doesn't Intel have a TRIM driver for their RAID controller?

  • Mr Perfect - Friday, June 25, 2010

    It should be linear growth, minus overhead.

    Performance would have to be additive. Three drives can't be four times the performance of one drive. If one drive achieves 55.7MB/s, then you could theoretically get 55.7x3=167.1MB/s from three or 55.7x4=222.8MB/s from four. Considering each drive will only ever be able to put out 55.7MB/s, then how could three achieve 222.8 total? Dividing the 222.8MB/s by 3 would give you 74.2 MB/s output from each drive, when they are physically only capable of 55.7MB/s each. The math would get even wonkier as you scaled higher up the exponential curve.
  • kmmatney - Friday, June 25, 2010

    You really need to include SSDs and hard drives in the Benchmarks feature of this website. It would really help people upgrading from older drives, such as first-gen drives, or other drives that you wouldn't be able to include in the benchmarks for every single review.
  • knowom - Friday, June 25, 2010

    I'm still waiting on a modern i-RAM, priced reasonably, with PCI-E bandwidth. It should have a flash card slot for data retention, preferably accessible from the PCI-E retention bracket for convenient access and hot swapping, and DDR3 DIMM slots angled diagonally so more slots could fit - the manufacturer could elongate the PCB like a video card to make room.

    How a modern i-RAM device would ideally be done,
    with the slots angled for capacity:

    |         |    DIMM slots     |
    |  Flash  |  / / / / / / / /  |
    |         |  / / / / / / / /  |
    |  -----  |  / / / / / / / /  |
    |         |  / / / / / / / /  |
    |  PCI-E  |  / / / / / / / /  |
  • iwodo - Saturday, June 26, 2010

    Once the SATA 3.0 version of the SandForce controller comes out, it will be faster than the Revo.

    The next milestone is 1GB/s, while keeping the price the same...
  • sunshine - Saturday, June 26, 2010

    Regarding the 64GB Crucial RealSSD C300:

    The 64GB version of this SSD has a much slower write speed than the 256GB version.

    Write speeds vary with capacity:

    70MB/s for the 64GB model, 140MB/s for the 128GB and 215MB/s for the 256GB.

    So apparently there is a trade off, lower price, but lower speed as well.
  • 529th - Saturday, June 26, 2010

    My Vertex LE died about 2 weeks ago.
