SPEC SFS is a more cache-friendly benchmark than the brutal SPC-1. The idea behind this industry-standard NAS benchmark is that thousands of CIFS and NFS servers have been profiled, so the workload reflects real-life NAS usage patterns.
In the same vein as the SPC-1 benchmarks, the configurations we submit to the standard benchmarking authorities are based on realistic systems customers could buy, not $7m lab queens. So, NetApp SPEC and SPC submissions:
- Are always tested with RAID-DP (protection equivalent to RAID-6). Other vendors test with RAID-10 most of the time, and never with RAID-6 (ask them why that is; BlueArc gets respect for being the only other vendor on the list running our level of protection)
- Have a target of using the most cost-effective configuration possible
- Provide not just high IOPS but also very low latency
- Are a realistic, deployable configuration, not just the fastest box we have (we still hold the 1 million SPEC ops record for a 24-node system; that one is kind of pricey, plus the result is old and can't be compared with the current benchmark code. Still, look at the rankings.)
So, with those lofty goals in mind, we have 3 new submissions:
- CIFS benchmark, 3210 w/ SATA drives – typical low/mid-range system
- NFS benchmark, 3270 w/ SAS drives – typical mid-range system, no Flash Cache used in this one.
- NFS benchmark, 6240 w/ SAS drives – typical high-end (but not highest) system.
All NetApp systems included some Flash Cache memory boards to provide further acceleration (EDIT: aside from the 3270). We have an even faster system (the 6280) that we will submit later on as a special treat (there's a certain degree of red tape and ceremony involved in doing even one submission…)
Here’s an abbreviated chart in easily digestible form – showing the most recent results from perennial rivals NetApp and EMC (BTW – of all the systems in the chart, only one of them is truly unified and can provide block and NAS on the same architecture without the need for contortions).
| System | Result (higher is better) | Overall Response Time (lower is better) | Disks | Exported Capacity (TB) | RAID | Protocol |
|---|---|---|---|---|---|---|
| NetApp 3210 | 64292 | 1.50 | 144x 1TB SATA | 87 | RAID-DP | CIFS |
| NetApp 3270 | 101183 | 1.66 | 360x 15K RPM 450GB SAS | 110 | RAID-DP | NFS |
| NetApp 6240 | 190675 | 1.17 | 288x 15K RPM 450GB SAS | 85 | RAID-DP | NFS |
| EMC NS-G8 on V-Max | 118463 | 1.92 | 96x fancy STEC 400GB ZeusIOPS SSD | 17 | RAID-10 | CIFS |
| EMC NS-G8 on V-Max | 110621 | 2.32 | 96x fancy STEC 400GB ZeusIOPS SSD | 17 | RAID-10 | NFS |
| EMC VG8 on V-Max | 135521 | 1.92 | 312x 15K RPM 450GB FC | 19 | RAID-10 | NFS |
Guide to reading the chart, and lessons learned:
- A “puny” NetApp 3210 with SATA gets better overall response time than an all-SSD V-Max costing well over 10x as much
- Notice the amount of usable space on the NetApp systems, with even better protection than RAID-10
- The 6240 scored far higher even though it had fewer disks
- The NetApp systems have “just” two controllers that do everything, vs. the EMC submissions with four V-Max engines, plus extra Celerra Data Movers and Control Stations on top. Which do you think is more efficient?
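If you want to see the efficiency gap in one number, a quick back-of-the-envelope using the result and spindle counts from the chart above works well. Ops-per-disk is my own rough metric here, not an official SPEC figure, and it ignores cache and controller differences:

```python
# Rough ops-per-disk calculation from the (result, disk count) pairs
# published in the SPEC SFS submissions shown in the chart above.
systems = {
    "NetApp 3210 (SATA)":    (64292, 144),
    "NetApp 3270 (SAS)":     (101183, 360),
    "NetApp 6240 (SAS)":     (190675, 288),
    "EMC VG8 on V-Max (FC)": (135521, 312),
}

for name, (ops, disks) in systems.items():
    # Integer-ish ops per spindle: a crude efficiency indicator
    print(f"{name}: {ops / disks:.0f} ops/disk")
```

By this crude measure the 6240 delivers roughly 660 SPEC ops per spindle versus about 430 for the EMC VG8, with fewer drives and half-parity-stripe-better protection.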
In slide format:
I do have some questions to ask certain other vendors as a parting shot:
- Sun/Oracle: you keep saying your new boxes are a cheaper way to get NetApp-type functionality, and you’ve had them for a while, so why not submit to SPEC or SPC? (There is not a single SPEC result from Sun.)
- EMC – maybe show the world how a system not based on V-Max runs? With RAID-6? (Even V-Max with RAID6, no problem…)
- EMC: What’s the deal with the exported capacity, even with 312x drives?
- All of you with large striped pools of RAID-5: have you bothered explaining to your customers what will happen to the pool if you have a dual-drive failure in any one RAID group? Unacceptable.