The SPC-1(E) benchmark is the standard high-intensity test for block storage, with very stringent rules and a standard test suite.
SPC-1 is one of the worst things you can do to a disk array: the benchmark does a lot of writes (about 60%), is highly random, and is hostile to most caching systems. Which neatly explains why IBM has all kinds of system submissions but doesn’t show XIV, and the complete absence of another prominent vendor (look at the submissions, you’ll figure it out; the big boys of storage are NetApp, IBM, HDS, HP and one more).
That same vendor might complain that SPC-1 is not always representative of real-life workloads but, short of putting all possible systems in your datacenter, nothing really will represent exactly how you massage your data. At least SPC-1 is a well-established standard and a great torture test for systems. All the other vendors are participating after all. And, interestingly, the SPEC SFS NAS benchmark doesn’t seem to bother said vendor’s anti-SPC crew none (spec.org). How come that one is more “real”? (NetApp participates in both block and NAS standard benchmarks BTW, since our systems all do both).
Some things to look for when trying to decipher SPC-1 results:
- Type of RAID used (RAID-DP, RAID10, RAID5, RAID6)
- How many drives were used to get the final result
- The cost for the configuration
- The price/performance
- How much of the storage was usable, how much was unused…
For instance, a system that can do 50,000 SPC-1 IOPS with 100 disks and RAID6 is far more efficient than one that needs 200 disks and RAID10 to achieve the same result.
My favorite way of reading the results is to figure out the effective IOPS per drive and see how close (or far) it is from the roughly 220 IOPS a normal modern 15K drive can sustain without RAID, with good response times.
So, without further ado, looky here… it’s the link to the results page showing all the vendors. Or here for the full details. The result: 68,000 sustained IOPS with 120 ordinary 300GB drives and just 2 Flash Cache modules, with 84% of the usable space occupied.
Since some people don’t get the 84% figure:
Out of all the actual writeable space in the system, 84% was used up by the SPC-1 data.
The way to calculate it for any system:
Go to the SPC-1 results here.
Find the “Storage Capacities and Relationships” section (close to the beginning, fourth page or so):
The writeable space is 21,659.386GB + 4,248.373GB = 25,907.759 GB. The benchmark occupied 21,659.386GB, or about 84% of the writeable space.
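That arithmetic, as a quick sketch (figures taken from this submission’s capacity section):

```python
# Capacity figures from the full-disclosure report, in GB.
asu_capacity = 21659.386      # space the SPC-1 benchmark data (ASU) occupied
unused_writeable = 4248.373   # writeable space left unused

writeable = asu_capacity + unused_writeable
utilization = asu_capacity / writeable
print(f"Writeable space: {writeable:.3f} GB")    # 25907.759 GB
print(f"Benchmark occupied: {utilization:.0%}")  # 84%
```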
If that’s too much work, look at the “SPC-1 Storage Capacity Utilization” – and focus on the “Application Utilization” number.
The Application Utilization percentage is, quite simply, the benchmark data (ASU) capacity as a percentage of total Physical Storage Capacity.
With round numbers: if someone bought 100TB of raw storage (“raw” meaning before any overheads, any spares or any RAID) and the benchmark used 60% of the raw capacity, the benchmark data occupied 60TB of it.
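The same idea as code; the 100TB figure below is the round-number illustration from the text, not an actual submission:

```python
# Application Utilization = benchmark (ASU) capacity / raw physical capacity.
raw_capacity_tb = 100.0   # hypothetical raw capacity, before RAID, spares or overheads
asu_capacity_tb = 60.0    # benchmark data, per the 60% round-number example

application_utilization = asu_capacity_tb / raw_capacity_tb
print(f"Application Utilization: {application_utilization:.0%}")  # 60%
```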
Compare that to any other result. Most other systems use RAID10, so the “Application Utilization” number is usually well below 50%.
What all this means to you:
The effective IOPS per drive for the NetApp 3270 submission is 567. The next best is around 400; most vendors can’t break 300, and the highest-scoring systems (relying on thousands of drives and many controllers) don’t break 200.
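The per-drive math, using the numbers quoted above:

```python
def iops_per_drive(spc1_iops, drives):
    """Effective SPC-1 IOPS each drive contributes."""
    return spc1_iops / drives

# NetApp FAS3270 submission: 68,000 sustained IOPS on 120 drives.
print(round(iops_per_drive(68000, 120)))  # 567
```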
It is important to note that NetApp is the only vendor in the list showing results with dual-parity RAID-DP (RAID6 equivalent protection). All other vendors are doing RAID10! If your vendor is selling you RAID5, that’s not representative of their systems in the chart!
Fortunately, there’s a way to calculate RAID5 or RAID6 IOPS for other manufacturers…
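One common approximation (my sketch, not part of the SPC rules) is the classic back-end write-penalty model: each front-end write costs 2 back-end I/Os on RAID10, 4 on RAID5 and 6 on RAID6, while reads cost 1. With SPC-1’s roughly 60% write mix, you can estimate what a given drive count would sustain under a different RAID scheme:

```python
# Rough write-penalty model (an approximation; real arrays cache and coalesce writes).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def frontend_iops(drives, raw_iops_per_drive, write_fraction, raid):
    """Front-end IOPS a drive pool can sustain under the classic penalty model."""
    backend = drives * raw_iops_per_drive
    cost_per_frontend_io = (1 - write_fraction) + write_fraction * WRITE_PENALTY[raid]
    return backend / cost_per_frontend_io

# Example: 120 drives at ~220 raw IOPS each, 60% writes, across RAID levels.
for raid in ("raid10", "raid5", "raid6"):
    print(raid, round(frontend_iops(120, 220, 0.6, raid)))
# raid10 16500, raid5 9429, raid6 6600
```

This is only a back-of-the-envelope comparison, but it shows why a RAID6 result on the same hardware is so much harder to post than a RAID10 one.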
The NetApp result boils down to 13,600 sustained IOPS per shelf of 15K drives, and a system cost that’s very reasonable for the reliability, performance and features provided.
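A sanity check on the per-shelf figure; the arithmetic implies standard 24-drive shelves (an assumption on my part, not stated in the disclosure):

```python
drives, total_iops = 120, 68000
drives_per_shelf = 24            # assumption: standard 24-drive shelves
shelves = drives // drives_per_shelf
print(total_iops / shelves)      # 13600.0
```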
What this means to the anti-NetApp FUD club with their complex auto-tiering schemes that need 15 types of drives…
You really need to figure out how to present a decent result with:
- RAID6 (otherwise your RAID1 or RAID5 protection is inferior to NetApp RAID-DP, especially when talking about large pools)
- Your fancy auto-tiering algorithm showing no performance degradation on the unpredictable SPC-1 workload while still storing data on all drive tiers (otherwise, it’s single-tiering, and not auto-tiering)
- Large caches. If your competitive product can use Megacaches, and you claim you can do efficient write caching with them, how about we all see how effective that is? After all, you claim that’s a huge benefit. We show the world ours; show us yours. Otherwise, your product is only fast on PowerPoint slides, and I’ve yet to see a product fail on PowerPoint.
Stand by for more results from the bigger boxes; this wasn’t one of them, but it is a realistic system companies could actually afford, not a $7m all-SSD config like some others have…