Proper Testing vs Real World Testing

The idea for this article came from watching various people attempt product testing. Although I had storage in mind while writing it, the ideas apply to most industries.

Three different kinds of testing

There are really three different kinds of testing. 

The first is incomplete, improper testing that is useless in almost any situation. It’s typically done by people who have little training on the subject. It almost always ends in misleading results and is arguably dangerous, especially if used to make purchasing decisions.

The second is what’s affectionately and romantically called “Real World Testing”. It’s typically done by people who try to simulate some kind of workload they believe they encounter in their environment, or who use part of their environment to do the testing. It’s much more accurate than the first kind, if done right. Usually the workload is chosen arbitrarily :)

The third and last kind is what I term “Proper Testing”. This is done by professionals (who usually do this type of testing for a living) who understand how to test comprehensively across a broad range of conditions. It’s really hard to do, but it pays amazing dividends if done thoroughly.

Let’s go over the three kinds in more detail, with some examples.

Useless Testing

Hopefully after reading this you will know if you’re a perpetrator of Useless Testing and never do it again. 

A lot of what I deal with is performance, so I will draw examples from there.

Have you or someone you know done one or more of the following after asking to evaluate a flashy, super high performance enterprise storage device?

  • Try to test said device with a single server
  • With only a couple of I/O ports
  • With a single LUN
  • With a single I/O thread
  • Doing only a certain kind of operation (say, all random reads)
  • Using only a single I/O size (say, 4K or 32K) for all operations
  • Not looking at latency
  • Using extremely compressible data on an array that does compression

I could go on but you get the idea. A good move would be to look at the performance primer here before doing anything else…
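To contrast with the list above, here is a minimal sketch of a broader sweep, assuming fio is installed and that /dev/sdX is a hypothetical scratch device you can safely overwrite. The point is simply to cover multiple block sizes, read/write mixes and concurrency levels while always capturing latency, instead of running a single 4K random-read job and calling it a day.

```python
import itertools
import subprocess

DEVICE = "/dev/sdX"  # hypothetical device under test; use a safe scratch target

BLOCK_SIZES = ["4k", "8k", "32k", "64k"]
READ_PERCENTAGES = [100, 70, 50, 30]   # include write-heavy mixes, not just reads
QUEUE_DEPTHS = [1, 8, 32]

for bs, rmix, qd in itertools.product(BLOCK_SIZES, READ_PERCENTAGES, QUEUE_DEPTHS):
    subprocess.run([
        "fio",
        f"--name=mix{rmix}r_{bs}_qd{qd}",
        f"--filename={DEVICE}",
        "--rw=randrw",
        f"--rwmixread={rmix}",
        f"--bs={bs}",
        f"--iodepth={qd}",
        "--numjobs=4",               # more than a single I/O thread
        "--ioengine=libaio",
        "--direct=1",
        "--time_based", "--runtime=300",
        "--group_reporting",
        "--output-format=json",      # JSON output includes latency percentiles
        f"--output=mix{rmix}r_{bs}_qd{qd}.json",
    ], check=True)
```

Even this is nowhere near comprehensive (no mixed workloads, no degraded-mode testing, no variation in data compressibility), but it already tells you far more than a single data point.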

Pitfalls of Useless Testing

The main reason people do such poor testing is that it’s easy to do. Another reason is that it’s easy to use it to satisfy a Confirmation Bias.

The biggest problem with such testing is that it doesn’t tell you how the system behaves if exposed to different conditions. 

Yes, you see how it behaves in a very specific situation, and that situation might even be close to a very limited subset of what you need to do in “Real Life”, but you learn almost nothing about how the system behaves in other kinds of scenarios.

In addition, you might eliminate devices that would normally behave far better for your applications, especially under duress, since they might fail Useless Testing scenarios (since they weren’t designed for such unrealistic situations).

Making purchasing decisions based on Useless Testing will invariably lead to bad purchases unless you’re very lucky (which usually means that your true requirements weren’t really that demanding to begin with).

Part of the problem of course is not even knowing what to test for. 

“Real World” Testing

This is what most sane people strive for.

The goal is to test something in conditions that approximate, as closely as possible, how it will be used in real life.

Some examples of how people might try to perform “Real World” testing:

  • One could use real application I/O traces and advanced benchmarking software that can replay them, or…
  • Spin up synthetic benchmarks designed to simulate real applications, or…
  • Put one of their production applications on the system and use it like they normally would.

Clearly, any of this would result in more accurate results than Useless Testing. However…

Pitfalls of “Real World” Testing

The problem with “Real World” testing is that it invariably does not reproduce the “Real World” closely enough to be comprehensive and truly useful. 

Such testing addresses only a small subset of the “Real World”. The omissions dictate how dangerous the results are.

In addition, extrapolating larger-scale performance isn’t really possible. Knowing how the system ran one application doesn’t tell you how it will run ten applications in parallel.

Some examples:

  • Are you testing one workload at a time, even if that workload consists of multiple LUNs? Shared systems will usually have multiple totally different, often conflicting workloads hitting them in parallel, coming and going at different times, each workload being a set of related LUNs. For instance, an email workload plus a DSS plus an OLTP plus a file serving plus a backup workload in parallel… :)
  • Are you testing enough workloads in parallel? True Enterprise systems thrive on crunching many concurrent workloads. 
  • Are you injecting other operations that you might be doing in the “Real World”? For instance, replicate and delete large amounts of data while doing I/O on various applications? Replications and deletions have been known to happen… 😉
  • Are you testing how the system behaves in degraded mode while doing all the above? Say, if it’s missing a controller or two, or a drive or two… also known to happen.

That’s right, this stuff isn’t easy to do properly. It also takes a very long time. Which is why serious Enterprise vendors have huge QA organizations going through these kinds of scenarios. And why we get irritated when we see people drawing conclusions based on incomplete testing 😉
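As a rough sketch of the “multiple workloads in parallel” idea, something like the following could launch dissimilar synthetic streams against the same array at once – an OLTP-like, a DSS-like and a backup-like profile. This assumes fio is installed; the target paths and profile parameters are hypothetical placeholders, and it still doesn’t cover replication, deletions or degraded-mode testing.

```python
import subprocess

# name: (target file, fio rw pattern, block size, read %)
PROFILES = {
    "oltp":   ("/mnt/array/oltp.dat",   "randrw", "8k",   70),
    "dss":    ("/mnt/array/dss.dat",    "read",   "256k", 100),
    "backup": ("/mnt/array/backup.dat", "write",  "1m",   0),
}

procs = []
for name, (target, rw, bs, rmix) in PROFILES.items():
    cmd = [
        "fio", f"--name={name}", f"--filename={target}", f"--rw={rw}",
        f"--bs={bs}", "--iodepth=16", "--direct=1", "--size=100g",
        "--time_based", "--runtime=3600",
        "--output-format=json", f"--output={name}.json",
    ]
    if rw == "randrw":
        cmd.append(f"--rwmixread={rmix}")
    procs.append(subprocess.Popen(cmd))  # start all profiles concurrently

for p in procs:
    p.wait()  # then compare per-profile latency in the JSON outputs
```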

Proper Testing

See, the problem with the “Real World” concept is that there are multiple “Real Worlds”.

Take car manufacturers for instance.

Any car manufacturer worth their salt will torture-test their cars in various wildly different conditions, often rapidly switching from one to the other if possible to make things even worse:

  • Arctic conditions
  • Desert, dusty conditions plus harsh UV radiation
  • Extremely humid conditions
  • All the above in different altitudes

You see, maybe your “Real World” is Alaska, but another customer’s “Real World” will be the very wet Cherrapunji, and a third customer’s “Real World” might be the Sahara desert. A fourth might be a combination (cold, very high altitude, harsh UV radiation, or hot and extremely humid). 

These are all valid “Real World” conditions, and the car needs to be able to deal with all of them while remaining fully functional well beyond the warranty period.

Imagine if moving from Alaska to the Caribbean meant your car would cease functioning. People have been known to move, too… :) 

Storage manufacturers that actually know what they’re doing have an even harder job:

We need to test multiple “Real Worlds” in parallel. Oh, and we do test the different climate conditions as well, don’t think we assume everyone operates their hardware in ideal conditions… especially folks in the military or any other life-or-death situation. 

Closing thoughts

If I’ve succeeded in stopping even one person from doing Useless Testing, this article was a success :)

It’s important to understand that Proper Testing is extremely hard to do. Even gigantic QA organizations miss bugs; imagine what someone doing incomplete testing will miss. There are just too many permutations.

Another sobering thought is that very few vendors are actually able to do Proper Testing of systems. The know-how, sheer time and number of personnel needed are enough to make most smaller vendors skimp on testing out of pure necessity. They test what they can, otherwise they’d never ship anything.

A large number of data points helps. Selling millions of units of something provides a vendor with way more failure data than selling a thousand units of something possibly can. Action can be taken to characterize the failure data and see what approach is best to avoid the problem, and how common it really is.

One minor failure in a million may not be even worth fixing, whereas if your sample size is ten, one failure means 10%. It might be the exact same failure, but you don’t know that. Or you might have zero failures in the sample size of ten, which might make you think your solution is better than it really is…

But I digress into other, admittedly interesting areas.

All I ask is… think really hard before testing anything in the future. Think what you really are trying to prove. And don’t be afraid to ask for help.




NetApp Enterprise Grade Flash

June 23rd marked the release of new NetApp Enterprise Flash technology. The release consists of:

  • New ONTAP 8.3.1 with significant performance, feature and usability enhancements
  • New Inline Storage Efficiencies
  • New All-Flash FAS systems (AFF works only with SSDs)
  • New, aggressive pricing model
  • New maintenance options (price protection on warranty for up to 7 years, even if you buy less warranty up front)

Enterprise Grade Flash

So what does “Enterprise Grade Flash” mean?


The combination of new technologies that Flash made possible, like high performance and inline storage efficiencies, plus ease of use, plus Enterprise features such as:

  • Deep application integration for clones, backups, restores, migrations
  • Comprehensive automation
  • Scale up
  • Scale out
  • Non-disruptive everything
  • Multiprotocol (SAN and NAS in the same system)
  • Secure multi-tenancy
  • Encryption (FIPS 140-2 validated hardware)
  • Hardened storage OS
  • Enterprise hardware
  • Comprehensive backup, cloning, archive
  • Comprehensive integration with leading backup software
  • Synchronous replication
  • Active-Active Datacenters
  • The ability to have AFF systems in the same cluster as hybrid
  • Seamless data movement between All-Flash and Hybrid
  • Easy replication between All-Flash and Hybrid
  • Cloud capable (storage OS ability to run in the cloud) – with replication to and from the cloud instances of ONTAP
  • QoS and SLO provisioning
  • Inline Foreign LUN Import (to make migrations from third-party arrays less impactful to the users)

The point is that competing All-Flash offerings typically focus on just a few of these areas (for example, performance, ease of use and maybe inline storage efficiencies) but are severely lacking in the rest. This limited-vision approach creates inflexibility and silos, and neither inflexibility nor silos are particularly desirable elements in Enterprise IT.

But what if you could have Flash without compromises? That’s what we are offering with the new AFF systems. A high performance offering with all the “coolness” but also all the “seriousness” and features Enterprise Grade storage demands.

It’s a bit like this Venn diagram:


AFF simply offers far more flexibility than All-Flash competitors. There may be certain things AFF doesn’t do compared to some of the competitors, but they pale next to what AFF does that the competitors cannot.

And even if you don’t need all the features – at least they’re there waiting for you in case you do need them in the future (for instance, you may not need to replicate to the cloud and back today, but knowing that you have the option is reassuring in case your IT strategy changes).


It’s far harder to add serious enterprise data management features to newly built architectures than to add new architecture benefits to a platform that already has the Enterprise Grade stuff down pat.

WAFL (the block layout engine of ONTAP) is already naturally well suited to working with SSDs:

  • Avoids modifying data in place
  • Writes to free space
  • Performs I/O coalescing in order to lump many operations in a single large I/O
  • Preserves the temporal locality of user data with metadata to further reduce I/O
  • Achieves a naturally low Write Amplification Factor (it’s worth noting that in all the years we’ve been selling Flash and after hundreds of PB, we have had exactly zero worn out SSDs – they’re not even close to wearing out).
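To make the first three bullets concrete, here is a toy illustration (emphatically not WAFL itself, just the general write-anywhere idea): logical blocks map to physical locations, every batch of updates is coalesced into one large write to free space, and nothing is ever modified in place.

```python
class WriteAnywhereToy:
    """Toy model: never overwrite in place, coalesce writes, remap afterwards."""

    def __init__(self):
        self.physical = []    # append-only physical space (free space at the tail)
        self.block_map = {}   # logical block number -> physical index

    def write_batch(self, updates):
        """Coalesce many logical updates into one contiguous physical write."""
        start = len(self.physical)
        self.physical.extend(data for _, data in updates)   # one large sequential I/O
        for offset, (lbn, _) in enumerate(updates):
            self.block_map[lbn] = start + offset             # remap; old copy untouched

    def read(self, lbn):
        return self.physical[self.block_map[lbn]]

dev = WriteAnywhereToy()
dev.write_batch([(0, b"A0"), (1, b"B0")])
dev.write_batch([(0, b"A1")])   # an "overwrite" of block 0 lands in new space instead
assert dev.read(0) == b"A1" and dev.read(1) == b"B0"
```

The relevance to SSDs is that the media mostly sees large coalesced writes rather than small in-place updates, which is what keeps the Write Amplification Factor low.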

So we optimized ONTAP where it mattered, while keeping the existing codebase where needed. It helps that the ONTAP architecture is already modular – it was fairly straightforward to enhance the parts of the code that dealt with storage media, for instance, or the parts that dealt with the I/O path. This significant optimization started with 8.3.0 and continued with 8.3.1.

The overall effect has been dramatically reduced latencies, enhanced code parallelism and increased resiliency, while at the same time enabling inline storage efficiencies. For customers still on 8.2.x and prior, or on 7-mode ONTAP, the differences with 8.3.1 will be pretty extreme… :)

Ease of Use

New with ONTAP 8.3.1 is a completely redesigned administrative GUI, plus a SAN-optimized configuration that lets you go from unpacking the system to serving I/O in 15 minutes.

In addition, wizards allow the easy creation of LUNs for Databases with just 3 questions, and ONTAP can now be upgraded from the GUI.

Storage Efficiencies

ONTAP has had various flavors of storage efficiencies for a while. Those efficiencies typically had to be turned on manually, and often affected performance.

With ONTAP 8.3.1, Inline Compression is on by default, as is Inline Zero Deduplication (very helpful in VM deployments that use EagerZeroThick). In addition, Always On Deduplication is also available – a deduplication that runs very frequently (every 5 minutes).

In conjunction with already excellent thin provisioning plus state-of-the-art cloning and snapshot capabilities, some excellent efficiency ratios are possible. We can show up to 30:1 for certain kinds of VDI deployments, while things like databases can’t be squeezed quite that much (especially if DB-side compression is already active). The overall efficiency ratio will vary depending on how the system is used.

Ultimately, Storage Efficiencies aim to reduce overall cost. Don’t focus so much on the actual efficiency ratio; instead, look at the effective price/TB. The efficiency differences between most vendors are probably smaller than most vendors want you to think they are – in real terms you will not save more than a few SSDs’ worth of capacity. The real value lies elsewhere.


Performance

The AFF systems are fast. A maximum-size AFF cluster will do about 4 million IOPS at 1ms latency (8K random, with inline efficiencies). Throughput-wise, 100GB/s is possible.

We wanted to have performance stop being a discussion point except in the most extreme of situations. The reality is that any top-tier Flash solution will be fast. Once again – the real value lies elsewhere.

For the curious, here’s an IOPS vs Latency chart for a 2-node AFF8080 system, given that this level of performance is more than enough for the vast majority of customers out there (the max speed of a cluster is 12x what’s shown in this graph):


Many optimizations were implemented in ONTAP 8.3.1 – certain operations are over 4x faster than with ONTAP 8.2.x, and almost 50% faster than with 8.3.0. SSDs, being as fast as they are, benefit from those optimizations the most.

If you want more performance proof points, we have previously shown SSD performance for a very difficult workload (over 60% writes, combination of block sizes, random and sequential I/O) with 8.3.0 in our SPC-1 benchmarks (needs to be updated for 8.3.1 but even the 8.3.0 result is solid). We also have SQL, Oracle and VDI Technical Reports.

We are also always happy to demonstrate these systems for you.


Pricing

There are some significant pricing changes, leading to an overall far more cost-effective solution. For instance, the cost delta between different controller models with the same amount of storage is far smaller than in the past. The scalability of all the systems is the same (240 SSDs * 1.6TB max currently per 2 nodes). The only differences are performance and the amount of connectivity possible.

Warranty pricing is now stable even if you buy 3 years up front and extend later on.

Oh – and all AFF models now include all NetApp FAS software: All the protocols, all the SnapManager application integration modules, replication, the works.

Final Words

The pace of innovation at NetApp is accelerating dramatically. We had to work hard to bring Clustered ONTAP to feature parity with the older 7-mode, which delayed things. Now, with 8.3.x dropping 7-mode altogether, we have many more developers free to focus on improving Clustered ONTAP. The big enhancements in 8.3.1 came very rapidly after 8.3.0 became GA… and there’s a lot more to come soon.

Make no mistake: NetApp is a storage giant and has an Engineering organization not to be trifled with.

Now, how do you like your Flash? With or without compromises? :)



Calculating the true cost of space efficient Flash solutions

In this post I will try to help you understand how to objectively calculate the cost of space-efficient storage solutions – there’s just too much misinformation out there and it’s getting irritating since certain vendors aren’t exactly honest with how they do certain calculations…

A brief history lesson: 

The faster a storage device, the smaller and more expensive it usually is. Flash was initially insanely expensive relative to spinning disk, so it was used in small amounts, typically as a tier and/or cache augmentation.

And so it came to be that flash-based storage systems started implementing some of the more interesting space efficiency techniques around. Interesting because it’s algorithmically easy to reduce data dramatically, but hard to do under high load while maintaining impressive IOPS and low latency.

Space efficiencies plus lower flash media costs bring us to today’s ability to deploy all-flash storage in increasingly cost-effective amounts.

But how does one figure out the best deal?

There are some factors I won’t get into in this article. Company size and viability, support staff strength, maturity of the code, automation, overall features etc. all may play a huge role depending on the environment and requirements (and, indeed, will often eliminate several of the players from further consideration). However, I want to focus on the basics.

Recommended metric: Cost per effective TB

It’s easy to get lost in the hype. One company says they reduce by 3:1, another might say 5:1, yet another 10:1, etc. The high efficiency ratios seem to be attractive, right?

Well – you’re not paying for a high efficiency ratio. What you are paying for is usable capacity.

If all solutions cost the same, the systems with high efficiency ratios would win this battle every day of the week and twice on Sundays.

However, solutions don’t all cost the same. Ask your vendor what the projected effective capacity will be for each specific configuration, and the Cost/Effective TB is a trivial calculation.

But there’s one more thing to do in order for the calculation to be correct:

Insist on calculating the efficiency ratio yourself.

Most storage systems will show a nice picture in the GUI with an overall efficiency ratio. Looks nice and easy. Well – the devil is in the details.

If a vendor is upfront about how they measure efficiency, your numbers might make sense.

This is where you trust but verify. Some pointers:

  • Take a note of the initial usable space before putting anything on the system.
  • If you store a 1TB DB and do nothing else to the data, what’s the efficiency? 
  • Does the number make sense given the size of the data you just put on the system and how much usable space is left now?
  • If you take 10 snapshots of the data, what’s the efficiency? How about if you delete the snaps, does the efficiency change?
  • If you take a clone of the DB, what’s the efficiency?
  • If you delete the clone you just took, what’s the efficiency?
  • Create a large LUN (10TB for example) and only store 1TB of data in it. What’s the efficiency? Do you count thin provisioning as data reduction?
  • Does this all add up if you do the math manually instead of the GUI doing it for you?
  • Does it all meet your expectations? For example, if a vendor is claiming 5:1 reduction, can you actually store 5 different DBs in the space of one? Or do they really mean something else? That’s a pretty easy test…

You see, most vendors count savings a bit differently. In the examples above, that 1TB DB, if stored in a 10TB LUN, and cloned 10 times, will probably result in a very high efficiency number. It doesn’t mean however that 10 different DBs of the same size would have nearly the same efficiency ratio.

If you don’t have time to do a test in-house, have the vendor prove their claims and show how they do their math in their labs while you watch. You will typically find that each data type has a wildly different space efficiency ratio. 
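Here is a minimal sketch of “doing the math manually”: record the array’s usable free space before and after loading a known amount of data, and compute the ratio yourself (the numbers below are made up for illustration).

```python
def measured_efficiency(logical_tb_written, usable_free_before_tb, usable_free_after_tb):
    """Efficiency ratio you calculated yourself, not the one the GUI shows."""
    physical_consumed_tb = usable_free_before_tb - usable_free_after_tb
    return logical_tb_written / physical_consumed_tb

# Example: you stored 1TB of database data and free space dropped by 0.4TB.
ratio = measured_efficiency(1.0, 100.0, 99.6)
print(f"Measured efficiency: {ratio:.1f}:1")   # 2.5:1, whatever the dashboard claims
```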

The bottom line

It’s pretty easy. Figure out the efficiency ratio on your own based on how you expect to use the system, then plug that ratio into the Price/Effective TB formula like so:

Real Cost per TB = Price/(Usable TB * Real Efficiency Ratio as a multiplier)
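As a quick illustration of the formula, with purely hypothetical quotes – the efficiency ratio alone means nothing until the price is attached to it:

```python
def cost_per_effective_tb(price_usd, usable_tb, efficiency_ratio):
    return price_usd / (usable_tb * efficiency_ratio)

# Vendor A: cheaper system, a modest 3:1 ratio you measured yourself.
# Vendor B: pricier system, a claimed 5:1 ratio (again, use the ratio YOU measured).
print(cost_per_effective_tb(300_000, 50, 3.0))   # 2000.0 -> $2,000 per effective TB
print(cost_per_effective_tb(600_000, 50, 5.0))   # 2400.0 -> $2,400 per effective TB
```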

And, finally, a word on capacity guarantees:

Some vendors will guarantee capacity efficiencies. Always, always demand to see the fine print. If a vendor insists they will guarantee x:1 efficiency, have them sign an official legally binding agreement that has the backing of the vendor’s HQ (and isn’t some desperate local sales office ploy that might not be worth the paper it’s printed on).

Insist the guarantee states you will get that claimed efficiency no matter what you’re storing on the box.

Notice how quickly the small print will come :)




NetApp posts SPC-1 Top Ten Performance results for its high end systems – Tier 1 meets high functionality and high performance

It’s been a while since our last SPC-1 benchmark submission with high-end systems in 2012. Since then we launched all new systems, and went from ONTAP 8.1 to ONTAP 8.3, big jumps in both hardware and software.

In 2012 we posted an SPC-1 result with a 6-node FAS6240 cluster  – not our biggest system at the time but we felt it was more representative of a realistic solution and used a hybrid configuration (spinning disks boosted by flash caching technology). It still got the best overall balance of low latency (Average Response Time or ART in SPC-1 parlance, to be used from now on), high SPC-1 IOPS, price, scalability, data resiliency and functionality compared to all other spinning disk systems at the time.

Today (April 22, 2015) we published SPC-1 results with an 8-node all-flash high-end FAS8080 cluster to illustrate the performance of the largest current NetApp FAS systems in this industry-standard benchmark.

For the impatient…

  • The NetApp All-Flash FAS8080 SPC-1 submission places the system in the #5 performance spot in the SPC-1 Top Ten by performance list.
  • And #3 if you look at performance at load points around 1ms Average Response Time (ART).
  • The NetApp system uses RAID-DP, similar to RAID-6, whereas the other entries use RAID-10 (typically, RAID-6 is considered slower than RAID-10).
  • In addition, the FAS8080 shows the best storage efficiency, by far, of any Top Ten SPC-1 submission (and without using compression or deduplication).
  • The FAS8080 offers far more functionality than any other system in the list.

We also recently posted results with the NetApp EF560 – the other major hardware platform NetApp offers. See my post here and the official results here. It’s a different value proposition for that platform – fewer features, but very low ART and great cost effectiveness are the key themes for the EF560.

In this post I want to explain the current Clustered Data ONTAP results and why they are important.

Flash performance without compromise

Solid state storage technologies are becoming increasingly popular.

The challenge with flash offerings from most vendors is that customers typically either have to give up a lot in order to get the high performance of flash, or have to combine 4-5 different products into a complex “solution” in order to satisfy different requirements.

For instance, dedicated all-flash offerings may not be able to natively replicate to less expensive, spinning-drive solutions.

Or, a flash system may offer high performance but not the functionality, scalability, reliability and data integrity of more mature solutions.

But what if you could have it all? Performance and reliability and functionality and scalability and maturity? That’s exactly what Clustered Data ONTAP 8.3 provides.

Here are some Clustered Data ONTAP 8.3 running on FAS8080 highlights:

  • All the NetApp signature ultra-tight application integration and automation for replication, SnapShots, Clones
  • Fancy write-accelerated RAID6-equivalent protection by default
  • Comprehensive data integrity and protection against insidious lost write/torn page/misplaced write errors that RAID and normal checksums don’t always catch
  • Non-disruptive data mobility for all protocols
  • Non-disruptive operations – no downtime even when doing things that would require downtime and extensive PS with other vendors
  • Granular QoS
  • Deduplication and compression
  • Highly scalable – 5,760 drives possible in an 8-node cluster, 17,280 drives possible in the max 24 nodes. Various drive types in the cluster, from SSD to SATA and everything else in between.
  • Multiprotocol (FC, iSCSI, NFS, SMB1,2,3) on the same hardware (no “helper” boxes needed, no dedicated SAN vs NAS pools needed)
  • 96,000 LUNs per 8-node cluster (that’s right, ninety-six thousand LUNs – about 50% more than the maximum possible with the other high-end systems)
  • ONTAP is VMware vVol ready
  • The only array that has been validated by VMware for VMware Horizon 6 with vVols – hopefully the competitors will follow our lead
  • Over 460TB (yes, TeraBytes) of usable cache after all overheads are accounted for (and without accounting for cache amplification through deduplication and clones) in an 8-node cluster. Makes competitor maximum cache amounts seem like rounding errors – indeed, the actual figure might be 465TB or more, but it’s OK… :) (and 3x that number in a 24-node cluster, over 1.3PB cache!)
  • The ability to virtualize other storage arrays behind it
  • The ability to have a cluster with dissimilar size and type nodes – no need to keep all engines the same (unlike monolithic offerings). Why pay the same for all nodes when some nodes may not need all the performance? Why be forced to keep all nodes in the same hardware family? What if you don’t want to buy all at once? Maybe you want to upgrade part of the cluster with a newer-gen system? :)
  • The ability to evacuate part of a cluster and build that part as a different cluster elsewhere
  • The ability to have multiple disk types in a cluster and, indeed, dedicate nodes to functions (for instance, have a few nodes all-flash, some nodes with flash-accelerated SAS and a couple with very dense yet flash-accelerated NL-SAS, with full online data mobility between nodes)
That last bullet deserves a picture:


“SVM” stands for Storage Virtual Machine –  it means a logical storage partition that can span one or more cluster nodes and have parts of the underlying capacity (performance and space) available to it, with its own users, capacity and performance limits etc.

In essence, Clustered Data ONTAP offers the best combination of performance, scalability, reliability, maturity and features of any storage system extant as of this writing. Indeed – look at some of the capabilities like maximum cache and number of LUNs. This is designed to be the cornerstone of a datacenter.

It makes most other systems seem like toys in comparison…


FUD buster

Another reason we wanted to show this result was FUD from competitors struggling to find an angle to fight NetApp. It goes a bit like this: “NetApp FAS systems aren’t real SAN, it’s all simulated and performance will be slow!”



Well – for a “simulated” SAN (whatever that means), the performance is pretty amazing given the level of protection used (RAID6-equivalent – far more resilient and capacity-efficient for large pooled deployments than the RAID10 the other submissions use) and all the insane scalability, reliability and functionality on tap :)

Another piece of FUD has been that ONTAP isn’t “flash-optimized” since it’s a very mature storage OS and wasn’t written “from the ground up for flash”. We’ll let the numbers speak for themselves. It’s worth noting that we have been incorporating a lot of flash-related innovations into FAS systems well before any other competitor did so, something conveniently ignored by the FUD-mongers. In addition, ONTAP 8.3 has a plethora of flash optimizations and path length improvements that helped with the excellent response time results. And lots more is coming.

The final piece of FUD we made sure was addressed was system fullness – last time we ran the test we didn’t fill up as much as we could have, which prompted the FUD-mongers to say that FAS systems need gigantic amounts of free space to perform. Let’s see what they’ll come up with this time 😉

On to the numbers!

As a refresher, you may want to read past SPC-1 posts here and here, and my performance primer here.

Important note: SPC-1 is a 100% block-based benchmark with its own I/O blend and, as such, the results from any vendor SPC-1 submission should not be compared to marketing IOPS numbers of all reads or metadata-heavy NAS benchmarks like SPEC SFS (which are far easier on systems than the 60% write blend of the SPC-1 workload). Indeed, the tested configuration might perform in the millions of “marketing” IOPS – but that’s decidedly not the point of this benchmark.

The SPC-1 Result links if you want the detail are here (summary) and here (full disclosure). In addition, here’s the link to the “Top 10 Performance” systems page so you can compare other submissions that are in the upper performance echelon (unfortunately, SPC-1 results are normally just alphabetically listed, making it time-consuming to compare systems unless you’re looking at the already sorted Top 10 list).

I recommend you look beyond the initial table in each submission showing the performance and $/SPC-1 IOPS and at least go to the price table to see the detail. The submissions calculate $/SPC-1 IOPS based on submitted price but not all vendors use discounted pricing. You may want to do your own price/performance calculations.

The things to look for in SPC-1 submissions

Typically you’re looking for the following things to make sense of an SPC-1 submission:

  • ART vs IOPS – many submissions will show high IOPS at huge ART, which would be rather useless when it comes to Flash storage
  • Sustainability – was performance even or are there constant huge spikes?
  • RAID level – most submissions use RAID10 for speed, what would happen with RAID6?
  • Application Utilization. This one is important yet glossed over. It signifies how much capacity the benchmark consumed vs the overall raw capacity of the system, before RAID, spares etc.

Let’s go over these one by one.


ART vs IOPS

Our ART was 1.23ms at 685,281.71 SPC-1 IOPS, and pretty flat over time during the test:



Sustainability

The SPC-1 rules state the minimum runtime should be 8 hours. We ran the test for 18 hours to observe if there would be variation in the performance. There was no significant variation:


RAID level

RAID-DP was used for all FAS8080EX testing. This is mathematically analogous in protection to RAID-6. Given that these systems are typically deployed in very large pooled configurations, we elected long ago to not recommend single parity RAID since it’s simply not safe enough. RAID-10 is fast and fine for smaller capacity SSD systems but, at scale, it gets too expensive for anything but a lab queen (a system that nobody in their right mind will ever buy but which benchmarks well).

Application Utilization

Our Application Utilization was a very high 61.92% – unheard of by other vendors posting SPC-1 results since they use RAID10 which, by definition, wastes half the capacity (plus spares and other overheads to worry about on top of that).


Some vendors using RAID10 will fill up the resulting space after RAID, spares etc. to a very high degree, and call out the “Protected Application Utilization” as being the key thing to focus on.

This could not be further from the truth – Application Utilization is the only metric that really shows how much of the total possible raw capacity the benchmark actually used and signifies how space-efficient the storage was.

Otherwise, someone could do quadruple mirroring of 100TB, fill up the resulting 25TB to 100%, and call that 100% efficient… when in fact it only consumed 25% :)
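Here is that quadruple-mirroring example in numbers, just to show why “Protected Application Utilization” can look perfect while Application Utilization exposes the waste:

```python
raw_tb = 100.0
protected_tb = raw_tb / 4   # quadruple mirroring leaves 25TB of protected space
asu_tb = protected_tb       # fill the protected space to 100%

protected_app_utilization = asu_tb / protected_tb   # 1.00 -> "100% efficient"
application_utilization = asu_tb / raw_tb           # 0.25 -> only 25% of raw capacity used

print(f"{protected_app_utilization:.0%} vs {application_utilization:.0%}")  # 100% vs 25%
```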

It is important to note there was no compression or deduplication enabled by any vendor since it is not allowed by the current version of the benchmark.

Compared to other vendors

I wanted to show a comparison between the Top Ten Performance results both in absolute terms and also normalized around 1ms ART.

Here are the Top Ten highest performing systems as of April 22, 2015, with vendor results links if you want to look at things in detail:

FYI, the HP XP 9500 and the Hitachi system above it in the list are the exact same system; HP resells the HDS array as its high-end offering.

I will show columns that explain the results of each vendor around 1ms. Why 1ms and not more or less? Because in the Top Ten SPC-1 performance list, most results show fairly low ART, but some have very high ART, and it’s useful to show performance at that lower ART load point, which is becoming the ART standard for All-Flash systems. 1ms seems to be a good point for multi-function SSD systems (vs simpler, smaller but more speed-optimized architectures like the NetApp EF560).

The way you determine the 1ms ART load point is by looking at the table that shows ART vs SPC-1 IOPS. Let’s pick IBM’s 780 since it has a very interesting curve so you learn what to look for.

From page 5 of the IBM Power Server 780 SPC-1 Executive Summary:


IBM’s submitted SPC-1 IOPS are high but at a huge ART number for an all-SSD solution (18.90ms). Not very useful for customers picking an all-SSD system. Even the next load point, with an average ART of 6.41ms, is high for an all-flash solution.

To compare this more accurately to the rest of the vendors with decent ART, you need to look at the table to find the closest load point around 1ms (which, in this case, is the 10% load point at 0.71ms – the next one up is much higher at 2.65ms).

You can do a similar exercise for the rest, it’s worth a look – I don’t want to paste all these tables and graphs since this post will get too big. But it’s interesting to see how SPC-1 IOPS vs ART are related and translate that to your business requirements for application latency.
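If you want to script the exercise, here is a small helper that picks the load point closest to 1ms ART from a full disclosure report’s ART-vs-IOPS table. The ART values below echo the IBM example above; the load percentages and IOPS column are made-up placeholders, not any vendor’s actual numbers.

```python
def load_point_near_1ms(table):
    """table: list of (load_percent, spc1_iops, art_ms) rows from a full disclosure report."""
    return min(table, key=lambda row: abs(row[2] - 1.0))

hypothetical_table = [
    (10, 52_000, 0.71),
    (50, 260_000, 2.65),
    (80, 416_000, 6.41),
    (100, 520_000, 18.90),
]

print(load_point_near_1ms(hypothetical_table))   # -> (10, 52000, 0.71)
```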

Here’s the table with the current Top Ten SPC-1 Performance results as of 4/22/2015. Click on it for a clearer picture, there’s a lot going on.


Key for the chart (the non-obvious parts anyway):

  • The “SPC-1 Load Level near 1ms” is the load point in each SPC-1 Report that corresponds to the SPC-1 IOPS achieved near 1ms. This is not how busy each array was (I see this misinterpreted all the time).
  • The “Total ASU Capacity” is the amount of capacity the test consumed.
  • The “Physical Storage Capacity” is the total amount of capacity in the array before RAID etc.
  • “Application Utilization” is ASU Capacity divided by Physical Storage Capacity.

What do the results show?

Predictably, all-flash systems trump disk-based and hybrid systems for performance and can offer very nice $/SPC-1 IOPS numbers. That is the major allure of flash – high performance density.

Some takeaways from the comparison:

  • Based on SPC-1 IOPS around 1ms Average Response Time load points, the FAS8080 EX shifts from 5th place to 3rd
  • The other vendors used RAID10 – NetApp used RAID-DP (similar to RAID6 in protection). What would happen to their results if they switched to RAID6 to provide a similar level of protection and efficiency?
  • Aside from the NetApp FAS result, the rest of the Top Ten Performance submissions offer vastly lower Application Utilization – about half! This means that NetApp delivers roughly 2x the usable capacity from the same raw capacity compared to the other submissions. And that’s before starting to count the possible storage efficiencies we can turn on, like dedupe and compression.

How does one pick a flash array?

It depends. What are you trying to do? Solve a tactical problem? Just need a lot of extra speed and far lower latency for some workloads? No need for the array to have a ton of functionality? A lot of the data management happens in the application? Need something cost-effective, simple yet reliable? Then an all-flash system like the NetApp EF560 is a solid answer, and it can still be front-ended by a Clustered Data ONTAP system to provide more functionality if the need arises in the future (we are firm believers in hardware reuse and investment protection – you see, some companies talk about Software Defined Storage, we do Software Defined Storage).

On the other hand, if you would prefer an Enterprise architecture that can serve as the cornerstone of your datacenter for almost any workload and protocol, offers rich data management functionality and tight application integration, insane scalability, non-disruptive everything and offers the most features (reliably) compared to any other platform – then the FAS line running Clustered Data ONTAP is the only possible answer.

Couple that with OnCommand Insight – the best multivendor fabric management tool on the planet – plus Workflow Automation, and we’ve got you covered.

In summary – the all-flash FAS8080EX gets a pretty amazing performance and efficiency SPC-1 result, especially given the extensive portfolio of functionality it offers. In my opinion, no competitor system offers the sheer functionality the FAS8080 does – not even close. Additionally, I believe that certain competitors have very questionable viability and/or tiny market penetration, making them a risky proposition for a high end system purchase.




Marketing fun: NetApp industry first of up to 13 million IOPS in a single rack

I’m seeing some really “out there” marketing lately, with every vendor (including us) trying to find an angle that sounds exciting while not being an outright lie (most of the time).

A competitor recently claimed an industry first of up to 1.7 million (undefined type) IOPS in a single rack.

The number (which admittedly sounds solid) got me thinking. Was the “industry first” the claim that nobody else could do up to 1.7 million IOPS in a single rack?

Would that statement also be true if someone else did up to 5 million IOPS in a rack?

I think that, in the world of marketing, it would – since the faster vendor doesn’t do up to 1.7 million IOPS in a rack, they do up to 5! It’s all about standing out in some way.

Well – let’s have some fun.

I can stuff 21x EF560 systems in a single rack.

Each of those systems can do 650,000 random 4K read IOPS at a stable 800 microseconds (since I like defining my performance stats), 600,000 random 8K read IOPS at under 1ms, and over 300,000 random 32KB read IOPS at under 1ms. Each system can also do 12GB/s of large-block sequential reads. This is sustained I/O straight from the SSDs and not RAM cache (the I/O from cache can of course be higher, but let’s not count that).

See here for the document showing some of the performance numbers.

Well – some simple math shows a standard 42U rack fully populated with EF560 will do the following:

  • 13,650,000 IOPS.
  • 252GB/s throughput.
  • Up to 548TB of usable SSD capacity using DDP protection (up to 639TB with RAID5).

Not half bad.
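For anyone who wants to check the arithmetic, here it is using the per-system figures quoted above:

```python
systems_per_rack = 21
iops_per_system = 650_000          # 4K random reads at ~800 microseconds, per system
throughput_gbs_per_system = 12     # large-block sequential reads, per system

print(f"{systems_per_rack * iops_per_system:,} IOPS per rack")          # 13,650,000
print(f"{systems_per_rack * throughput_gbs_per_system} GB/s per rack")  # 252
```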

Doesn’t quite roll off the tongue though – industry first of up to thirteen million six hundred and fifty thousand IOPS in a single rack. :)

I hope rounding down to 13 million is OK with everyone.


