The Importance of the Effective Capacity Ratio in Modern Storage Systems

In modern storage devices (especially All Flash Arrays), extensive data reduction techniques are commonplace and expected by customers.

This has, unavoidably, led to various marketing schemes that aim to make certain systems seem more appealing than the rest. Or at least not less appealing…

I will attempt to explain what customers should be looking for when trying to decipher capacity claims from a manufacturer.

In a nutshell – and for the ADD-afflicted – the most important number you should be looking for is the Effective Capacity Ratio, which is simply: (Effective Capacity)/(Raw Capacity). Ignore the more common but far less useful Data Reduction Ratio, which is: (Effective Capacity)/(Usable Capacity).

Read on for more detail…

The problem

Most modern (and some not so modern) storage vendors claim a similar Data Reduction Ratio for their devices. Numbers like 5:1 seem commonplace these days. Which, to anyone with a modicum of common sense, simply means “I can store five times more stuff in the same space”. Unfortunately, the same 5:1 ratio does not mean the same end result for all vendors…

Why this is a problem

Customers simply want to get a good deal for their money. The reality is that the exact same 5:1 Data Reduction Ratio may mean very different things between storage devices.

For instance: Not all vendors count space savings using the same math. Is the virtual size of snapshots counted toward the final savings ratio? (for example, if I take 5 snaps of a 1TB volume one after another, does that show up as 6:1 savings?)

How about Thin Provisioning? Is that included? Such math can wildly alter the overall efficiency numbers.

Here’s a clear example of why counting thin provisioning in the overall ratio is probably misleading, and best used only for educational purposes… Remember that overall savings ratios are multiplicative. For example, 2:1 compression and 2:1 deduplication mean 4:1 overall savings:

[Figure: “Thin Craziness” – overall savings ratio when thin provisioning is counted in the multiplication]

Does anyone really believe that such a system is actually providing 1000:1 savings? Or even 10:1? Yet that’s what many vendors are counting towards savings ratios, without separating the thin provisioning savings from the overall savings number.
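
To make the math concrete with purely hypothetical numbers: say a system holds 1TB of actual data that deduplicates 2:1 and compresses 2:1, so it occupies 250GB of physical space – a genuine 4:1 reduction. Park that data in a 250TB thinly provisioned volume and the “thin savings” are 250:1; because the ratios multiply, the GUI can happily report 4 x 250 = 1000:1 overall efficiency, even though nothing about how the data is physically stored has changed.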

However, there is another dimension, and it has to do with how efficiently the raw capacity is actually utilized. It will be one of my many Captain Obvious moments for some of you, but I’ve seen enough people get confused, so it’s worth explaining.

The Effective Capacity Ratio

Different storage systems have different ways of utilizing their raw capacity. For example, a mirrored system can never have better than a 50% Usable:Raw ratio. By definition, it’s mathematically impossible since 2 copies are needed. That’s not even counting spares and other possible overheads.

Systems that do triple mirroring can’t do better than a theoretical 33.3% Usable:Raw, and so on.

Some definitions are in order:

  • Usable Capacity: How much data I can store in a system after overheads such as RAID, sparing etc., but before data reduction techniques.
  • Raw Capacity: Add up the capacity of all the storage media in the system. Usually quoted in Base 10 units (TB/GB), not Base 2 (TiB/GiB).
  • Effective Capacity: How much data I can store in a system after data reduction techniques like Deduplication and Compression, but not counting Thin Provisioning.
  • Data Reduction Ratio: (Effective Capacity)/(Usable Capacity)
  • Effective Capacity Ratio: (Effective Capacity)/(Raw Capacity)
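
To keep the two ratios straight, here is a minimal sketch (plain Python, with made-up capacities purely for illustration) of the arithmetic:

```python
# Hypothetical capacities, all in TiB.
raw_capacity = 100.0        # sum of all media in the system
usable_capacity = 70.0      # left over after RAID, sparing and system overheads
effective_capacity = 350.0  # what fits after dedupe/compression (no thin provisioning)

data_reduction_ratio = effective_capacity / usable_capacity    # 5.0  -> "5:1"
effective_capacity_ratio = effective_capacity / raw_capacity   # 3.5  -> "3.5:1"

print(f"Data Reduction Ratio:     {data_reduction_ratio:.1f}:1")
print(f"Effective Capacity Ratio: {effective_capacity_ratio:.1f}:1")
```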

Who is More Efficient?

If every vendor is claiming 5:1 average savings, who is truly more efficient? A Reductio ad Absurdum example makes it pretty clear:

  1. A vendor that can do a Data Reduction Ratio of 5:1 but has a Usable:Raw ratio of 10%, or…
  2. A vendor that can do a Data Reduction Ratio of 5:1 but has a Usable:Raw ratio of 70%?

Let’s put some numbers in a table. They roughly correspond to some existing storage vendors today (there may be some variation depending on whether the numbers for each line are TB vs TiB – everyone shows numbers differently – but the overall point remains the same):

[Table: “Effective Cap Ratio” – same Raw Capacity and same Data Reduction Ratio per vendor, very different Effective Capacity]

As you can see, the same Data Reduction Ratio, on the same amount of Raw Capacity, can have wildly different Effective Capacity results, depending on the system.
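
To put rough (and, again, hypothetical) numbers on it: with 100TB raw in both cases, a vendor with 10% Usable:Raw and a 5:1 Data Reduction Ratio ends up with 10TB usable x 5 = 50TB effective, an Effective Capacity Ratio of just 0.5:1. A vendor with 70% Usable:Raw and the same 5:1 ratio ends up with 70TB x 5 = 350TB effective, an Effective Capacity Ratio of 3.5:1 – a 7x difference from identical raw capacity and identical marketing claims.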

Clearly, a truly efficient system is one that can both:

  • Provide a high Usable:Raw ratio and
  • Provide a high Data Reduction Ratio (that does not include fluff like Thin Provisioning)

The Business Benefits of a High Effective Capacity Ratio

There are multiple business reasons why chasing a high Effective Capacity Ratio is important:

  • High rack density. Some vendors are eight times denser than others (ironically, the ones claiming “white box economics” are the least dense and most expensive). This can lead to significant power/cooling and $/rack savings
  • Lower CapEx – if a vendor needs 2x the Raw Capacity vs another, guess who’s paying for the difference?
  • Less complexity – denser devices mean fewer devices, fewer cables, less switching

What you can do as a Customer

Level the playing field. It’s actually pretty easy:

  • Insist on seeing all capacity numbers expressed as TiB from every vendor – and if they don’t know what that means, run… (or at least subtract 9% from any capacity number you see and you’ll be safer)
  • Insist on seeing the Data Reduction Ratio without including Thin Provisioning or Snapshots. If they can’t break out the savings by category, run
  • Insist on seeing the Usable:Raw percentage for various configurations. Is it better in larger configs vs smaller? Is it following all best practices? What are the best practices for space consumption? If they tell you to ignore the man behind the curtain, run… (there is a theme developing)
  • Do your own Effective Capacity Ratio calculation and assign a $/Effective TiB to each vendor
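
To make that last bullet concrete, here’s a rough sketch (plain Python) of the calculation. The vendor quotes below are entirely hypothetical – substitute the raw capacities, Usable:Raw percentages, measured reduction ratios and prices you are actually given:

```python
# Hypothetical vendor quotes: raw capacity (TiB), Usable:Raw fraction,
# measured data reduction ratio (no thin provisioning), and price ($).
vendors = {
    "Vendor A": {"raw_tib": 100.0, "usable_to_raw": 0.10, "reduction": 5.0, "price": 300_000},
    "Vendor B": {"raw_tib": 100.0, "usable_to_raw": 0.70, "reduction": 5.0, "price": 400_000},
}

for name, v in vendors.items():
    effective_tib = v["raw_tib"] * v["usable_to_raw"] * v["reduction"]
    ecr = effective_tib / v["raw_tib"]                    # Effective Capacity Ratio
    cost_per_effective_tib = v["price"] / effective_tib
    print(f"{name}: ECR {ecr:.1f}:1, ${cost_per_effective_tib:,.0f} per effective TiB")
```

Note how the cheaper sticker price can easily turn out to be the more expensive system per effective TiB.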

D


Calculating the true cost of space-efficient Flash solutions

In this post I will try to help you understand how to objectively calculate the cost of space-efficient storage solutions – there’s just too much misinformation out there, and it’s getting irritating since certain vendors aren’t exactly honest about how they do certain calculations…

A brief history lesson:

The faster a storage device, the smaller and more expensive it usually is. Flash was initially insanely expensive relative to spinning disk, so it was used in small amounts, typically as a tier and/or cache augmentation.

And so it came to be that flash-based storage systems started implementing some of the more interesting space efficiency techniques around. Interesting because it’s algorithmically easy to reduce data dramatically, but hard to do under high load while maintaining impressive IOPS and low latency.

Space efficiencies plus lower flash media costs bring us to today’s ability to deploy all-flash storage in increasingly cost-effective configurations.

But how does one figure out the best deal?

There are some factors I won’t get into in this article. Company size and viability, support staff strength, maturity of the code, automation, overall features etc. all may play a huge role depending on the environment and requirements (and, indeed, will often eliminate several of the players from further consideration). However, I want to focus on the basics.

Recommended metric: Cost per effective TB

It’s easy to get lost in the hype. One company says they reduce by 3:1, another might say 5:1, yet another 10:1, etc. The high efficiency ratios seem to be attractive, right?

Well – you’re not paying for a high efficiency ratio. What you are paying for is usable capacity.

If all solutions cost the same, the systems with high efficiency ratios would win this battle every day of the week and twice on Sundays.

However, solutions don’t all cost the same. Ask your vendor what the projected effective capacity will be for each specific configuration, and the Cost/Effective TB is a trivial calculation.

But there’s one more thing to do in order for the calculation to be correct:

Insist on calculating the efficiency ratio yourself.

Most storage systems will show a nice picture in the GUI with an overall efficiency ratio. Looks nice and easy. Well – the devil is in the details.

If a vendor is upfront about how they measure efficiency, your numbers might make sense.

This is where you trust but verify. Some pointers:

  • Take note of the initial usable space before putting anything on the system.
  • If you store a 1TB DB and do nothing else to the data, what’s the efficiency?
  • Calculate the ratio yourself! Divide the amount of capacity the data is taking in the OS by the amount it’s taking on the storage (see the sketch after this list).
  • Does the number make sense given the size of the data you just put on the system and how much usable space is left now?
  • If you take 10 snapshots of the data, what’s the efficiency? How about if you delete the snaps, does the efficiency change?
  • If you take a clone of the DB, what’s the efficiency?
  • If you delete the clone you just took, what’s the efficiency?
  • Create a large LUN (10TB for example) and only store 1TB of data in it. What’s the efficiency? Is thin provisioning being counted as data reduction?
  • Does this all add up if you do the math manually instead of the GUI doing it for you?
  • Does it all meet your expectations? For example, if a vendor is claiming 5:1 reduction, can you actually store 5 different DBs in the space of one? Or do they really mean something else? That’s a pretty easy test…
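
As a minimal sketch of the “calculate the ratio yourself” check above (plain Python; the capacity figures are placeholders for whatever your host and the array GUI/CLI actually report):

```python
# Placeholders: substitute what the host reports vs. what the array reports.
logical_bytes_written = 1 * 2**40      # e.g. a 1TiB database as seen by the OS
array_used_before_tib = 2.0            # the array's "used" capacity before the copy
array_used_after_tib = 2.3             # the array's "used" capacity after the copy

physical_consumed_tib = array_used_after_tib - array_used_before_tib
logical_tib = logical_bytes_written / 2**40

efficiency_ratio = logical_tib / physical_consumed_tib
print(f"Measured efficiency: {efficiency_ratio:.2f}:1")   # ~3.33:1 in this made-up case
```

If the number you calculate this way is nowhere near what the GUI shows, ask the vendor to explain exactly what the GUI is counting.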

You see, most vendors count savings a bit differently. In the examples above, that 1TB DB, if stored in a 10TB LUN and cloned 10 times, will probably result in a very high efficiency number. It doesn’t mean, however, that 10 different DBs of the same size would show anywhere near the same efficiency ratio.

If you don’t have time to do a test in-house, have the vendor prove their claims and show how they do their math in their labs while you watch. You will typically find that each data type has a wildly different space efficiency ratio.

The bottom line

It’s pretty easy. Figure out the efficiency ratio on your own based on how you expect to use the system, then plug that ratio into the Price/Effective TB formula like so:

Real Cost per Effective TB = Price / (Usable TB * Real Efficiency Ratio expressed as a multiplier, e.g. 2.5 for a 2.5:1 ratio)
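
For example, with hypothetical numbers: a $500,000 quote for 100 usable TB at a reduction ratio you measured yourself at 2.5:1 works out to $500,000 / (100 * 2.5) = $2,000 per effective TB. A seemingly cheaper $350,000 quote for only 50 usable TB at the same 2.5:1 is actually $350,000 / (50 * 2.5) = $2,800 per effective TB.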

And, finally, a word on capacity guarantees:

Some vendors will guarantee capacity efficiencies. Always, always demand to see the fine print. If a vendor insists they will guarantee x:1 efficiency, have them sign an official legally binding agreement that has the backing of the vendor’s HQ (and isn’t some desperate local sales office ploy that might not be worth the paper it’s printed on).

Insist the guarantee states you will get that claimed efficiency no matter what you’re storing on the box.

Notice how quickly the small print will come 🙂

D


Beware of benchmarking storage that does inline compression

In this post I will examine the effects of benchmarking highly compressible data and why that’s potentially a bad idea.

Compression is not a new storage feature. Of the large storage vendors, at a minimum NetApp, EMC and IBM can do it (depending on the array). <EDIT (thanks to Matt Davis for reminding me): Some arrays also do zero detection and will not write zeroes to disk – think of it as a specialized form of compression that ONLY works on zeroes>

A lot of the newer storage vendors are now touting real-time compression for all data (often used instead of true deduplication – it’s just easier to implement compression).

Nothing wrong with real-time compression. However, and here’s where I have a problem with some of the sales approaches some vendors follow:

Real-time compression can provide grossly unrealistic benchmark results if the benchmarks used are highly compressible!

Compression can indeed provide a performance benefit for various data types (simply because less data has to be read from and written to disk), with the tradeoff being CPU. However, most normal data isn’t composed of all zeroes. Typically, compressing data will provide a decent benefit on average, but usually not a several-fold one.
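
It’s easy to see the gap for yourself. Here’s a quick sketch (plain Python, standard library only) comparing how well all-zero data compresses versus incompressible random data – the kind of difference that ends up inflating benchmark numbers:

```python
import os
import zlib

size = 1_000_000                      # 1MB test buffers
zero_data = bytes(size)               # what many benchmark tools actually write
random_data = os.urandom(size)        # closer to already-compressed/encrypted real data

for label, data in (("all zeroes", zero_data), ("random", random_data)):
    compressed = zlib.compress(data, 6)
    ratio = len(data) / len(compressed)
    print(f"{label:>10}: {len(compressed):>9} bytes after compression, ~{ratio:,.0f}:1")
```

The all-zero buffer shrinks by roughly three orders of magnitude; the random buffer doesn’t shrink at all.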

So, what will typically happen is this: a vendor will drop off one of their storage appliances and provide the prospect with some instructions on how to benchmark it with garden-variety benchmark apps. Nothing crazy.

Here’s the benchmark problem

A lot of the popular benchmarks just write zeroes, which are of course extremely easy for compression and zero-detect algorithms to deal with and get amazing efficiency out of, resulting in unrealistically high benchmark performance.

I wanted to prove this out in an easy way that anyone can replicate with free tools. So I installed Fedora 18 with the btrfs filesystem and ran the bonnie++ benchmark with and without compression. The raw data with mount options etc. is here. An explanation of the various fields here. Not everything is accelerated by btrfs compression in the bonnie++ benchmark, but a few things really are (sequential writes, rewrites and reads):

[Chart: “Bonniebtrfs” – bonnie++ sequential write/rewrite/read throughput on btrfs, with vs without compression]

Notice the gigantic improvement (in write throughput especially) btrfs compression affords with all-zero data.

Now, does anyone think that, in general, the write throughput will be 300MB/s for a decrepit 5400 RPM SATA disk? That would be impossible unless the user is constantly writing all-zero data, at which point the bottlenecks lie elsewhere.

Some easy ways for dealing with the compressible benchmark issue

So what can you do in order to ensure you get a more realistic test for your data? Here are some ideas:

  • Always best is to use your own applications and not benchmarks. This is of course more time-consuming and a bigger commitment. If you can’t do that, then…
  • Create your own test data using, for example, dd and /dev/random as a source in some sort of Unix/Linux variant. Some instructions here. You can even move that data to use with Windows and IOmeter – just generate the random test data in UNIX-land and move the file(s) to Windows (see the sketch after this list).
  • Another, far more realistic way: Use your own data. In IOmeter, you just copy one of your large DB files to iobw.tst and IOmeter will use your own data to test… Just make sure it’s large enough and doesn’t all fit in array cache. If not large enough, you could probably make it large enough by concatenating multiple data files and random data together.
  • Use a tool that generates incompressible data automatically, like the AS-SSD benchmark (though it doesn’t look like it can deal with multi-TB stuff – but worth a try).
  • vdbench seems to be a very solid benchmark tool – with tunable compression and dedupe settings.
  • And don’t forget the obvious but often forgotten rule: never test with a data set that fits entirely in RAM!
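
Along the lines of the second bullet above, here’s a minimal sketch (plain Python, so it works the same on Linux or Windows) for generating an incompressible test file. The file name and sizes are just placeholders – size it well beyond the array cache, as noted above:

```python
import os

def write_incompressible_file(path: str, size_gib: int, chunk_mib: int = 64) -> None:
    """Fill a file with cryptographically random (hence incompressible) data."""
    chunk = chunk_mib * 2**20
    remaining = size_gib * 2**30
    with open(path, "wb") as f:
        while remaining > 0:
            f.write(os.urandom(min(chunk, remaining)))
            remaining -= chunk

# Example: a 100GiB file named so IOmeter will pick it up as its test file.
write_incompressible_file("iobw.tst", size_gib=100)
```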

 

In all cases though, be aware of how you are testing. There is no magic 🙂

D

 
