In this post I will examine the effects of benchmarking with highly compressible data and why that’s potentially a bad idea.
Compression is not a new storage feature. Of the large storage vendors, at a minimum NetApp, EMC and IBM can do it (depending on the array). <EDIT (thanks to Matt Davis for reminding me): Some arrays also do zero detection and will not write zeroes to disk – think of it as a specialized form of compression that ONLY works on zeroes>
A lot of the newer storage vendors are now touting real-time compression for all data (often used instead of true deduplication – it’s just easier to implement compression).
Nothing wrong with real-time compression per se. However, here’s where I have a problem with the sales approach some vendors follow:
Real-time compression can provide grossly unrealistic benchmark results if the benchmarks used are highly compressible!
Compression can indeed provide a performance benefit for various data types (simply because less data has to be read from and written to disk), with the tradeoff being CPU. However, most normal data isn’t composed of all zeroes. Typically, compressing data will provide a decent benefit on average, but usually not a severalfold one.
So, what will typically happen is this: a vendor will drop off one of their storage appliances and provide the prospect with instructions on how to benchmark it with garden-variety benchmark apps. Nothing crazy.
Here’s the benchmark problem
A lot of the popular benchmarks just write zeroes, which of course are extremely easy for compression and zero-detection algorithms to deal with and get amazing efficiency out of, resulting in extremely high benchmark performance.
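To see just how lopsided this is, here’s a minimal sketch using Python’s zlib (a general-purpose compressor, not any particular array’s algorithm) comparing how well all-zero data compresses versus random data:

```python
import os
import zlib

size = 1024 * 1024  # 1 MiB test buffers

zero_data = b"\x00" * size      # what many benchmarks actually write
random_data = os.urandom(size)  # closer to incompressible real-world data

zero_ratio = size / len(zlib.compress(zero_data))
random_ratio = size / len(zlib.compress(random_data))

print(f"all-zero data compresses roughly {zero_ratio:.0f}:1")
print(f"random data compresses roughly {random_ratio:.2f}:1")
```

The all-zero buffer shrinks by several orders of magnitude, while the random buffer barely compresses at all (it can even grow slightly). Any array doing inline compression or zero detection gets the same kind of free ride from zero-filled benchmark data.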
I wanted to prove this out in an easy way that anyone can replicate with free tools. So I installed Fedora 18 with the btrfs filesystem and ran the bonnie++ benchmark with and without compression. The raw data with mount options etc. is here. An explanation of the various fields is here. Not everything in the bonnie++ benchmark is accelerated by btrfs compression, but a few things really are (sequential writes, rewrites and reads):
Notice the gigantic improvement (in write throughput especially) btrfs compression affords with all-zero data.
Now, does anyone think that, in general, write throughput will be 300MB/s for a decrepit 5400 RPM SATA disk? That would be impossible unless the user is constantly writing all-zero data, at which point the bottlenecks lie elsewhere.
Some easy ways for dealing with the compressible benchmark issue
So what can you do in order to ensure you get a more realistic test for your data? Here are some ideas:
- The best option is always to use your own applications, not synthetic benchmarks. This is of course more time-consuming and a bigger commitment. If you can’t do that, then…
- Create your own test data using, for example, dd and /dev/random as a source on some sort of Unix/Linux variant. Some instructions here. You can even use that data with Windows and IOmeter – just generate the random test file(s) in UNIX-land and move them to Windows.
- Another, far more realistic way: use your own data. In IOmeter, just copy one of your large DB files to iobw.tst and IOmeter will use your own data for the test… Just make sure it’s large enough that it doesn’t all fit in the array cache. If it isn’t large enough, you can probably make it so by concatenating multiple data files and random data together.
- Use a tool that generates incompressible data automatically, like the AS-SSD benchmark (though it doesn’t look like it can deal with multi-TB stuff – but worth a try).
- vdbench seems to be a very solid benchmark tool – with tunable compression and dedupe settings.
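As a quick sketch of the dd and concatenation ideas above (file names here are just placeholders, and I’m using /dev/urandom rather than /dev/random so the reads don’t block):

```shell
# Generate two chunks of incompressible test data (8 MiB each here;
# scale the count up well past your array's cache size for a real test).
dd if=/dev/urandom of=random1.bin bs=1M count=8 2>/dev/null
dd if=/dev/urandom of=random2.bin bs=1M count=8 2>/dev/null

# Concatenate the chunks into an IOmeter test file. In practice,
# mix in copies of your own large DB files instead of purely random data.
cat random1.bin random2.bin > iobw.tst

ls -l iobw.tst
```

The resulting iobw.tst can then be moved to the Windows box running IOmeter, which will happily use it as its test file instead of writing its own easily compressed pattern.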
In all cases, though, be aware of how you are testing. There is no magic.