I got the idea for this post after seeing certain vendors claim to be the first and only ones with certain data reduction technologies (I’m not talking about dedupe and compression). I thought – how come Nimble never made a big deal about this? After all, what those vendors were claiming didn’t seem very interesting compared to how Nimble systems efficiently write data… yet those vendors were acting as if they’d cured a particularly virulent disease.
Nimble systems naturally avoid any wasted space when writing. This is an inherent part of the design and not something that was added later.
Continue reading “How Nimble Storage Systems do Block Folding”
In modern storage devices (especially All Flash Arrays), extensive data reduction techniques are commonplace and expected by customers.
This has, unavoidably, led to various marketing schemes that aim to make certain systems seem more appealing than the rest. Or at least not less appealing…
I will attempt to explain what customers should be looking for when trying to decipher capacity claims from a manufacturer.
In a nutshell – and for the ADD-afflicted – the most important number you should be looking for is the Effective Capacity Ratio, which is simply: (Effective Capacity)/(Raw Capacity). Ignore the commonly quoted but far less useful Data Reduction Ratio, which is: (Effective Capacity)/(Usable Capacity).
Ultimately, as a customer you shouldn’t even care about ratios. Only about the true effective capacity you can safely use.
Important: Any time you see a vendor quoting a ratio, it is always the data reduction ratio. So it is crucial that you don’t calculate effective capacity by multiplying that ratio by raw capacity – multiply it by usable capacity instead. I had to edit the article to add this, since I recently spoke to a customer who was making this grave mistake (various vendors had given them only the raw capacity and the data reduction ratio!)
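To make the arithmetic concrete, here’s a minimal sketch with made-up numbers (the capacities and the 5:1 ratio are purely illustrative – real figures vary by array and configuration):

```python
# Hypothetical example figures, not from any real array.
raw_tb = 100.0             # raw capacity
usable_tb = 70.0           # usable capacity after RAID, spares, system overhead
data_reduction_ratio = 5.0 # vendor-quoted ratio (effective / usable)

# Correct: effective capacity = data reduction ratio * usable capacity
effective_tb = data_reduction_ratio * usable_tb        # 350.0 TB

# Wrong: multiplying the quoted ratio by raw capacity overstates the system
wrong_effective_tb = data_reduction_ratio * raw_tb     # 500.0 TB

# The number actually worth comparing: effective capacity ratio
effective_capacity_ratio = effective_tb / raw_tb       # 3.5

print(effective_tb, wrong_effective_tb, effective_capacity_ratio)
```

Note how the same array quoted as “5:1” only delivers 3.5x its raw capacity once the usable-vs-raw overhead is accounted for – which is exactly why the effective capacity ratio is the more honest number.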
Continue reading “The Importance of the Effective Capacity Ratio in Modern Storage Systems”
In this post I will try to help you objectively calculate the cost of space-efficient storage solutions. There’s just too much misinformation out there, and it’s getting irritating, since certain vendors aren’t exactly honest about how they do certain calculations…
Continue reading “Calculating the true cost of space efficient Flash solutions”
In this post I will examine the effects of benchmarking highly compressible data and why that’s potentially a bad idea.
Compression is not a new storage feature. Of the large storage vendors, at a minimum HPE, NetApp, EMC and IBM can do it (depending on the array). <EDIT (thanks to Matt Davis for reminding me): Some arrays also do zero detection and will not write zeroes to disk – think of it as a specialized form of compression that ONLY works on zeroes>
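Zero detection can be thought of as a check on each incoming block before it hits disk: if the block is entirely zeroes, the array records that fact in metadata instead of physically writing it. Here’s a minimal sketch of the idea (not any vendor’s actual implementation – the 4 KiB block size and the in-memory “backing store” are just illustrative assumptions):

```python
BLOCK_SIZE = 4096                 # assumed 4 KiB block size
ZERO_BLOCK = bytes(BLOCK_SIZE)    # a block that is all zeroes

def write_block(block: bytes, backing: list) -> None:
    """Skip all-zero blocks; store a marker instead of the data itself."""
    if block == ZERO_BLOCK:
        backing.append(None)      # metadata-only: no physical space consumed
    else:
        backing.append(block)     # normal write path

store = []
write_block(bytes(BLOCK_SIZE), store)               # all zeroes: nothing written
write_block(b"data" + bytes(BLOCK_SIZE - 4), store) # real data: written as-is
```

This is also why benchmarking with zero-filled (or otherwise highly compressible) data is misleading: the array barely has to write anything, so the results say little about real workloads.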
Continue reading “Beware of benchmarking storage that does inline compression”