I got the idea for this post from a Twitter thread. I thought such discussions were dead but clearly they’re not, and decided to shed some light on this, having dealt with backup at insane scale in a previous life.
It doesn’t matter what a feature is called – can you use it to recover? And, if the answer is yes, how quickly and under which scenarios? And what are the downsides?
Continue reading “Are Snapshots backups? And what do you need to protect against?”
Recently, HPE Nimble released new systems (the 20/40/60/80 line, replacing the 1000/3000/5000/7000/9000 ones).
I don’t cover press releases – you can find this elsewhere. I’d rather talk about the cool stuff.
Continue reading “New HPE Nimble Systems are Ready for SCM and NVMe”
It recently came to my attention that Dell is now advertising a benchmark purporting to show that one of their platforms can be faster than Nimble in a very specific test of their own concoction.
While I don’t doubt that’s possible (indeed, we could do it the other way around), it may be worthwhile investigating what’s prompting the attack.
I also wanted to point out the various technically fishy points of the benchmark.
Continue reading “When Terrified Vendors Attack: The Dell Edition”
I got the idea for this post after seeing certain vendors claim they were the first and only with certain data reduction technologies (I’m not talking about dedupe and compression). I thought – how come Nimble never made a big deal about this? After all, what those vendors were claiming didn’t seem to be very interesting compared to how Nimble systems efficiently write data… yet those vendors were acting as if they’d cured a particularly virulent disease.
Nimble systems naturally avoid any wasted space when writing. This is an inherent part of the design and not something that was added later.
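To make the general idea concrete, here is a toy sketch with made-up numbers (this is an illustration of the padding problem in general, not Nimble's actual on-disk format): if a system pads every compressed chunk out to a fixed block boundary, the padding adds up quickly, whereas packing chunks back-to-back wastes nothing.

```python
BLOCK = 4096  # hypothetical fixed on-disk allocation unit

def padded_bytes(chunk_sizes):
    """Space used when each chunk is rounded up to the next full block."""
    return sum(-(-s // BLOCK) * BLOCK for s in chunk_sizes)

def packed_bytes(chunk_sizes):
    """Space used when chunks are written contiguously with no padding."""
    return sum(chunk_sizes)

# Hypothetical compressed sizes of five 4 KiB host writes
chunks = [1900, 2300, 700, 3100, 1200]

print(padded_bytes(chunks))  # padded layout
print(packed_bytes(chunks))  # packed layout
```

With these made-up chunk sizes, the padded layout consumes more than twice the space of the packed one, and the gap only grows as compression gets better (smaller chunks mean proportionally more padding per block).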
Continue reading “How Nimble Storage Systems do Block Folding”
Just a quick post to address something many people either get wrong or simply live with out of convenience.
In summary: please, let's stop using average I/O sizes to characterize storage system performance. It's wrong and doesn't describe how the real world works. Using an average number is as bad as using the small-block, 100%-read numbers shown in vanity benchmarks. Neither is representative of real life.
Using a single I/O size for benchmarking became standard practice through vanity benchmarks, and as a way to provide a level playing field when comparing multiple products.
But, ultimately, even though the goal of comparing different systems is desirable, using a single I/O size is fundamentally flawed.
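To illustrate with deliberately made-up numbers: take a hypothetical workload that is 90% small 4 KiB I/Os and 10% large 256 KiB I/Os, a mix not unlike many real applications. The average works out to a size that the workload almost never actually issues.

```python
# Hypothetical mixed workload: 90 small I/Os plus 10 large I/Os (sizes in KiB)
sizes_kib = [4] * 90 + [256] * 10

avg = sum(sizes_kib) / len(sizes_kib)
print(avg)  # 29.2 -- an I/O size that appears nowhere in the actual workload
```

Benchmarking this system with a single ~29 KiB (or rounded-up 32 KiB) I/O size exercises a request size the real workload never generates, so the resulting numbers describe neither the small-block nor the large-block behavior of the system.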
Continue reading “Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking”