The driver behind this has been to transform application performance not incrementally but by leaps and bounds: think orders-of-magnitude reductions in execution time. For instance, an organization doing Alzheimer's research had a key analytics operation that took 22 minutes per iteration (and they need to run many, many iterations). With a Memory-Driven system from HPE, it now takes 13 seconds. This lets the researchers reach useful results much faster, which, in turn, means a cure could materialize in a much shorter timeframe.
In this era of over-marketing and misinformation, it can be refreshing to clarify things for customers.
Allow me to be refreshing regarding NVMe 🙂
NVMe is simply a protocol. Just like SCSI is a protocol. NVMe is most assuredly not a media type. Yet storage vendors keep talking about "NVMe drives," and customers often assume those devices are all equivalent as long as "NVMe" is mentioned.
Alas, that’s not how things work…
Strictly speaking, there’s no such thing as an NVMe drive. Or, at the very least, calling something an “NVMe drive” isn’t enough to describe what that media is, and it’s especially not enough to describe how fast it may be.
Recently, HPE Nimble released new systems (the 20/40/60/80 line, replacing the 1000/3000/5000/7000/9000 ones).
I don’t cover press releases – you can find this elsewhere. I’d rather talk about the cool stuff.
It recently came to my attention that Dell is now advertising some kind of benchmark that shows one of their platforms can be faster than Nimble in some very specific test of their own concoction.
While I don’t doubt that’s possible (indeed, we could do it the other way around), it may be worthwhile investigating what’s prompting the attack.
I also want to highlight the various technically fishy aspects of the benchmark.
Just a quick post to address something many people either get wrong or just live with due to convenience.
In summary: please, let's stop using average I/O sizes to characterize storage system performance. An average is misleading and doesn't describe how the real world works. Using an average number is as bad as using the small-block, 100%-read numbers shown in vanity benchmarks. Neither is representative of real life.
Benchmarking with a single I/O size became common practice partly because of vanity benchmarks and partly to provide a level playing field for comparing multiple products. But even though the goal of comparing different systems is desirable, using a single I/O size is fundamentally flawed.
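To see why an average I/O size misleads, here's a minimal sketch with made-up numbers (purely illustrative, not from any real array): a workload that is mostly small 4 KiB reads with occasional large 1 MiB writes produces an "average" I/O size that no actual I/O in the workload ever uses.

```python
# Hypothetical bimodal workload: 90 small 4 KiB I/Os plus 10 large 1 MiB I/Os.
# These numbers are invented for illustration only.
io_sizes_kib = [4] * 90 + [1024] * 10

average = sum(io_sizes_kib) / len(io_sizes_kib)
print(f"Average I/O size: {average:.1f} KiB")  # prints 106.0 KiB

# The "average" size never actually occurs in the workload:
print(average in io_sizes_kib)  # prints False
```

A storage system tuned (or benchmarked) for a steady stream of ~106 KiB I/Os would behave very differently from one handling the real mix of tiny reads and huge writes, which is the whole point: the distribution matters, not the mean.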