When Terrified Vendors Attack: The Dell Edition

It recently came to my attention that Dell is now advertising some kind of benchmark that shows one of their platforms can be faster than Nimble in some very specific test of their own concoction.

While I don’t doubt that’s possible (indeed, we could do it the other way around), it’s worth investigating what’s prompting the attack.

I also wanted to point out the various technically fishy aspects of the benchmark.

Continue reading “When Terrified Vendors Attack: The Dell Edition”

Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking

Just a quick post to address something many people either get wrong or simply live with out of convenience.

In summary: Please, let’s stop using average I/O sizes to characterize storage system performance. It’s wrong and doesn’t describe how the real world works. Using an average number is as bad as using the 100% small-block read numbers shown in vanity benchmarks. Neither is representative of real life.

Using a single I/O size became standard benchmarking practice partly to produce vanity numbers and partly to provide a level playing field for comparing multiple products.

But, ultimately, even though the goal of comparing different systems is desirable, using a single I/O size is fundamentally flawed.
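
To make the flaw concrete, here’s a minimal Python sketch using a purely hypothetical mix (90% of I/Os at 4 KiB, 10% at 256 KiB; both figures are assumptions for illustration, not measured data). The “average” works out to about 29 KiB, a block size the workload never actually issues:

```python
# Illustrative only: a hypothetical I/O mix, not measured data.
# Shows why a single "average" block size misrepresents a mixed workload.

# (I/O size in KiB, fraction of total I/O operations) -- assumed mix
mix = [(4, 0.90), (256, 0.10)]

# Weighted-average I/O size: 0.9 * 4 + 0.1 * 256 = 29.2 KiB
avg_io_size = sum(size * frac for size, frac in mix)
print(f"Average I/O size: {avg_io_size:.1f} KiB")

# Share of operations vs. share of bytes for each I/O size
for size, frac in mix:
    byte_share = (size * frac) / avg_io_size
    print(f"{size:>4} KiB I/Os: {frac:.0%} of operations, {byte_share:.0%} of bytes")
```

Running this shows that the 4 KiB I/Os are 90% of operations but only about 12% of bytes, while the 256 KiB I/Os are 10% of operations yet about 88% of bytes. A benchmark run at a single ~29 KiB size exercises neither the IOPS-bound small-I/O path nor the bandwidth-bound large-I/O path, which is the heart of the flaw.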

Continue reading “Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking”

Nimble Cloud Volumes: Removing Risk From Cloud Storage

On February 27th, 2017, Nimble Storage announced Nimble Cloud Volumes (NCV), marking Nimble’s entry into the cloud space.

For those short on time: Nimble Cloud Volumes is block storage as a service; it works with compute from AWS & Azure and avoids the significant block storage drawbacks of those two providers. In essence, it is enterprise storage as a service, at a competitive price, while retaining all the cloud conveniences.
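
As a purely illustrative sketch of what “block storage as a service consumed by cloud compute” can look like in practice: the excerpt doesn’t describe NCV’s actual attach mechanism, so the snippet below assumes the volume is presented over plain iSCSI (a common way to expose external block storage to cloud instances), with a hypothetical portal address and target name. It uses open-iscsi’s standard iscsiadm commands:

```python
# Purely illustrative: the attach mechanism, portal address, and target
# IQN below are hypothetical placeholders, not NCV specifics.
import subprocess

PORTAL = "203.0.113.10:3260"              # hypothetical service endpoint
TARGET = "iqn.2017-02.com.example:vol0"   # hypothetical volume IQN

def attach_block_volume(portal: str, target: str) -> None:
    """Discover and log in to an iSCSI target using open-iscsi's iscsiadm."""
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True,
    )
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
        check=True,
    )
    # The volume now appears as an ordinary local block device (e.g.
    # /dev/sdX) on the AWS or Azure instance, ready to format and mount.

if __name__ == "__main__":
    attach_block_volume(PORTAL, TARGET)
```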

Continue reading “Nimble Cloud Volumes: Removing Risk From Cloud Storage”

Progress Needs Trailblazers

I got the idea for this stream-of-consciousness (and possibly too obvious) post after reading some comments regarding new ways to do high-speed I/O. Not something boring like faster media or protocols, but rather far more exotic approaches that require drastically rewriting applications to get the maximum benefit of radically different architectures.

The comments, in essence, stated that such advancements would be an utter waste of time since 99% of customers have absolutely no need for such exotica, but rather just need cheap, easy, reliable stuff, and can barely afford to buy infrastructure to begin with, let alone custom-code applications to wring every last microsecond of latency out of their gear.

Continue reading “Progress Needs Trailblazers”