The Loss of Important Knowledge and Acumen Through Perceived Commoditization

I posit that we now have a whole new class of consumer that is completely oblivious to certain hitherto fundamental concepts, and this can lead to poor business decisions and sub-optimal execution and results.

I got the idea after a discussion with an ex-colleague (who now works for a cloud vendor) in which he proudly proclaimed that infrastructure is unimportant and uninteresting.

I’ll start with the general case and then shift to IT. The general aspect of this problem is interesting in its own right, since it’s lowering quality in all sorts of fields.

And never forget: Just because something is widely and easily available doesn’t mean it’s better. It simply means that more people have access to it.

Continue reading “The Loss of Important Knowledge and Acumen Through Perceived Commoditization”

How Nimble Storage Systems do Block Folding

I got the idea for this post after seeing certain vendors claim to be the first and only ones to offer certain data reduction technologies (I’m not talking about dedupe and compression). I thought: how come Nimble never made a big deal about this? After all, what those vendors were claiming didn’t seem very interesting compared to how Nimble systems efficiently write data… yet those vendors were acting as if they’d cured a particularly virulent disease.

Executive Summary

Nimble systems naturally avoid any wasted space when writing. This is an inherent part of the design and not something that was added later.
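To make “avoids any wasted space” concrete, here is a minimal sketch of the general idea behind a coalescing, log-structured write path. This is not Nimble’s actual code; the stripe size, the use of zlib, and the buffer logic are all assumptions made purely for illustration. The point is the invariant: variable-size compressed blocks are packed back to back, so alignment padding is never written.

    import os
    import zlib

    STRIPE_SIZE = 1024 * 1024  # hypothetical full-stripe size (1 MiB)

    class CoalescingWriteBuffer:
        """Toy model of a log-structured write path: variable-size
        compressed blocks are packed back to back, so nothing is
        padded and the media only ever sees full-stripe writes."""

        def __init__(self):
            self.buffer = bytearray()
            self.stripes_written = 0

        def write_block(self, data):
            # Compress each incoming block; the result has an arbitrary
            # size rather than a fixed-block multiple, yet it is simply
            # appended with no alignment padding.
            self.buffer += zlib.compress(data)
            # Flush whole stripes as soon as enough bytes accumulate.
            while len(self.buffer) >= STRIPE_SIZE:
                self._flush_stripe(bytes(self.buffer[:STRIPE_SIZE]))
                del self.buffer[:STRIPE_SIZE]

        def _flush_stripe(self, stripe):
            # A real system would write this sequentially across a RAID
            # stripe; here we just count the full-stripe writes.
            self.stripes_written += 1

    buf = CoalescingWriteBuffer()
    for _ in range(4096):
        buf.write_block(os.urandom(4096))  # 4 KiB logical blocks
    print("full stripes written:", buf.stripes_written)

However blocks compress, they pack tightly, so partial-stripe writes and padding simply never occur in this design.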

Continue reading “How Nimble Storage Systems do Block Folding”

Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking

Just a quick post to address something many people either get wrong or simply put up with for convenience.

In summary: please, let’s stop using average I/O sizes to characterize storage system performance. It’s wrong and doesn’t describe how the real world works. Using an average number is as bad as using the small-block, 100%-read numbers shown in vanity benchmarks. Neither is representative of real life.

Using a single I/O size became common benchmarking practice partly to generate vanity numbers and partly to provide a level playing field for comparing multiple products.

But even though comparing different systems on equal terms is a worthwhile goal, using a single I/O size is fundamentally flawed: real applications issue a mix of I/O sizes, and no single size exercises a system the way that mix does.
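To make the flaw concrete, here is a toy calculation (every number in it is invented for illustration and models no real array). Even under a deliberately simple linear cost model, a benchmark run at the average I/O size reproduces the mean latency of the real mix exactly, yet it hides the latency tail and tests an I/O size the application never issues:

    # Toy per-I/O latency model; every number here is made up purely
    # for illustration and models no real array.
    FIXED_US = 200        # assumed per-I/O overhead, microseconds
    US_PER_KIB = 0.95     # assumed transfer cost per KiB

    def latency_us(size_kib):
        return FIXED_US + size_kib * US_PER_KIB

    # "Real" workload: 80% 4 KiB I/Os, 20% 256 KiB I/Os.
    mix = [(0.80, 4), (0.20, 256)]
    avg_size = sum(f * s for f, s in mix)            # 54.4 KiB

    mean_mix = sum(f * latency_us(s) for f, s in mix)
    p99_mix = latency_us(256)       # 20% of I/Os take the slow path
    uniform = latency_us(avg_size)  # the "average block size" test

    print("average I/O size: %.1f KiB" % avg_size)
    print("real mix     -> mean %.0f us, p99 %.0f us" % (mean_mix, p99_mix))
    print("uniform test -> mean %.0f us, p99 %.0f us" % (uniform, uniform))

Both runs report the same mean (about 252 µs in this toy), but the real mix’s 99th percentile is about 443 µs, nearly double what the uniform test shows, and the uniform test spends its time on 54.4 KiB I/Os that the application never actually performs. The tail is what users feel, and the average-size test cannot see it.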

Continue reading “Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking”

InfoSight: How Nimble Customers Benefit from AI and Big Data Predictive Analytics

In past articles I have covered how Nimble Storage lowers customer risk with technologies internal to the array.

Now, it’s time to write a more in-depth article about the immense customer benefits of InfoSight, Nimble’s Predictive Analytics platform.
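Purely as a hypothetical sketch (InfoSight’s actual models aren’t public, and the sample data below is made up), here is the simplest kind of prediction that fleet telemetry makes possible: projecting when a volume will run out of space from its growth trend.

    # Purely hypothetical sketch, NOT InfoSight code: fit a linear
    # trend to daily capacity telemetry and project when the volume
    # fills up, so the customer can be warned ahead of time.
    from statistics import mean

    def days_until_full(daily_used_gib, capacity_gib):
        """Least-squares linear fit over daily usage samples,
        extrapolated to the day usage crosses capacity."""
        n = len(daily_used_gib)
        xs = range(n)
        x_bar, y_bar = mean(xs), mean(daily_used_gib)
        num = sum((x - x_bar) * (y - y_bar)
                  for x, y in zip(xs, daily_used_gib))
        den = sum((x - x_bar) ** 2 for x in xs)
        slope = num / den              # GiB of growth per day
        if slope <= 0:
            return None                # usage isn't growing
        fitted_today = y_bar + slope * (n - 1 - x_bar)
        return (capacity_gib - fitted_today) / slope

    samples = [410, 418, 431, 440, 452, 466, 471]   # made-up GiB/day
    print("projected days until full: %.0f" % days_until_full(samples, 600))

The real platform obviously goes far beyond a straight-line fit, but even this toy shows the shift from reacting to a full volume to being warned weeks in advance.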

Continue reading “InfoSight: How Nimble Customers Benefit from AI and Big Data Predictive Analytics”

Progress Needs Trailblazers

I got the idea for this stream-of-consciousness (and possibly too obvious) post after reading some comments regarding new ways to do high-speed I/O. Not something boring like faster media or protocols, but rather far more exotic approaches that require drastically rewriting applications to get the maximum benefit of radically different architectures.

The comments, in essence, stated that such advancements would be an utter waste of time: 99% of customers have absolutely no need for such exotica; they just need cheap, easy, reliable gear, can barely afford to buy infrastructure to begin with, and certainly can’t custom-code applications to wring every last microsecond of latency out of the hardware.
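For a sense of what “rewriting the application” can mean, here is a deliberately tame, hypothetical sketch (real exotic-I/O rewrites are far more invasive, and none of this is any vendor’s actual code): swapping the classic syscall-per-read pattern for a memory mapping, the sort of restructuring that radically different, byte-addressable architectures reward.

    # A deliberately tame, hypothetical illustration (not anyone's
    # product code): replace the syscall-per-read pattern with a
    # memory mapping, the kind of restructuring that byte-addressable
    # media rewards because no per-access syscall is paid.
    import mmap
    import os

    path = "data.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(1 << 20))   # 1 MiB of test data

    # Conventional path: a syscall (and a copy) for every read.
    with open(path, "rb") as f:
        f.seek(4096)
        chunk = f.read(64)

    # Rewritten path: map once, then use plain memory accesses.
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            assert m[4096:4160] == chunk

Even this small change alters how an application is structured; the approaches those comments dismissed demand far deeper surgery for far larger gains.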

Continue reading “Progress Needs Trailblazers”