I got the idea for this post after speaking with multiple customers who were contemplating a switch to certain kinds of grid computing/storage (like HCI) without fully understanding the ramifications of doing so.
You see, they were (rightly) enamored with concepts such as automation, ease of consumption and scaling. But they forgot to ask some very important questions. See here for the dangers of getting too carried away with something new and taking things for granted.
This isn’t a post claiming HCI and grid-type storage constructs are bad. Like any tool, they can be used in various ways, some of them aggressively ill-advised. The point of this post is to help customers ask for the right configuration so they don’t get stuck with a sub-optimal and risky design.
I tried to make this post as short as possible but as someone once said, “Everything should be made as simple as possible, but not simpler”. Which, ironically, is a simplification of what Einstein actually said 🙂
Continue reading “HCI Failure Modes and Maintenance”
I’ve heard so many arguments about lock-in from different people, and I’ve seen people do all kinds of crazy things to avoid it.
In my mind it’s pretty simple:
Lock-in is unavoidable.
Continue reading “Trying to Avoid Lock-In Mostly Leads to Just Different Lock-In – Worry About Business Outcomes Instead”
I posit that we now have a whole new class of consumer that is completely oblivious to certain hitherto fundamental concepts – and this can lead to poor business decisions and overall sub-optimal execution and results.
I got the idea after a discussion with an ex-colleague (who now works for a cloud vendor) in which he proudly proclaimed that infrastructure is unimportant and uninteresting.
I’ll start generically and then shift to IT. The generic aspect of this problem is interesting in its own right, since it’s lowering quality in all sorts of fields.
And never forget: Just because something is widely and easily available doesn’t mean it’s better. It simply means that more people have access to it.
Continue reading “The Loss of Important Knowledge and Acumen Through Perceived Commoditization”
It seems there’s no shortage of creative marketing schemes to fool customers these days.
I’ve written before about how to protect against some of the shadier storage business “guarantee” schemes, but a relatively new development has prompted this post.
You see, a rather large vendor-who-shall-not-be-named is now offering a capacity guarantee across their entire storage portfolio, shown right on their website. The interesting part is that the guarantee counts not just thin provisioning but also snapshots.
Many vendors use thin provisioning in capacity guarantees. That doesn’t make the practice acceptable, merely common, like influenza or selfies, only more annoying.
However, no other vendor I’m aware of has the sheer audacity to also count snapshots in their capacity guarantee.
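To see why counting thin provisioning and snapshots matters, here’s a back-of-the-envelope sketch. Every number below is invented purely for illustration and comes from no vendor; the point is the mechanism, not the specific figures.

```python
# Hypothetical illustration of how an "effective capacity" claim can be
# inflated by counting thin-provisioned (allocated but unwritten) space
# and the full logical size of snapshots. All numbers are made up.

raw_tb = 100                 # usable physical capacity in the array
data_reduction = 2.0         # assumed dedupe/compression ratio

# Thin provisioning: logical volumes presented to hosts can far exceed
# the physical space behind them.
thin_provisioned_tb = 500

# Snapshots share nearly all blocks with the base volume, yet counting
# each snapshot at full logical size multiplies the "capacity" delivered.
snapshots = 10
snapshot_logical_tb = thin_provisioned_tb * snapshots

honest_effective_tb = raw_tb * data_reduction
claimed_effective_tb = thin_provisioned_tb + snapshot_logical_tb

print(f"Honest effective capacity:   {honest_effective_tb:.0f} TB")
print(f"'Guaranteed' capacity claim: {claimed_effective_tb:.0f} TB")
print(f"Inflation factor:            {claimed_effective_tb / honest_effective_tb:.1f}x")
```

With these (invented) inputs, the snapshot-counting version of the math makes the same 100 TB of physical storage look more than an order of magnitude larger than an honest data-reduction-based number, without a single extra byte of real capacity.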
Continue reading “Lying via Capacity Guarantees”
HPE has been innovating in the Memory-Driven Compute space for a while now (for example, HPE Labs’ The Machine project and Gen-Z).
The driver behind this has been to transform application performance, not incrementally but by leaps and bounds. Think orders of magnitude of reduction in execution time. For instance, an organization doing Alzheimer’s research had a key analytics operation that took 22 minutes per iteration (and they need to run many, many iterations). With a Memory-Driven system from HPE it now takes 13 seconds. This lets the researchers reach useful results much faster, which, in turn, means a cure could materialize in a much shorter timeframe.
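To put “orders of magnitude” in concrete terms, a quick calculation on the two times quoted above (22 minutes down to 13 seconds) shows the speedup is roughly a hundredfold:

```python
# Sanity-check the speedup implied by the times quoted in the example.
before_s = 22 * 60   # 22 minutes per iteration, in seconds
after_s = 13         # seconds per iteration on the Memory-Driven system
speedup = before_s / after_s
print(f"Speedup: ~{speedup:.0f}x")  # roughly two orders of magnitude
```

At that rate, a job that would have run for a full day of iterations finishes in under fifteen minutes.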
Continue reading “HPE Memory-Driven Architectures Extend to 3PAR and Nimble Storage”