This one is dedicated to an old friend who has been asking me to write about certain technologies, like a very specific filesystem, and whether it should be used as the basis for enterprise storage.
He loves using open source stuff for business, mostly to save money, but also for the control it affords and the sheer pleasure of tinkering (and, I suspect, a modicum of masochistic proclivity).
This isn’t a post against open source (if nothing else, that would be utterly hypocritical since most commercial stuff is at least partially based on open source software).
It’s more about risk mitigation and total cost of ownership (TCO).
Continue reading “Will your Infrastructure Survive if you Disappear?”
Just a quick post since I’m getting inquiries regarding the status of my vital signs.
Yes, I’m alive, just very busy helping with the ongoing integration of Nimble Storage into HPE and all kinds of cool AI.
There is training to deliver all around, positioning guides to write, strategy to plan, technology to design.
Continue reading “Status and the Nimble acquisition by HPE”
In past articles I have covered how Nimble Storage lowers customer risk with various technologies internal to the array.
Now, it’s time to write a more in-depth article about the immense customer benefits of InfoSight, Nimble’s Predictive Analytics platform.
Continue reading “InfoSight: How Nimble Customers Benefit from AI and Big Data Predictive Analytics”
On February 27, 2017, Nimble Storage announced Nimble Cloud Volumes (NCV), marking Nimble’s entry into the cloud space.
For those short on time: Nimble Cloud Volumes is block storage as a service that works with compute from AWS and Azure while avoiding the significant block storage drawbacks of those two providers. In essence, it is enterprise storage as a service, at a competitive price, with all the cloud conveniences retained.
Continue reading “Nimble Cloud Volumes Removing Risk From Cloud Storage”
I got the idea for this stream-of-consciousness (and possibly too obvious) post after reading some comments regarding new ways to do high-speed I/O. Not something boring like faster media or protocols, but rather far more exotic approaches that require drastically rewriting applications to get the maximum benefit from radically different architectures.
The comments, in essence, stated that such advancements would be an utter waste of time: 99% of customers have absolutely no need for such exotica, but rather just need cheap, easy, reliable gear, and can barely afford to buy infrastructure to begin with, let alone custom-code applications to wring every last microsecond of latency out of it.
Continue reading “Progress Needs Trailblazers”