Nimble Cloud Volumes: Removing Risk From Cloud Storage

On February 27th, 2017, Nimble Storage announced Nimble Cloud Volumes (NCV), marking Nimble’s entry into the cloud space.

For those short on time: Nimble Cloud Volumes is block storage as a service that works with compute from AWS & Azure while avoiding the significant block storage drawbacks of those two providers. In essence, it is enterprise storage as a service, at a competitive price, while retaining all the cloud conveniences.

[Image: NCV overview]

Why get in the Storage as a Service space?

People like subscription-based cloud offerings for various reasons: mostly convenience, agility and economies of scale. Plus, of course, shifting capital expenses to OPEX. But this is not a treatise on “why cloud”.

Currently, the two largest hyperscale cloud providers are AWS and Azure. While both are great offerings overall, their block storage component is severely lacking in several key aspects compared to enterprise-grade storage.

This is something most people are either unaware of, or a risk they simply choose to accept because they find the other cloud benefits compelling.

Knowledge is power.

Here are some of the downsides of AWS EBS and Azure block storage:

  • Availability SLA: 3 nines (if an enterprise array vendor mentioned 3 nines, you’d kick them out immediately)
  • Data integrity: 0.1-0.2% Annualized Failure Rate (AFR). That means 1-2 out of every 1,000 volumes will suffer partial or complete data loss per year (this is for EBS; Azure numbers aren’t published). If an enterprise storage vendor said they’d lose your data with a 0.2% chance per year, you’d permanently and with extreme prejudice put them on the “banned” list… (a quick back-of-the-envelope calculation follows this list)
  • Extremely slow (and costly) backups, clones and recoveries (the snap I/O is from/to S3 and the I/O variability accessing that during recoveries can be extremely high)
  • Hard and expensive to switch providers (by design)
  • Hard and expensive to repatriate data back to a private cloud (by design)
  • Significant constraints in some services might force customers to change their application layouts (1TB volume limits in Azure are unacceptably small for many deployments)
  • Data mobility between cloud providers isn’t easy (by design)
  • Troubleshooting & monitoring are hard. How does one do Root Cause Analysis for issues within a cloud provider? They’re designed to be black boxes. What happens when something goes wrong? How long does troubleshooting take? Can I see detailed volume statistics and trending? Why is my application suffering?
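To put the first two bullets in perspective, here is a quick back-of-the-envelope calculation. The 99.9% availability and 0.2% AFR figures are the ones quoted above; the volume counts are just example fleet sizes.

```python
# Back-of-the-envelope math for the availability and data-integrity bullets above.
HOURS_PER_YEAR = 24 * 365

def downtime_hours_per_year(availability: float) -> float:
    """Expected downtime (hours/year) for a given availability level."""
    return HOURS_PER_YEAR * (1 - availability)

print(f"3 nines (99.9%):    {downtime_hours_per_year(0.999):.2f} hours of downtime/year")
print(f"6 nines (99.9999%): {downtime_hours_per_year(0.999999) * 3600:.0f} seconds of downtime/year")

# Data integrity: with a 0.2% annualized failure rate per volume, the chance
# that at least one of N volumes loses data within a year is 1 - (1 - AFR)^N.
AFR = 0.002
for n_volumes in (100, 500, 1000):
    p_any_loss = 1 - (1 - AFR) ** n_volumes
    print(f"{n_volumes:>4} volumes: {p_any_loss:.0%} chance of at least one data-loss event per year")
```

That works out to roughly 8.8 hours of downtime per year at 3 nines versus about half a minute at 6 nines, and at a 0.2% AFR a fleet of 500 volumes has better-than-even odds of a data-loss event somewhere every year.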

Especially for the more critical applications, the above list is enough to give people pause.

Well, what if there were a better way? A way to significantly de-risk using cloud storage? That is why Nimble is entering this market: to remove some of the more significant challenges of cloud storage while keeping the best aspects of what people like about the cloud.

How did Nimble Storage build the NCV STaaS offering?

Nimble has built its own cloud to deliver the Nimble Cloud Volumes service.

The Nimble cloud is strategically placed around the world in locations in close proximity to AWS and Azure cloud infrastructure, ensuring low-latency access. The Nimble cloud is built using Nimble flash technology for storing data (i.e. it does not use the storage from either AWS or Azure). The service is managed through a simple portal and provides storage volumes that can be attached to Amazon EC2 or Azure Virtual Machine instances.

The Benefits Customers Get Out of Nimble Cloud Volumes

As previously mentioned, the entire goal of NCV is to keep what’s good about the cloud while at the same time removing some of the more significant risks and inconveniences. Here are the main benefits:

Easy to Consume

NCV is managed entirely through a simple portal that lets users choose capacity, performance and a backup SLA, and attach volumes to AWS, Azure, or both.
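To give a feel for what that looks like in practice, here is a purely hypothetical sketch of the kind of parameters such a request involves. The field names below are invented for illustration and are not the actual NCV portal or API.

```python
# Hypothetical illustration only: these field names are made up and are not
# the real NCV interface; they just show the kind of choices the portal offers.
volume_request = {
    "name": "prod-db-01",
    "capacity_gib": 4096,             # pick a capacity...
    "provisioned_iops": 20000,        # ...a performance level...
    "backup_schedule": "hourly",      # ...and a backup SLA
    "attach_to": {
        "cloud": "aws",               # could just as easily be "azure", or both
        "region": "us-east-1",
        "instance_id": "i-0123456789abcdef0",
    },
}
```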

Fully Managed STaaS Offering

This is what customers expect out of the cloud. NCV is a fully managed service: the customer never has to touch or maintain a storage array, physical or virtualized.

Extreme Reliability

The NCV storage uptime is the same as what Nimble enterprise arrays provide: a measured 6+ nines.

The data integrity is even more impressive: millions of times better than what native cloud block storage provides. More details here.

Enterprise Copy Data Management

Instant snapshots and clones. Instant recoveries. In addition, clones are charged only for delta usage, not as full copies. That means you can take many, many clones of something and the initial cost is almost zero; you only pay for the IOPS and the additional data you consume as the clones deviate from the original. At scale, the savings can be significant.
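As a rough illustration of why delta-based clones matter at scale, consider cloning a 1 TiB volume for twenty developers or test pipelines. The per-GiB price and divergence rate below are made-up example numbers, not NCV pricing:

```python
# Rough illustration of full-copy vs. delta-based clone economics.
# Price and divergence figures are invented for the example, not actual pricing.
SOURCE_GIB = 1024            # 1 TiB source volume
NUM_CLONES = 20              # e.g. one clone per developer / test pipeline
DIVERGENCE = 0.05            # each clone eventually rewrites ~5% of its blocks
PRICE_PER_GIB_MONTH = 0.10   # made-up $/GiB/month

full_copies_gib = NUM_CLONES * SOURCE_GIB
delta_clones_gib = NUM_CLONES * SOURCE_GIB * DIVERGENCE

print(f"Full copies:  {full_copies_gib:>8,.0f} GiB -> ${full_copies_gib * PRICE_PER_GIB_MONTH:>8,.2f}/month")
print(f"Delta clones: {delta_clones_gib:>8,.0f} GiB -> ${delta_clones_gib * PRICE_PER_GIB_MONTH:>8,.2f}/month")
```

Twenty full copies would consume 20 TiB from day one; twenty delta clones that each drift 5% from the source consume about 1 TiB in total.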

Provisioned IOPS, Low Latency plus Auto QoS

Everyone expects an IOPS SLA from a STaaS offering, and NCV of course provides one.

But we also take it to the next level by automatically prioritizing the right I/O using heuristics. That way, even if you are hitting your QoS limit, the right kind of I/O will still get prioritized automatically.

For instance, latency-sensitive database I/O is automatically prioritized over latency-insensitive sequential I/O, both because it makes sense and because nobody has the time to micromanage QoS. More here.
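The exact heuristics are Nimble’s own and are not spelled out here, but the basic idea can be sketched: when a volume is at its QoS limit, small random (typically latency-sensitive) I/O is dispatched ahead of large sequential (typically latency-insensitive) I/O. The toy priority queue below is illustrative only, not Nimble’s actual algorithm.

```python
# Toy illustration of heuristic I/O prioritization at a QoS limit.
# Not Nimble's actual algorithm: just "small random first, large sequential later".
import heapq
from itertools import count

def is_latency_sensitive(size_kib: int, sequential: bool) -> bool:
    """Crude heuristic: small random I/O (e.g. DB reads, redo writes) is latency-sensitive."""
    return size_kib <= 16 and not sequential

tiebreak = count()
queue = []
for size_kib, sequential in [(256, True), (8, False), (1024, True), (4, False)]:
    priority = 0 if is_latency_sensitive(size_kib, sequential) else 1
    heapq.heappush(queue, (priority, next(tiebreak), size_kib, sequential))

while queue:
    priority, _, size_kib, sequential = heapq.heappop(queue)
    kind = "latency-sensitive" if priority == 0 else "throughput"
    print(f"dispatch {size_kib:>5} KiB {'seq ' if sequential else 'rand'} -> {kind}")
```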

Easy Mobility Eliminates Cloud Lock-In

This is a big one. We make the following things easy and possible:

  1. Cloud on-off ramp: easily and quickly send your data to the cloud. Want to stop using the cloud? That’s also easy and fast. And there are no egress charges.
  2. Multicloud without lock-in: easily use multiple public cloud offerings (and switch between them) without needing to migrate data.
  3. Hybrid Cloud Portability: Easy to mix and match private and public cloud. Easily move data back to on-premises private cloud if needed. Easily burst what you need to the cloud.

Global Visibility with Predictive Analytics

Last, but by no means least: We are extending InfoSight to the cloud. The service that massively lowers business risk for Nimble private cloud customers will now also be available to public cloud consumers.

  • Predict and prevent issues across the stack: Because it’s really about the applications, not just the storage. Pinpoint difficult issues quickly, even if they have nothing whatsoever to do with storage.
  • Visibility no matter where your data is: It shouldn’t matter whether your data is in the public or private cloud or both. Get detailed visibility into your data, including exotic things like I/O histograms as well as more pedestrian (but still useful) things like detailed trending analysis (a toy sketch of such a projection follows this list).
  • Predict, recommend, optimize: Use Predictive Analytics to go beyond reporting and use the information to optimize data placement – which could end up saving you money.
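To make “detailed trending analysis” a little more concrete, here is a trivial example of the kind of projection such analytics can produce: fit a line to recent capacity usage and estimate when a volume fills up. This is obviously not how InfoSight is implemented; it is only a sketch of the concept.

```python
# Trivial capacity-trend projection: fit a straight line to daily usage samples
# and estimate the days remaining until the volume is full. Illustrative only.
def days_until_full(daily_used_gib, capacity_gib):
    n = len(daily_used_gib)
    mean_x = (n - 1) / 2
    mean_y = sum(daily_used_gib) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(daily_used_gib)) \
            / sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None   # usage is flat or shrinking; no projected fill date
    return (capacity_gib - daily_used_gib[-1]) / slope

usage_gib = [610, 618, 631, 640, 652, 660, 671]   # GiB used over the last week
print(f"Projected days until full: {days_until_full(usage_gib, 1024):.0f}")
```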

Finally, the cloud stops being a black box when problems arise.

In Summary

The Nimble Cloud Volumes service was built to address some significant shortcomings AWS and Azure block storage services have, while retaining the benefits people expect from the cloud and providing much more flexibility to customers.

NCV provides benefits such as increased reliability, predictability, faster backups and clones, plus integration with the best infrastructure analytics platform, InfoSight.

Some example methods of deployment:

  • Production in private cloud, DR in public cloud
  • Test/Dev in public cloud
  • “All In” public cloud

We are accepting beta customers now (several big names are already using it). Go here to take a look and sign up.

D

The Importance of SSD Firmware Updates

I wanted to bring this crucial issue to light since I’m noticing several storage vendors being either cavalier about this or simply unaware.

I will explain why solutions that don’t offer some sort of automated, live SSD firmware update mechanism are potentially extremely risky propositions. Yes, this is another “vendor hat off, common sense hat on” type of post.

Modern SSD Architecture is Complex

The increased popularity and lower costs of fast SSD media are good things for storage users, but there is some inherent complexity within each SSD that many people are unaware of.

Each modern SSD is, in essence, an entire pocket-sized storage array that includes, among other things:

  • An I/O interface to the outside world (often two)
  • A CPU
  • An OS
  • Memory
  • Sometimes Compression and/or Encryption
  • What is, in essence, a log-structured filesystem, complete with complex load balancing and garbage collection algorithms
  • An array of flash chips driven in parallel through multiple channels
  • Some sort of RAID protection for the flash chips, including sparing, parity, error checking and correction…
  • A supercapacitor to safely flush cache to the flash chips in case of power failure.

Sound familiar?
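To give a feel for just the “log-structured filesystem” part, here is a massively simplified toy of what a flash translation layer does: writes always land on fresh pages, a mapping table tracks where each logical block currently lives, and garbage collection reclaims superseded pages. Real firmware does this across many channels and chips, with wear leveling, error correction and power-loss handling layered on top, which is exactly where the bugs tend to hide.

```python
# Massively simplified toy of a flash translation layer (FTL):
# out-of-place writes, a logical-to-physical map, and garbage collection.
class ToyFTL:
    def __init__(self, total_pages: int):
        self.flash = [None] * total_pages     # physical pages (None = erased)
        self.l2p = {}                         # logical block address -> physical page
        self.next_free = 0

    def write(self, lba: int, data: str) -> None:
        if self.next_free == len(self.flash):
            self._garbage_collect()
        page = self.next_free
        self.flash[page] = (lba, data)        # always write to a fresh page
        self.l2p[lba] = page                  # any older copy becomes stale
        self.next_free += 1

    def read(self, lba: int) -> str:
        return self.flash[self.l2p[lba]][1]

    def _garbage_collect(self) -> None:
        # Keep only the pages still referenced by the map, compacted to the front.
        live = [self.flash[p] for _, p in sorted(self.l2p.items())]
        self.flash = live + [None] * (len(self.flash) - len(live))
        self.l2p = {lba: i for i, (lba, _) in enumerate(live)}
        self.next_free = len(live)

ftl = ToyFTL(total_pages=4)
ftl.write(0, "a"); ftl.write(1, "b"); ftl.write(0, "a2"); ftl.write(1, "b2")
ftl.write(0, "a3")                            # this write triggers garbage collection
print(ftl.read(0), ftl.read(1))               # -> a3 b2
```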

With Great Power and Complexity Come Bugs

To make something clear: this discussion has nothing to do with overall SSD endurance & hardware reliability, only with the software aspect of the devices.

All this extra complexity in modern SSDs means that an increased number of bugs compared to simpler storage media is a statistical certainty. There is just a lot going on in these devices.

Bugs aren’t necessarily the end of the world. They’re something understood, a fact of life, and there’s this magical thing engineers thought of called… Patching!

As a fun exercise, go to the firmware download pages of various popular SSDs and check the release notes for some of the bugs fixed. Many fixes address some rather abject gibbering horrors… 🙂

Even costlier enterprise SSDs have been afflicted by some really dangerous bugs – usually latent defects (as in: they don’t surface until you’ve been using something for a while, which may explain why these bugs were missed by QA).

I fondly remember a bug that hit some arrays at a previous place of employment: the SSDs would work great, but after a certain number of hours of operation, if you shut the machine down, the SSDs would never come up again. Or another bug that hit a very popular SSD, which would downsize itself to an awesome 8MB of capacity (losing all existing data, of course) once certain conditions were met.

Clearly, these are some pretty hairy situations. And, what’s more, RAID, checksums and node-level redundancy wouldn’t protect against all such bugs.

For instance, think of the aforementioned power-off bug: all SSDs of the same firmware vintage would be affected simultaneously, and the entire array would have zero functioning SSDs. This actually happened; I’m not talking about a theoretical possibility. You know, just in case someone starts saying “but SSDs are reliable, and think of all the RAID!”

It’s all about approaching correctness from a holistic point of view. Multiple lines of defense are necessary.

The Rules: How True Enterprise Storage Deals with Firmware

Just like with Fight Club, there are some basic rules storage systems need to follow when it comes to certain things.

  1. Any firmware patching should be a non-event. Doesn’t matter what you’re updating, there should be no downtime.
  2. ANY firmware patching should be a NON-EVENT. Doesn’t matter what you’re updating, there should be NO downtime!
  3. Firmware updates should be automated even when dealing with devices en masse.
  4. The customer should automatically be notified of important updates they need to perform.
  5. Different vintage and vendor component updates should be handled automatically and centrally. And, most importantly: Safely.

If these rules are followed, bug risks are significantly mitigated and higher uptime is possible. Enterprise arrays typically will follow the above rules (but always ask the vendor).
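Mechanically, following those rules usually means a rolling update: take one device out of the data path (or lean on RAID to cover it), flash it, verify it, bring it back, and only then move to the next one. The sketch below is purely conceptual, not any vendor’s actual tooling, but it shows why the orchestrator has to understand both the devices and the data-protection layer above them.

```python
# Conceptual sketch of a rolling, non-disruptive SSD firmware update.
# Every function here is a placeholder, not any vendor's actual tooling.
import time

def rolling_firmware_update(drives, target_version, apply_update, redundancy_healthy):
    """Update one drive at a time, never proceeding while redundancy is degraded."""
    for drive in drives:
        if drive["firmware"] == target_version:
            continue                                    # already current, skip
        while not redundancy_healthy():
            time.sleep(5)                               # wait out rebuilds, resyncs, etc.
        apply_update(drive, target_version)             # flash this single drive
        if drive["firmware"] != target_version:         # verify before touching the next one
            raise RuntimeError(f"{drive['id']}: update failed, halting rollout")

# Stand-in data to show the flow:
drives = [{"id": f"ssd{i}", "firmware": "1.0"} for i in range(4)]
rolling_firmware_update(drives, "1.1",
                        apply_update=lambda d, v: d.update(firmware=v),
                        redundancy_healthy=lambda: True)
print([d["firmware"] for d in drives])                  # -> ['1.1', '1.1', '1.1', '1.1']
```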

Why Firmware Updating is a Challenge with Some Storage Solutions

Certain kinds of solutions make it inherently harder to manage critical tasks like component firmware updates.

You see, being able to hot-update different kinds of firmware in any given set of hardware means that the mechanism doing the updating must be intimately familiar with the underlying hardware & software combination, however complex.

Consider the following kind of solution, perhaps built by someone sold on the idea that white-box approaches are the future:

  • They buy a bunch of diskless server chassis from Vendor A
  • They buy a bunch of SSDs from Vendor B
  • They buy some Software Defined Storage offering from Vendor C
  • All running on the underlying OS of Vendor D…

Now, let’s say Vendor B has an emergency SSD firmware fix they made available, easily downloadable on their website. Here are just some of the challenges:

  1. How will that customer be notified by Vendor B that such a critical fix is available?
  2. Once they have located the fix, which vendor will automate updating the firmware on Vendor B’s SSDs, and how?
  3. How does the customer know that Vendor B’s firmware fix doesn’t violently clash with something from Vendor A, C or D?
  4. How will all that affect the data-serving functionality of Vendor C?
  5. Can any of Vendors A, B, C or D orchestrate all the above safely?
  6. With no downtime?

In most cases I’ve seen, the above chain of events will not even progress past #1. The user will simply be unaware of any update, because component vendors don’t usually have a mechanism that alerts individual customers about firmware.

You could inject a significant permutation here: what if you buy the servers pre-built, including SSDs, from Vendor A, with full certification from Vendors C and D?

Sure, but it still does not materially change the steps above. One of Vendors A, C or D still needs to somehow:

  1. Automatically alert the customer about the critical SSD firmware fix being available
  2. Be able to non-disruptively update the firmware…
  3. …While not clashing with the other hardware and software from Vendors A, C and D

I could expand this type of conversation to other things like overall environmental monitoring and checksums, but let’s keep it simple for now and focus on just component firmware updates…

Always Remember – Solve Business Problems & Balance Risk

Any solution is a compromise. Always make sure you are comfortable with the added risk certain areas of compromise bring (and that you are fully aware of said risk).

The allure of certain approaches can be significant (at the very least because of lower promised costs). It’s important to maintain a balance between increased risk and business benefit.

In the case of SSDs specifically, the utter criticality of certain firmware updates means that it’s crucially important for any given storage solution to be able to safely and automatically address the challenge of updating SSD firmware.

D