Recently, HPE Nimble released new systems (the 20/40/60/80 line, replacing the 1000/3000/5000/7000/9000 ones).
I don’t cover press releases – you can find this elsewhere. I’d rather talk about the cool stuff.
One nice aspect of the new hardware is that for most models, performance is radically higher than the old line's. For the same money, you get a much faster system. For instance, for the same cost as a 3000, the replacement system (the 40) is about 3x faster for most I/O.
It is 100% Nimble-designed hardware. It’s not made by Supermicro, by the way – some people were asking.
We use the very latest Intel CPUs, and the high-end 80 has some rather special ones inside…
Timeless Storage Performance Benefit – a Really Cool Side Effect
Timeless Storage is Nimble’s program for flat maintenance pricing and controller upgrades. In that program, there’s a nice option for getting a 25% faster controller when renewing the maintenance.
Because Nimble controller performance tiers are spaced so widely apart, the replacement will typically be much more than 25% faster…
For example, with today’s available systems:
- If you had a 1000, you’ll get a 40
- If you had a 3000, you’ll get a 40
- If you had a 5000, you’ll get a 40
- If you had a 7000, you’ll get a 60
- If you had a 9000, you’ll get an 80
So, unlike the various and sundry competitor programs that try to provide a controller with similar performance to what you have today, with Nimble Timeless you could end up with a pretty huge upgrade.
Nimble OS 5 Data Reduction and Performance Enhancements
There are three enhancements in OS 5 I particularly like.
- Adaptive Compression: Automatically switches between LZ4 and a stronger compression algorithm on the fly, for a pretty nice data reduction boost. No added complexity – it’s all automatic, as is the Nimble way.
- Higher Performance: Systematic performance enhancements for the entire product line (including older systems). The boost can be pretty major for some workloads.
- Deduplication on Hybrid: Yes, hybrid systems now get full inline deduplication, just like the AFAs. We do need six SSDs in a hybrid system to allow fast metadata access; the new systems all come properly pre-configured for this. On older systems, dedupe will be enabled for some models, depending on the configuration.
These enhancements take an already great storage OS to new levels – I’m very proud of what we’ve done with the code. Especially given the fact that all the high performance and improved data reduction is achieved 100% inline, with no post-processing whatsoever. You don’t have to “take the hit” later on. Because “later on” may be a really bad time to take the performance hit.
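To make the "100% inline" point concrete: inline dedupe means every block is fingerprinted *before* it lands on media, so duplicates are never written in the first place and there is no background job to run later. A toy illustration (not Nimble’s implementation – the class and field names are invented):

```python
import hashlib


class InlineDedupStore:
    """Toy inline dedupe: each incoming block is fingerprinted on ingest;
    duplicates only bump a reference count. Nothing is deferred to a
    post-process pass."""

    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}  # fingerprint -> block data
        self.refs: dict[str, int] = {}      # fingerprint -> reference count

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = block          # first copy: actually store it
        self.refs[fp] = self.refs.get(fp, 0) + 1
        return fp                            # caller indexes by fingerprint

    def read(self, fp: str) -> bytes:
        return self.blocks[fp]
```

Writing the same block twice stores it once with a reference count of two – the "hit" of hashing is taken up front, at write time, instead of during a later scrub.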
All new Nimble systems are SCM (Storage Class Memory) ready. By SCM, I mean a class of nonvolatile memory that’s many times faster than NAND-based SSDs. 3D XPoint and Z-NAND are good examples of this technology.
Such memory is much more expensive than SSDs and not available in large capacities – which makes it impractical for a large-scale implementation (just like a large capacity RAM-based array is impractical).
However, the sheer speed of it makes it a great candidate for caching.
And guess what Nimble is really, really good at.
Caching, in case you didn’t guess it. Indeed, the company initially built an all-flash caching appliance (little known fact – and a slap in the face for those vendors who spread FUD that Nimble wasn’t made for flash).
Well – what if one put fancy SCM cache in an AFA? Sure, SSDs are fast, but SCM is incredibly faster for things like metadata lookups.
Plus – parallelizing I/O using two different fat pipes to access SCM and back-end SSD leaves the bulk storage SSDs free to do their I/O without being hit by metadata lookups.
This technology will significantly increase performance, especially under heavy pressure. Because nobody likes a fair-weather friend. And it will be a field upgrade.
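The split described above – metadata lookups on a small, very fast tier, bulk block reads on a separate large tier – can be modeled with a few lines of Python. This is only a conceptual sketch (the class, the naive allocator, and the dict "tiers" are all invented for illustration), but it shows why keeping the two on separate pipes helps: metadata traffic never queues behind data I/O.

```python
class TieredReadPath:
    """Toy model of an SCM-plus-SSD layout: a small fast tier holds the
    index (standing in for SCM), a large slow tier holds the data
    (standing in for the back-end SSDs)."""

    def __init__(self) -> None:
        self.scm_index: dict[int, int] = {}   # volume offset -> physical block
        self.ssd_blocks: dict[int, bytes] = {}  # physical block -> data

    def write(self, offset: int, data: bytes) -> None:
        pba = len(self.ssd_blocks)        # naive allocator, fine for a sketch
        self.ssd_blocks[pba] = data       # bulk write goes to the SSD tier
        self.scm_index[offset] = pba      # index update lands in the fast tier

    def read(self, offset: int) -> bytes:
        pba = self.scm_index[offset]      # fast-tier lookup (the SCM win)
        return self.ssd_blocks[pba]       # bulk read from the SSD tier
```

In a real array the fast-tier lookup is the operation SCM accelerates by an order of magnitude, and because it travels a different physical path than the bulk read, the SSDs see only data I/O.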
The new Nimble systems are NVMe-ready in two different ways:
- The chassis can take NVMe drives (as well as SAS and SATA).
- The host-side connectivity will be able to use NVMe Over Fabrics.
Using NVMe drives isn’t very exciting by itself – after all, good array controllers are meant to insulate the front end from back-end media characteristics through advanced technologies. (Look at how fast Nimble hybrid systems are – Nimble hybrid write latencies are lower than those of most competitor AFAs, which should be impossible, yet here we are.)
NVMe host-side connectivity on the other hand is much more exciting since that’s where hosts will see significant, immediate benefits. For instance:
- Use a newer, lower-overhead protocol instead of encapsulated SCSI, which is older than dirt
- No need to tune hosts for things like queue depth
- Increased performance for typically difficult things like single-threaded workloads
- Increased throughput and IOPS at a given latency point
- Slightly better latency
- Lowered CPU load
The plan is to do NVMe over both FC and Ethernet. FC has the advantage that Gen5/Gen6 FC solutions can already support NVMe. With Ethernet, there are a couple of protocol choices (iWARP and RoCEv2), and many customers will need to buy new adapters/switches for RoCEv2 (which is the better protocol in my opinion – though iWARP does have the advantage of not needing fancy new hardware).
Important: a few key pieces, like multipathing standards, still need to mature before host-facing NVMe is safe for widespread use. The ANA (Asymmetric Namespace Access) standard was only ratified in March 2018, so it hasn’t yet been widely adopted in host MPIO stacks.
So, if you want to do NVMe-oF, make sure your host has baked-in support for ANA (and that the storage system you’re looking at does too). Multipathing is probably something you expect to have…
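For a concrete flavor of the host side, this is roughly what setting up an NVMe-oF session over RDMA looks like with the Linux nvme-cli tool. The addresses and NQN below are placeholders, and the exact `id-ctrl` output fields depend on your nvme-cli version – treat this as an illustrative fragment, not a runbook:

```shell
# Discover subsystems offered by a (hypothetical) NVMe-oF discovery portal
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to a subsystem by its NQN (placeholder NQN)
nvme connect -t rdma -a 192.0.2.10 -s 4420 -n nqn.2018-01.com.example:subsystem1

# On a recent nvme-cli, check the controller's ANA-related capability fields
nvme id-ctrl /dev/nvme0 | grep -i ana
```

If the last command shows no ANA capability on either the host stack or the controller, multipathed NVMe-oF isn’t going to behave the way you expect.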
For some extra information on Nimble SCM and NVMe, here’s an article from our CTO, Jeff Kimmel.
The new HPE Nimble systems represent a big step forward – not just because of the typical (large) performance enhancements, but also because they are fully ready for NVMe and SCM, technologies we will deliver in a very easily consumable way that makes sense.