The driver behind this has been to transform application performance not by increments, but by leaps and bounds. Think orders of magnitude in reduced execution time. For instance, an organization researching a cure for Alzheimer’s had a key analytics operation that took 22 minutes per iteration (and they need to run many, many iterations). With a Memory-Driven system from HPE, it now takes 13 seconds. This allows the researchers to reach useful results much faster – which, in turn, means the cure could materialize in a much shorter timeframe.
The ability to achieve such dramatic improvements can utterly change the game for businesses and society overall. And not just for niche applications – it matters for anything that needs real-time response (self-driving cars, smart cities/IoT), deep yet fast analytics (genomics, security), or demanding business workloads (finance, simulations).
Simply put: the more work that can be done in-memory, the shorter the execution time.
One of the main tenets of Memory-Driven Architectures is to eliminate I/O and instead work as much as possible within gigantic memory spaces.
Another tenet is to create said gigantic memory spaces by combining DRAM with ultra-fast NVM (Non-Volatile Memory, nothing to do with NVMe, which is just a protocol).
This is done to avoid huge memory costs while maintaining high performance. Keeping the data in NVM also ensures persistence and further reduces reliance on the storage back-end.
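To make the tiering idea concrete, here is a deliberately simplified sketch of a memory space where a small DRAM tier fronts a larger persistent NVM tier. This is purely illustrative – the class name, capacities, and LRU eviction policy are invented for this example and bear no relation to how the actual arrays work:

```python
# Illustrative model of a two-tier memory space: a small, fast DRAM
# cache in front of a larger pool of non-volatile memory (NVM).
# All names and policies here are hypothetical, for concept only.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, dram_capacity):
        self.dram = OrderedDict()          # hot tier: fastest, volatile, small
        self.nvm = {}                      # capacity tier: still memory-speed, persistent
        self.dram_capacity = dram_capacity

    def write(self, key, value):
        self.nvm[key] = value              # persist first: data survives power loss
        self._cache(key, value)

    def read(self, key):
        if key in self.dram:               # DRAM hit: cheapest possible access
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.nvm[key]              # DRAM miss: served from NVM, no disk I/O
        self._cache(key, value)
        return value

    def _cache(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_capacity:
            self.dram.popitem(last=False)  # evict least-recently-used entry

mem = TieredMemory(dram_capacity=2)
mem.write("a", 1); mem.write("b", 2); mem.write("c", 3)
print("a" in mem.dram)   # False: evicted from the small DRAM tier...
print(mem.read("a"))     # 1: ...but still served from NVM, persistently
```

The key property: even on a DRAM miss, the request never leaves memory – which is the whole point of blending DRAM with NVM.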
FYI: this doesn’t mean a RAM disk.
In order to fully take advantage of Memory-Driven Architectures, applications often have to be modified, and follow a different design paradigm. For instance:
- Be written with large-scale NUMA awareness instead of SMP
- Use global shared memory for messaging
- Use references instead of copies
- Assume vast memory spaces and some special hardware
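To illustrate the "references instead of copies" point, here is a hypothetical Python sketch: a `memoryview` stands in for a reference into a large in-memory dataset, contrasted with an explicit copy of the same data. (The sizes are invented; this is a concept demo, not array code.)

```python
# "References instead of copies": with a gigantic memory space, consumers
# can be handed zero-copy views into the data rather than duplicated
# buffers. A hypothetical example using Python built-ins.

data = bytearray(100_000_000)   # pretend this is a huge in-memory dataset
data[0] = 42

copy = bytes(data[:50_000_000])       # copy: allocates and moves 50 MB
view = memoryview(data)[:50_000_000]  # reference: no data is moved at all

view[0] = 7                 # writes through the view are visible in place...
print(data[0])              # 7
print(copy[0])              # 42: ...while the copy is already stale
```

Besides saving the copy cost, the view guarantees every consumer sees the same, current data – exactly what shared-memory messaging relies on.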
Not all customers are willing (or able) to make such big changes immediately, so a natural, low-risk approach is to make the storage systems much faster instead.
The challenge: on one hand, storage arrays with just NAND flash are more cost-effective but can’t deliver enough performance to satisfy such demands. On the other hand, storage systems entirely composed of Storage-Class Memory (SCM) would be prohibitively expensive at scale (and potentially lack enterprise-grade features).
But what if we enhanced storage arrays and turned them into Memory-Driven Storage? Without losing the functionality people love, and without an exorbitant price tag?
Extending Memory-Driven Architecture to HPE Storage
Both HPE 3PAR and Nimble Storage run on operating systems that were already heavily memory-centric and memory-optimized, so extending Memory-Driven Architecture to them was an easy decision.
We also wanted to stick to our practice of not writing a ton of code for a very specific device (or, indeed, spending the engineering effort to design custom storage media). It is crucial for an architecture to be able to quickly and flexibly embrace general advances in technology. Indeed, one of our design principles is to abstract the back-end storage from the front-end as much as possible.
By modifying our array code to properly (and generically) take advantage of large and ultra-fast memory devices, we enabled the arrays to do more work within an enlarged memory space and rely less on the back-end media – which is exactly the point of Memory-Driven Architectures.
One of several ways of making this technology work in our arrays was to heavily optimize the path length of certain key operations to account for the immensely reduced access time of the new memory types compared to NAND flash. One can’t just stuff fancy new memory in a system and call it a day… 🙂
With this technology, we have been able to get the end-to-end I/O latency under load – as seen from the host – below 200 microseconds.
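For context on what "as seen from the host" means, here is a simplified, hypothetical sketch of how host-side read latency can be measured: time each read of a fixed-size block and average the result in microseconds. Real array benchmarks use dedicated tools (such as fio) under sustained load; the function name and parameters below are invented for illustration.

```python
# Simplified sketch of host-side latency measurement: time individual
# block reads and report the average in microseconds. Illustrative only.
import os, tempfile, time

def measure_read_latency_us(path, block_size=4096, iterations=100):
    total_ns = 0
    with open(path, "rb", buffering=0) as f:   # unbuffered: go to the OS each time
        for _ in range(iterations):
            f.seek(0)
            start = time.perf_counter_ns()
            f.read(block_size)
            total_ns += time.perf_counter_ns() - start
    return total_ns / iterations / 1000        # nanoseconds -> microseconds

# Demo against a temporary file standing in for a block device
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4096))
print(f"avg read latency: {measure_read_latency_us(tmp.name):.1f} µs")
os.unlink(tmp.name)
```

Note that a real end-to-end measurement includes the host stack, HBA, fabric, and array – the sub-200-microsecond figure covers that entire path.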
The Benefits of Memory-Driven Storage
We wanted to provide customers a zero-risk way to achieve quick, tangible benefits without asking them to undertake any re-architecture efforts or forklift upgrades. Several current 3PAR and Nimble systems can use this new technology – just ensure you have a system that will take the new memory modules.
Some of the advantages of this approach, in addition to ultra-low latency:
- No host modifications necessary at all
- No special host drivers/filesystem necessary
- Not limited to specific host types or operating systems
- No special HBAs needed
- No application changes needed
- No protocol changes needed
- No array modifications necessary beyond adding the memory modules and having the right code level
- No forklift array replacement necessary
- No need to replace all the array media with (slightly) faster media 😉
- No need to replace the array controllers
- No fabric changes
- Same extreme data integrity and enterprise features as before
- No risk.
For 3PAR, this technology can be ordered today. Indeed, 3PAR is now the first Memory-Driven enterprise storage system in the world. For Nimble, we demonstrated the technology at Discover this week and it will be available in the near future.