Are some flash storage vendors optimizing too heavily for short-lived NAND flash?

I really resisted using the “flash in the pan” phrase in the title: first because the term is overused, and second because I don’t believe solid state is of limited value. On the contrary.

However, I am noticing an interesting trend among some newcomers in the array business, desperate to find a flash niche to compete in:

Writing their storage OS around very specific NAND flash technologies. Almost as bad as writing an entire storage OS to support a single hypervisor technology, but that’s a story for another day.

Solid state technology is still too fluid. Unlike spinning disk technology, which is mature, very reliable overall, and unlikely to see huge advances in the years to come, solid state technology seems to advance almost weekly. New SSD controllers come out almost too frequently, and new kinds of solid state storage are either available now (Triple Level Cell, anyone?) or coming in the future (MRAM, ReRAM, FeRAM, PCM, PMC, and probably a lot more that I’m forgetting).

My point is:

How far ahead are certain vendors thinking if they are writing an entire storage OS around the limitations of a class of storage that may look very different in just a year or two?

Some of them go really deep, attempting all kinds of clever optimizations to ensure good wear leveling for the flash chips. Some even write their own controller software and use bare NAND flash chips rather than off-the-shelf SSDs. Which is great, but what if you don’t need to do that in two years? What if the optimizations need to be drastically different for the new technologies? How long will coding for the new flash technologies take? Or will they be stuck on old technologies? Food for thought.
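To make the idea concrete: wear leveling, in its simplest form, just steers each erase/write cycle toward the least-worn block so no single block burns out early. This is a toy sketch of that principle (my own illustration, not any vendor’s actual controller logic), and it shows why such logic is tied to NAND assumptions — a future medium without erase-cycle limits wouldn’t need it at all:

```python
import heapq

class WearLeveler:
    """Toy wear-leveling allocator: always hand out the block
    with the fewest erases so wear spreads evenly."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id) pairs.
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate(self):
        # Pop the least-worn block, charge it one erase cycle,
        # and push it back for future reuse.
        erases, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erases + 1, block))
        return block

wl = WearLeveler(4)
writes = [wl.allocate() for _ in range(8)]
# After 8 allocations over 4 blocks, each block has been
# erased exactly twice -- wear is perfectly even.
counts = {b: writes.count(b) for b in range(4)}
```

Real controllers are vastly more involved (garbage collection, hot/cold data separation, bad-block management), which is exactly the engineering investment the article questions betting a whole storage OS on.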

I guess some of us are in it for the long haul, and some aren’t. “Can’t see the forest for the trees” comes to mind. “Gold rush” also seems relevant.

I strongly believe general-purpose storage OSes need to be flexible enough to be reasonably adaptable to different underlying media. And storage OSes that are specifically designed for solid state storage need to be especially flexible regarding the underlying SSD technology to avoid the problems outlined above, and to avoid the relative lack of reliability of current SSD solutions (another story for another day).

At the moment I don’t see clear winners yet. I see a few great short-term stories, but who has the most flexible architecture to be able to deal with different kinds of technologies for years to come?



8 Replies to “Are some flash storage vendors optimizing too heavily for short-lived NAND flash?”

  1. Dimitris:

    Hope you’re doing well! Just curious how you see NTAP’s strategy of building an OS specifically around NAND flash for the FlashRay product as different from what startups like Pure Storage are doing? I watched Brian Pawlowski’s video, and it sure sounds like he’s describing exactly what Pure has been developing for the past 3+ years in a product that is GA.

    Welcome to the all-flash array party NetApp, but the future is already here!

    What I can tell you about Pure is that we include a Flash Personality Layer in our Purity Operating Environment that allows us to easily adopt new SSD-based memory technologies – not just NAND flash – into our product as they become available and proven. It’s much easier to implement a new personality profile in our controller software for a new SSD type than it is to re-engineer hardware around a new memory technology as some other startups would have to do.


  2. Hmm, where to start on the topic of Pure vs. NetApp. I will admit that Pure Storage has one of the best videos promoting its storage product; it basically made something as mundane as a VC-funded storage company look cool and exciting, but that’s about as far as it goes for Pure. Still very, very, very beta, not GA. Integration with Cisco UCS? I got a ton of “we will get back to you”s from Pure reps. How about front-ending other storage arrays? Not at this time. No storage workload tiering. What about replication? Not yet, according to the reps I spoke to.

    So wait: you tell me you have the most efficient inline dedupe algorithm, yet no replication functionality? Really?

    In today’s enterprise environments, OTV, converged fabrics, and Unified Compute Systems rely heavily on sensible array-to-array replication for things like giving virtual servers the ability to fail over to a secondary array or to live within a stretched fabric.

    As far as NetApp being late to the game with flash technology, perhaps you should dig a little deeper. Ever hear of a PAM card? And now Flash Pools with storage tiering? Plus the ability to replicate (SnapMirror), integration with vSphere APIs via SRM/SRA, the ability to front-end another manufacturer’s array, and vendor-validated design specifications (FlexPod, anyone?).

    Again, nice video; it must be fun to work in Palo Alto with VC money, making commercials with dinosaurs. However, if you want me to value-prop a storage subsystem, I need feature sets, not fancy marketing.

  3. Dimitris:

    Thanks for the quick reply. I can tell you that Pure is far from “very very very BETA” – the product is GA. And thanks for holding back your passionate opinions!

    Still, you haven’t answered my question about a purpose-built storage OS specific to flash. The point of your blog was that the new storage vendors are spending too much time optimizing for NAND flash, and I questioned how NTAP’s FlashRay strategy differs from what you took issue with. I didn’t ask how Pure compares to NTAP’s ONTAP technology.

    I appreciate and respect the legacy NTAP ONTAP technology, but that’s what it is: great legacy technology, designed for mechanical disk, that was truly disruptive back in the day. Fast-forward to 2013, and NTAP has simply validated Pure’s approach of building a new storage OS specific to solid state media by publicly announcing its intent to do the same with FlashRay.

    Let me ask the same question again: how is NTAP’s FlashRay strategy different from what a start-up like Pure is building? Please don’t compare ONTAP again – that’s not what I’m asking.

    All the best!


    1. Hi Mike,

      Not sure who the person that replied before is. I don’t moderate comments.

      But first, I’d like to say one thing: using another vendor’s high-traffic blog to promote your product and cast FUD on that same vendor’s products is not the most elegant approach, given that I didn’t mention any vendor names in my article but rather commented on a general design approach. Of course, I understand you need all the publicity you can get, and I will abide by my rule of not deleting comments in general. Just don’t abuse it.

      But I’ll humor you a bit.

      At the same time I won’t divulge top-secret information about how our new array works.

      So I will say only a couple of things:

      It doesn’t suffer from the pitfalls I outlined in my article. It is a long-term architecture with super-high sustained performance at very low latencies, and maintains compatibility with some of our core DNA.

      I will also say that even though it serves your company to say so, flash technology is not quite ready yet for cost-effective, very large capacity deployments. You don’t have to reply saying you disagree 😉

      All I’m saying is, there’s a place for everything. There’s no one storage system that can do it all. I can argue ONTAP can do about 90% of general storage duties (and is therefore applicable to most businesses), but there’s nothing that will do 100% and cover ALL possible use cases.

      That would be like asking “what’s the best car?” 🙂

      The answer is – it depends on a bunch of things…



  4. Dimitris:

    Apologies for my oversight in the last reply. My intent in my original post was to point out that there are products that address some of the pitfalls you identified in your blog post. Some flash-based storage products do suffer from the shortsightedness you describe, but not all. That said, I’m pleased to see you and I are on the same page with respect to this market (it is still evolving) and that no one storage product can meet everyone’s needs. Looking forward to hearing more about FlashRay as things become publicly available, and to competing with NTAP in 2014!


  5. Mike –
    I apologize if you perhaps mistook my comments for opinion. Believe me, I like the idea of a flash-only storage array, much like the EF540. However, as an employee of a well-known VAR, it’s hard from a positioning standpoint to sell an array that, yes, is fast and offers inline dedupe, when the answer from Pure on replication, integration with Cisco Unified Compute Systems (we sell a ton of these), and the ability to front-end other storage arrays was “not yet.” That typically drives the sale toward other vendors who offer those feature sets, at least until Pure can build in those offerings. Perhaps my beta comment applies more to customers who don’t have the ability or budget to completely segment their storage subsystems and are looking for utility, functionality, and ease of use along with performance.

    Dimitris – sorry for the random post; I’m not attempting to stir the pot. I’m usually just an observer of your blog, but I had a two-hour presentation from Pure that left me with more questions than answers. I hadn’t even considered your original point about designing an OS around a NAND type, which is a pretty good one.

  6. To my thinking, I agree with your sentiments, Dimitris; the vast number of new players creating purpose-built architectures and platforms optimised around NAND flash technology as it exists today is nothing short of astounding. Typically, these players are focused on one thing – performance – and all the rich data services we have grown used to and expect are being re-developed from scratch.

    I think the real winners here will be the established storage players who can successfully optimise and integrate the flash technology of the day into their existing platforms. NAND is already basically at end of life in terms of scaling and core technology; the longer-term non-volatile memory technologies will appear in a year or so, and these will quickly replace NAND. Hence: will the NAND-optimised products of today be able to easily accommodate the newer resistive-memory or phase-change-memory NVM technologies?

    Lastly, remember: an orchestra needs a lot more than a violin to be successful. So don’t forget about the rich data services that are taken for granted and expected in our data centers.

  7. This is a very compelling topic for me right now, as we are trying to figure out a way to improve our software build times. Currently the build is done on a DAS array using SSDs and a “RAM drive” dreamed up by our build support team. We would obviously prefer (from an IT standpoint) to leverage our investment in our VMware environment (FlexPod), but I/O performance seems to be the problem. We’re using NFS over 10G.

    So just last week, at a conference down in Palo Alto at VMware’s office, we were introduced to Pure, and it’s a very compelling story.

    I’m wondering if anything has substantively changed in the few months since this was written.

    Any thoughts or comments?

    Thanks for this blog, I’ve been reading for almost 2 hours…
