EMC’s VNX2 forklift: The importance of hardware re-use, slowing down obsolescence, and maximizing your investment

It was with interest that I watched the launch of EMC’s VNX refresh. The updated boxes got some long-awaited features, EMC talked a lot about how some pretty severe single-threaded bottlenecks were removed, more CPU and memory were added, and there was much rejoicing.

I’m not really trying to pick on the new boxes (that will be a future post, relax :)), but what I found interesting was that the code enabling most of the new features cannot be loaded on current-gen VNX boxes – not even the biggest, the 7500, which has plenty of CPU and RAM juice even compared to the next-gen boxes.

Software-defined storage indeed.

The existing VNX disk shelves also seemingly can’t be re-used at the moment (correct me if I’m wrong please).

This forced obsolescence has been a theme with EMC: Clariion -> VNX -> VNX2 are all complete forklift upgrades. When the original VNX was released, it was utterly incompatible with the CX (Clariion) shelves (SAS replaced FC connectivity), despite using the exact same code (FLARE).

Other vendors are guilty of this too – a new controller is released, and all prior investments on disk shelves are rendered useless (HDS did this with several iterations of AMS, maybe even with AMS -> HUS, HP with EVA…)

I understand that as technology progresses we sometimes have to abandon the old to make room for the new but, at the same time, customers make significant investments in N-1 technology – and often want to be able to re-use some of their investment with N (and sometimes N+1).

I just had this conversation with a customer, and he said “well, I throw away my gear every 3 years, why should I care?”

Let’s try a thought experiment.

Imagine you just bought a system that’s running the fastest controllers a company sells today, and you got 1PB of storage behind it.

Now, imagine that a mere month after you purchased your controllers, new ones are released that are significantly faster. OK, that stuff happens.

Your gear is not 3 years old yet. It’s 1 month old. It’s running OK.

Now, imagine that your array runs out of steam 6 months later due to unprecedented performance growth. Your system is now 7 months old. You can’t just throw it away and start fresh.

You could buy a new storage system and migrate some of the data to share the load. However, you don’t need more space – you just ran out of controller headroom. Indeed, you still have tons of free space.

But what if you could replace the controllers with the new, beefier ones? And maintain your investment in the 1PB of storage, cache etc? Wouldn’t that be nice?

Or at least be able to move some of your storage pools to the new family of controllers? Even if you had to reformat the disks?

Well – most vendors will tell you “sorry, no, you need to migrate your data to the new box”.

Let’s try another thought experiment.

You bought a storage system a year ago. It performs fine, but it lacks true deduplication capabilities. You have determined it would save you a lot of storage space (= money) if your array had deduplication.

The vendor you purchased the system from announces the refreshed storage OS that finally includes deduplication. And that same vendor made a truly gigantic fuss about software-defined storage, which made everyone feel software would be the big enabler and that coolness was a mere firmware upgrade away.

However, you are eventually told they will not allow the code that enables deduplication to be loaded to your array, and, instead, they ask you to migrate to the refreshed array that supports deduplication. Since the updated code somehow only runs on the new box. Something to do with unicorn milk.

But your array has plenty of CPU headroom to handle deduplication… and you could reformat the disks given some swing storage shelves if the underlying disk format is the issue. But the option is not provided.

How NetApp does things instead

At NetApp we sort of take it for granted so we don’t make a big fuss about software-defined storage, but hardware was always considered an enabler for the software and not the other way around. 

For instance: deduplication was released as a free software upgrade for Data ONTAP (the OS for our main line of storage). Back in 2007. For all storage protocols.

In general, we try to let systems load at a minimum N+1 software releases, but most of the time we utterly spoil customers and go far above and beyond – unless we’re talking about the smallest boxes, which naturally have less headroom.

For example, the now aging FAS 3070 I have in the local lab (the bigger of the older midrange boxes, released in 2006) supports anything from ONTAP 7.2.1 (what it was released with) to ONTAP 8.1.3 (released in mid-2013).

This spans multiple major ONTAP releases – huge changes in the code have happened during those releases: 7.3, 8.0, 8.1… Multiple newer arrays were also released as replacements for the 3070: 3170, 3270, 3250.

Yet the 3070 soldiers on with a fully supported, modern OS, 7 years later.

What arrays did our competitors have back then? What is the most modern OS those same arrays can run today? What is that OS missing vs the OS that competitor’s more modern arrays have?

Let’s talk disk shelves.

We used FC loop connectivity for the older shelves (DS14). We then switched to fancy multi-channel SAS and totally different shelves and disks, but never stopped supporting the older shelf technology, even with newer controllers.

That’s the big one. I have customers with DS14 shelves that they purchased for a 3070 that they now have on a 3270 running 8.2. It all works, all supported. Other vendors cut support off after the transition from FC to SAS.

Will we support those older shelves forever? No, that’s impossible, but at least we give our customers a lot of leeway and let them stretch their hardware investments significantly longer than any other major storage vendor I can think of.

Think long term

I encourage customers to always think long term. Try to stop thinking in 3-year increments. Start thinking of other ways you can stretch your investment, other ways to deploy older gear while still keeping it interoperable with newer hardware.

And start thinking about what will happen to your investment once newer gear is released.



10 Replies to “EMC’s VNX2 forklift: The importance of hardware re-use, slowing down obsolescence, and maximizing your investment”

  1. Not on-board with this. The 2020 could have run 8.x if NetApp had only spent another 200-300 per unit on a better CPU and more RAM. Likewise for the 2040 and 8.2.

    8.1 not supporting DS14 shelves with ESH2 controller modules? 3210/3140 with Flash Cache unsupported by 8.1?

    Unable to run 8.2 in my primary datacenter because my remote SnapMirror destination, which has plenty of horsepower and space, cannot also run 8.2?

    All this pain is saved on a three year refresh.

    1. Hi Richard,

      I think you just proved the points I was trying to make 🙂

      We still support DS14 with ESH4 modules (4Gbit). You want support for the ancient ESH2 module (2Gbit). You don’t need to throw out your storage – just replace the modules and you can use up to ONTAP 8.2 with ESH4.

      When was ESH2 released again? 🙂

We still support ESH2 with ONTAP 8.0.5. Which is ridiculous – those modules were released ages ago.

      Ultra-low-end controllers like the 2020 and 2040 typically are more CPU- and RAM-bound. The point I was making in my article was that if someone buys mid to high, they will enjoy hugely extended release support. With most major competitors, you could buy the biggest box and it would mean nothing.

      Even the 2040 was released supporting 7.3.2 and can still load 8.0.x and 8.1.x. With other vendors you wouldn’t even get support for the 8.0 equivalent, let alone something modern like 8.1.3. You would probably be stuck in the 7.3 release train, and able to go to 7.3.7.

      Yet you can install 8.1.3…



      1. The question is how far up the NetApp product line do I need to go to enjoy this extended support? I insisted on five year maintenance on my recent 2240-4 even though the reseller tried every trick in the book to talk me out of it. Will I regret this decision? I already regret the 2040s. With these guys as SnapMirror destinations, I’m stuck running 8.1.x on my 2240s. 8.2 can’t happen in my shop until the 2040s age out in 2014Q2 and get replaced with something else.

        1. 8.2 is not a big deal for small boxes – it’s mainly for 4+ core systems and especially if you want to run clustered ONTAP.

          So, in your shop there may be no serious need to go to 8.2.

          Was there something specific you needed from 8.2?

          In general though, to answer your question: to get the max latitude regarding what you can load on a box you should be on mid-mid and up. For example, not 3210, but 3240 instead and up.

          The higher up the range, the more future releases you will be able to load.

          Still – the key here is that you are even able to load 8.1.3 on that little 2040.

          Don’t lose sight of that.

  2. I think the key thing here is that if you bought a VNX system off EMC today, you would not be able to upgrade to the VNX2 code when it’s released.

    AFAIK, there is nothing in the NetApp product line which, if bought today, could not be upgraded to N+1 software and in many cases continue to take the latest and greatest code for years to come.

    But it’s also worth bearing in mind that just because version 8.2 is available, it doesn’t mean you should be upgrading to it. In many cases people chase firmware upgrades for no reason other than that they’re new. This results in unnecessary risk to your service and extra planned outages. We have many 3040 filers with uptimes in excess of 800 days that we’re only just now upgrading to 7.3.7. There has been no need before now to upgrade, and they’ve never skipped a beat.

  3. Dimitris,

    As a brief follow up to this post (for posterity), I confirmed that drives and DAEs from a VNX can be re-used for VNX Rockies. The ported drives may require a firmware update to handle MCR functionality but I haven’t been able to confirm that from engineering/support notes.

    Also, while the new VNX line supports the physical removal of an entire RAID Group from the array with subsequent reinsertion and recognition, that does not apply when porting across VNX generations. That is, the DAEs and drives will work, but the data will be lost. However, once the drives are successfully running under the new MCx, they gain that functionality. I’d like to test that between VNX Rockies systems but I need two lab systems to do that. If you’re interested I’ll let you know once I manage that.


    1. Hi Matt,

      AFAIK (and I checked), at the time of writing, my article was accurate and EMC did not allow re-using DAEs from the VNX with VNX2.

      Now I guess they do per your comment (but you still need to wipe the data).

      My point is: EMC (and most other manufacturers) make forklift upgrades a necessity. CX -> VNX – impossible.

      At least with the NetApp FAS systems we allowed the old FC shelves to be used with newer controllers and firmware.

      Investment protection isn’t a small thing.



      1. (Especially since we know each other, I occasionally flag some of your posts for later follow up in case things change and I know your site is a great resource for many. I try not to resurrect posts unless it’s truly pertinent and if the data isn’t broadly known.)

        Of course investment protection is a huge thing. On that line though, it is pertinent to address the scenario you listed regarding the “one month old” purchase. Trust me that I used similar scenarios when portraying risks with any vendor that isn’t NetApp or a ZFS based open system. I’ve been part of NetApp deals where NetApp probably won just off that point alone!

        While NetApp certainly has a longer record of supporting older FAS technology, with the VNX/VNXe EMC has at least recently delivered a platform showing in earnest that they are building toward the same goal in current generations, with likely continued support in future platform revisions. While the first-gen units only have physical support (with the need to re-lay the data after movement), the current generation of hardware and its operating environment supports a more seamless path.

        We’ll have to wait and see.

        So I wasn’t implying to correct your post as it was written at the time regarding EMC but was updating the material since I know you like the most current data (competitive or not) and since other people will probably land here researching the topic.

        Always the best – cheers!
