Should EMC move to more multi-functional devices?

Here’s the deal: EMC has a lot of cool stuff. Lots of it came through acquisitions. Lots of it runs on the x86 platform, believe it or not.

At the moment one needs to buy multiple boxes from EMC to do NAS, SAN, archiving, etc.

Imagine if, instead, you got generic boxes (there could be a few models, with power scaling relative to cost).

In each box you could run a Clariion, a Centera, a Celerra, a print server, WAAS (even though it’s Cisco’s, it’s really a Linux box underneath), something like RecoverPoint, and so on.

All the products could be custom virtual machine appliances, possibly running on a modified ESX platform (so you couldn’t just run them anywhere). You’d get all the benefits of cool technologies such as VMotion and HA, and you could easily add to the cluster as you grew.
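To make the idea concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the appliance names, the Box class, the failover logic); it’s not real EMC or VMware code, just the shape of the thing:

    # Toy model: generic boxes hosting appliance VMs, with a crude
    # HA-style restart when a box dies. All names are made up.

    CATALOG = {
        "clariion-vm": 8,   # SAN appliance, needs 8 "CPU units"
        "celerra-vm":  4,   # NAS data mover
        "centera-vm":  2,   # archiving
    }

    class Box:
        def __init__(self, name, cpus):
            self.name, self.cpus, self.appliances = name, cpus, []

        def free(self):
            return self.cpus - sum(CATALOG[a] for a in self.appliances)

    def place(appliance, boxes):
        """Put an appliance on the box with the most free capacity."""
        box = max(boxes, key=Box.free)
        if box.free() < CATALOG[appliance]:
            raise RuntimeError("cluster full: add another generic box")
        box.appliances.append(appliance)
        return box

    def ha_restart(dead, boxes):
        """HA in miniature: re-place a dead box's appliances elsewhere."""
        survivors = [b for b in boxes if b is not dead]
        for appliance in dead.appliances:
            place(appliance, survivors)
        dead.appliances = []

    boxes = [Box("box1", 16), Box("box2", 16)]
    for a in ("clariion-vm", "celerra-vm", "centera-vm"):
        place(a, boxes)
    ha_restart(boxes[0], boxes)   # box1 fails; its appliances restart on box2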

This doesn’t preclude the use of specialized hardware to accelerate certain functions, though in this age of quad-core CPUs even that may be unnecessary.

Think about it. EMC owns the IP for all that technology.

They don’t need to make less money – if anything, since all the platforms would be virtual, production would be greatly streamlined. They could even have a single type of box (say a quad-socket, quad-core machine with tons of RAM and expansion capability) as the hardware. You need more speed for NAS? Add an extra box, buy an extra license, and load-balance a new virtual data mover.
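Here’s what that scale-out could look like, again with invented names. Hashing the file system name across movers is just one plausible scheme (a real system would want consistent hashing so less data moves on a rebalance):

    # Hypothetical scale-out: each file system is owned by one virtual
    # data mover, chosen by hashing the file system name. Adding a
    # mover (plus the box and license behind it) spreads the load wider.

    import hashlib

    def owner(fs_name, movers):
        h = int(hashlib.md5(fs_name.encode()).hexdigest(), 16)
        return movers[h % len(movers)]

    movers = ["dm-1", "dm-2"]
    print(owner("projects", movers))   # one of two movers serves it

    movers.append("dm-3")              # bought a box, licensed a mover
    print(owner("projects", movers))   # some file systems shift to dm-3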

This, of course, is unattainable at the moment: I don’t think VMware can yet provide such low latency and high throughput, but maybe I’m wrong.

Such a move won’t by itself fix the proliferation of management interfaces, but EMC could build a common interface on top.
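That part is mostly a software problem: something like a thin facade mapping one set of verbs onto each product’s own manager underneath. A purely illustrative sketch, with stand-in classes (nothing here is a real EMC tool):

    # Stand-ins for per-product element managers; in real life these
    # would wrap whatever each product already exposes (CLI, API, etc.).

    class BlockBackend:
        def provision(self, name, gb):
            return f"LUN '{name}' created, {gb} GB"

    class NasBackend:
        def provision(self, name, gb):
            return f"file system '{name}' created, {gb} GB"

    class UnifiedConsole:
        """One set of verbs, many back ends."""
        def __init__(self):
            self.backends = {"block": BlockBackend(), "nas": NasBackend()}

        def provision(self, kind, name, gb):
            return self.backends[kind].provision(name, gb)

    console = UnifiedConsole()
    print(console.provision("block", "oradata", 500))
    print(console.provision("nas", "homedirs", 2000))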

Thoughts?

D

7 Replies to “Should EMC move to more multi-functional devices?”

  1. Given the current virtualization architecture, the main limiting factor will be the visibility of the physical servers’ HBAs to the virtual machines. Right now, VMs cannot directly access the HBA, only the virtual SCSI devices presented by the vmkernel.

  2. Modifying VMware to allow direct access to some custom EMC HBAs might do the trick. It definitely wouldn’t be vanilla ESX.

    Or, one might go the NetApp route and not present raw LUNs to the outside world, though I don’t think that’s a good idea.

    Another way to do it would be to NOT use VMware, but instead decide on a common platform that would run the different products simply as apps. Now THAT is hard since, even though many of the platforms are Linux, not all are, and among the Linux ones, not all are the same flavor or run the same kernel. There are also kernel modifications galore. I’d also be worried about one app interfering with the rest.

    Still, it would be cool to see some decent convergence here. This is NetApp’s claim to fame, though, as I said, I don’t quite agree with their approach, since everything becomes WAFL at the back end.

    D

  3. I had to scrap one of our CX700 systems (we destroy systems for data security purposes). I was a little horrified to discover that the SPs in the CX700 are vanilla-looking x86 boards running Windows XP. I’m really not making this up – the SP is a vanilla-looking board with dual 3.06GHz Xeons, 4GB RAM, and a Windows XP Embedded COA!

    I do not understand how any logical thought process could have concluded that Windows XP was a good idea for this application.

  4. I forgot to add that my last comment wasn’t just intended as a swipe at EMC.

    My point was that the heart of the CX series EMC arrays is basically a glorified PC. So you would expect virtualising the software in the storage processors to be completely trivial.

  5. To be frank, practically EVERYONE’s storage nowadays effectively runs on commodity hardware. The differentiators really are software and how much monkeying with the commodity hardware has occurred.

    Swipe at EMC or not, I’ve heard this enough times that the record has to be set straight (I don’t work for EMC; if anyone from EMC is reading and cares to elaborate, feel free).

    It’s not as commodity as you’d think in EMC’s case – the memory is mirrored across the two controllers, for instance. That doesn’t sound like a vanilla XP install or a vanilla x86 board… and guess what, it’s not.

    Saying an EMC CX700 is running XP is about the same as saying “ESX really is RedHat with some extra services”.

    Another thing: There’s nothing inherently wrong with the NT kernel itself, really. The crap people like to pile on TOP of it is a different matter. Same as Linux – Linux means the kernel, NOT the whole distribution. A Brocade switch running a modified version of the Linux kernel is NOT really running a pared-down version of RedHat plus some special Brocade daemons.

    There are some pretty serious issues preventing the CX700 OS from running on VMware; it is NOT as trivial as you think. Feasible, yes.

    D

  6. Mirrored memory is surprisingly easy to implement on standard hardware. I guarantee that if you look closely at the SP boards in the CX700, you’ll find either an Infiniband or Myrinet chipset.

    Great blog BTW.

  7. There are probably a few ways to do the mirroring quickly – the point still remains that none of those ways work with VMware, so it’s not totally trivial to virtualize the Clariion processors. (For the curious, a rough sketch of the mirroring idea follows the replies.)

    Glad you like the blog. Any ideas on what you’d like to see?

    D
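To close the loop on replies 5 through 7: the point of mirrored write cache is that a write is acknowledged to the host only once BOTH storage processors hold a copy, so a dying SP can’t lose acknowledged data. Here’s the idea in miniature; the peer link is faked with an in-process reference, and that interconnect (whatever chipset it really uses) is exactly the part that’s hard to reproduce inside a VM:

    # Toy mirrored write cache: two storage processors, each write is
    # copied to the peer before the host sees an acknowledgement.

    class StorageProcessor:
        def __init__(self, name):
            self.name = name
            self.cache = {}      # block address -> data
            self.peer = None     # stand-in for the mirroring interconnect

        def host_write(self, addr, data):
            self.cache[addr] = data        # local copy
            if self.peer is None:
                # Peer is down: a real array would disable write cache
                # and write through to disk before acknowledging.
                raise RuntimeError("peer lost, cache must be disabled")
            self.peer.cache[addr] = data   # synchronous mirror copy
            return "ack"                   # only now is the write "done"

    spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
    spa.peer, spb.peer = spb, spa
    assert spa.host_write(0x2A, b"payload") == "ack"
    assert spb.cache[0x2A] == b"payload"   # survivor still has the data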
