
Are you doing a disservice to your company with RFPs?

Whether we like it or not, RFPs (Requests For Proposal) are a fact of life for vendors.

It usually works like this: A customer has a legitimate need for something. They decide (for whatever reason) to get bids from different vendors. They then craft an RFP document that is either:

  1. Carefully written, with the best intentions, so that they get the most detailed proposal possible given their requirements, or
  2. Carefully tailored, with the help of their preferred vendor, to box out the other vendors.

Both approaches have merit, even if #2 seems unethical and almost illegal. I understand that some people are just happy with what they have, so they word their document to block anyone from changing their environment, turning the whole RFP process into an exercise in futility. I doubt that whatever I write here will change that kind of mindset.

However – I want to focus more on #1. The carefully written RFP that truly has the best intentions (and maybe some of it will rub off on the #2 “blocking” RFP type folks).

Here’s the major potential problem with the #1 approach:

You don’t know what you don’t know. For example, maybe you are not an expert on how caching works at a very low level, but you are aware of caching and what it does. So – you know that you don’t know about the low-level aspects of caching (or whatever other technology) and word your RFP so that you learn in detail how the various vendors do it.

The reality is – there are things whose existence you can’t even imagine – indeed, most things:

[Diagram: what you know vs. what you don't know you don't know]

By crafting your RFP around things you are familiar with, you are potentially (and unintentionally) eliminating solutions that may do things that are entirely outside your past experiences.

Back to our caching example – suppose you are familiar with arrays that need a lot of write cache in order to work well for random writes, so you put in your storage RFP requirements about very specific minimum amounts of write cache.

That’s great and absolutely applicable to the vendors that write to disk the way you are familiar with.

But what if someone writes to disk entirely differently than what your experience dictates and doesn’t need large amounts of write cache to do random writes even better than what you’re familiar with? What if they use memory completely differently in general?

Another example where almost everyone gets it wrong is specifying performance requirements. Unless you truly understand the parameters needed to properly size a storage system, it's almost guaranteed the requirements list will be incomplete. For example, specifying IOPS without, at a minimum, an I/O size, a read/write blend, a latency target and a random vs. sequential mix will not be sufficient to size a storage system (there's a lot more here in case you missed it).
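To illustrate why a bare IOPS number is ambiguous, here's a minimal back-of-the-envelope sketch; the IOPS figure and I/O sizes are made up purely for illustration:

```python
# Back-of-the-envelope: the same "50,000 IOPS" requirement implies wildly
# different throughput depending on the I/O size (and that's before the
# read/write blend, latency target and random vs. sequential mix).
def throughput_mb_s(iops: int, io_size_kb: int) -> float:
    """Raw throughput implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1024.0

requirement_iops = 50_000           # hypothetical RFP figure

for io_size_kb in (4, 8, 64, 256):  # nothing in a bare IOPS spec says which applies
    mb_s = throughput_mb_s(requirement_iops, io_size_kb)
    print(f"{requirement_iops} IOPS at {io_size_kb:>3} KB = {mb_s:>8.0f} MB/s")
```

The same nominal requirement can mean anything from a couple hundred MB/s to several GB/s, which is why no vendor can size a system from an IOPS number alone.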

By setting an arbitrary limit to something that doesn’t apply to certain technologies, you are unintentionally creating a Type #2 RFP document – and you are boxing out potentially better solutions, which is ultimately not good for your business. And by not providing enough information, you are unintentionally making it almost impossible for the solution providers to properly craft something for you.

So what to do to avoid these RFP pitfalls?

Craft your RFP by asking questions about solving the business problem, not by trying to specify how the vendor should solve the business problem.

For example: Say something like this about space savings:

“Describe what, if any, technologies exist within the gizmo you’re proposing that will result in the reduction of overall data space consumption. In addition, describe what types of data and what protocols such technologies can work with, when they should be avoided, and what, if any, performance implications exist. Be specific.”

Instead of this:

"We need the gizmo to have deduplication that works this way with this block size plus compression that uses this algorithm but not that other one".

Or, say something like this about reliability:

“Describe the technologies employed to provide resiliency of data, including protection from various errors, like lost or misplaced writes”.

Instead of:

“The system needs to have RAID10 disk with battery-backed write cache”.

It’s not easy. Most of us try to solve the problem and have at least some idea of how we think it should be solved. Just try to avoid that instinct while writing the RFP…

And, last but not least:

Get some help crafting your RFP. We have this website that will even generate one for you. It's NetApp-created, so take it with a grain of salt, but it was designed so the questions would be fair, open-ended and not really vendor-specific. At least go through it and try building an RFP with it. See if it includes questions you hadn't thought of asking, and see how things are worded.

And get some help determining your I/O requirements… most vendors have tools that can help with that. It may mean repeating the process several times – but at least you'll get to see how thorough each vendor is regarding the performance piece. Beware of the ones that aren't thorough.

D

NetApp delivers 1.3TB/s performance to giant supercomputer for big data

(Edited: My bad, it was 1.3TB/s, not 1TB/s).

What do you do when you need so much I/O performance that no one single storage system can deliver it, no matter how large?

To be specific: What if you needed to transfer data at 1TB per second (or 1.3TB/s, as it eventually turned out to be)?

That was the problem faced by the U.S. Department of Energy (DoE) and their Sequoia supercomputer at the Lawrence Livermore National Laboratory (LLNL), one of the fastest supercomputing systems on the planet.

You can read the official press release here. I wanted to get more into the technical details.

People talk a lot about "big data" these days – no clear definition seems to exist. In my opinion, it's something that has some of the following properties:

  • Too much data to be processed by a “normal” computer or cluster
  • Too much data to work with using a relational DB
  • Too much data to fit in a single storage system for performance and/or capacity reasons – or maybe just simply:
  • Too much data to process using traditional methods within an acceptable time frame

Clearly, this is a bit loose – how much is “too much”? How long is “too long”? For someone only armed with a subnotebook computer, “too much” does not have the same meaning as for someone rocking a 12-core server with 256GB RAM and a few TB of SSD.

So this definition is relative… but in some cases, such as the one we are discussing, absolute – given the limitations of today’s technology.

For instance, the amount of storage LLNL required was several tens of PB in a single storage pool that could provide unprecedented I/O performance to the tune of 1TB/s. Both size and performance needed to be scalable. It also needed to be reliable and fit within a reasonable budget and not require extreme space, power and cooling. A tall order indeed.

This created some serious logistics problems regarding storage:

  • No single disk array can hold that amount of data
  • No single disk array can perform anywhere close to 1TB/s

Let's put this in perspective: The storage systems that scale the biggest are typically scale-out clusters from the usual suspects of the storage world (we make one, for example). Even so, they max out at fewer PB than this deployment required.

The even bigger problem is that a single large scale-out system can’t really deliver more than a few tens of GB/s under optimal conditions – more than fast enough for most “normal” uses but utterly unacceptable for this case.

The only realistic solution to satisfy the requirements was massive parallelization, specifically using the NetApp E-Series for the back-end storage and the Lustre cluster filesystem.

 

A bit about the solution…

Almost a year ago NetApp purchased the Engenio storage line from LSI. That storage line is resold by several companies like IBM, Oracle, Quantum, Dell, SGI, Teradata and more. IBM also resells the ONTAP-based FAS systems and calls them “N-Series”.

That purchase has made NetApp the largest provider of OEM arrays on the planet by far. It was a good deal – very rapid ROI.

There was a lot of speculation as to why NetApp would bother with the purchase. After all, the ONTAP-based systems have a ton more functionality than pretty much any other array and are optimized for typical mostly-random workloads – DBs, VMs, email, plus megacaching, snaps, cloning, dedupe, compression, etc – all with RAID6-equivalent protection as standard.

The E-Series boxes on the other hand don’t do thin provisioning, dedupe, compression, megacaching… and their snaps are the less efficient copy-on-first-write instead of redirect-on-write. So, almost the anti-ONTAP :)

The first reason for the acquisition was that, in purely financial terms, it was a no-brainer deal even if one sells shoes for a living, let alone storage. Even if there were no other reasons, this one would be enough.

Another reason (and the one germane to this article) was that the E-Series has a tremendous sustained sequential performance density. For instance, the E5400 system can sustain about 4GB/s in 4U (real GB/s, not out of cache), all-in. That’s 4U total for 60 disks including the controllers. Expandable, of course. It’s no slouch for random I/O either, plus you can load it with SSDs, too… :)

Again, note – 60 drives per 4U shelf and that includes the RAID controllers, batteries etc. In addition, all drives are front-loading and stay active while servicing the shelf – as opposed to most (if not all) dense shelves in the market that need the entire (very heavy) shelf pulled out and/or several drives offlined in order to replace a single failed drive… (there’s some really cool engineering in the shelf to do this without thermal problems, performance loss or vibrations). All this allows standard racks and no fear of the racks tipping over while servicing the shelves :) (you know who you are!)

There are some vendors that specialize purely in sequential I/O (and tipping racks) – yet they have about 3-4x less performance density than the E5400, even though they sometimes have higher per-controller throughput. In a typical marketing exercise, some of our more usual competitors have boasted 2GB/s/RU for their controllers, meaning that the controllers alone (taking up 4U in that example) can do 8GB/s – but achieving that requires all kinds of extra rack space (extra UPSes, several shelves, etc.), which drops the actual resulting throughput to well under 1GB/s/RU. Not to mention the cost (those systems are typically more expensive than a 5400), which matters with projects of the scale we are talking about.

Most importantly, what we accomplished at the LLNL was no marketing exercise…

 

The benefits of truly high performance density

Clearly, if your requirements are big enough, you end up spending a lot less money and needing a lot less rack space, power and cooling by going with a highly performance-dense solution.

However, given the requirements of the LLNL, it’s clear that you can’t use just a single E5400 to satisfy the performance and capacity requirements of this use case. What you can do though is use a bunch of them in parallel… and use that massive performance density to achieve about 40GB/s per industry-standard rack with 600x high-capacity disks (1.8PB raw per rack).

For even higher performance per rack, the E5400 can use the faster SAS or SSD drives – 480 drives per rack (up to 432TB raw), providing 80GB/s reads/60GB/s writes.
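To put those density numbers in context, here's a rough back-of-the-envelope sketch. The per-rack figures come from the paragraphs above; the capacity target is an assumption standing in for "several tens of PB":

```python
import math

# Per-rack figures quoted above (high-capacity E5400 configuration).
rack_throughput_gb_s = 40      # ~40 GB/s per industry-standard rack
rack_capacity_pb     = 1.8     # ~1.8 PB raw per rack

# Deployment-level targets discussed in this post.
target_throughput_gb_s = 1300  # ~1.3 TB/s sustained
target_capacity_pb     = 50    # stand-in for "several tens of PB"

racks_for_throughput = math.ceil(target_throughput_gb_s / rack_throughput_gb_s)
racks_for_capacity   = math.ceil(target_capacity_pb / rack_capacity_pb)

print(f"Racks needed to reach ~1.3 TB/s: {racks_for_throughput}")                   # ~33
print(f"Racks needed to reach ~{target_capacity_pb} PB raw: {racks_for_capacity}")  # ~28
```

No single array comes anywhere near either number on its own, which is why massive parallelization was the only realistic way to meet the requirements.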

 

Enter the cluster filesystem

So, now that we picked the performance-dense, reliable, cost-effective building block, how do we tie those building blocks together?

The answer: By using a cluster filesystem.

Loosely defined, a cluster filesystem is simply a filesystem that can be accessed simultaneously by the servers mounting it. In addition, it also typically means it can span storage systems and make them look as one big entity.

It's not a new concept – there are several examples, old and new: AFS, Coda and GPFS, plus the more prevalent StorNext and Lustre.

The LLNL picked Lustre for this project. Lustre is a distributed filesystem that spreads I/O across multiple Object Storage Servers, each connected to its own storage (Object Storage Targets). Metadata is served by dedicated servers that are not part of the I/O stream and thus not a bottleneck. See below for a picture (courtesy of the Lustre manual) of how it is all connected:

 

Lustre Scaled Cluster

 

High-speed connections are used liberally for lower latency and higher throughput.

A large file can reside on many storage servers, and as a result I/O can be spread out and parallelized.

Lustre clients see a single large namespace and run a proprietary protocol to access the cluster.
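To make the parallelization idea concrete, here's a toy sketch of how striping a single large file across several object storage targets multiplies throughput. The stripe size, OST count and per-OST bandwidth are illustrative assumptions, not Lustre internals or measured values:

```python
# Toy model of cluster-filesystem striping: a large file is cut into stripes
# placed round-robin across OSTs, so a single read/write hits many servers.
STRIPE_SIZE_MB   = 4      # assumed stripe size
OST_COUNT        = 8      # assumed number of object storage targets
OST_BANDWIDTH_MB = 500    # assumed per-OST streaming bandwidth (MB/s)

def ost_for_offset(offset_mb: float) -> int:
    """Which OST holds the stripe containing this file offset (round-robin)."""
    stripe_index = int(offset_mb // STRIPE_SIZE_MB)
    return stripe_index % OST_COUNT

file_size_mb = 10_000     # a 10GB file, spread across all OSTs

single_target_s = file_size_mb / OST_BANDWIDTH_MB
striped_s       = file_size_mb / (OST_BANDWIDTH_MB * OST_COUNT)

print("First 8 stripes land on OSTs:", [ost_for_offset(i * STRIPE_SIZE_MB) for i in range(8)])
print(f"Read from a single target: ~{single_target_s:.0f}s")
print(f"Same read striped across {OST_COUNT} targets: ~{striped_s:.1f}s")
```

Scale the same idea up to dozens of object storage servers, each backed by a performance-dense E-Series array, and the aggregate numbers discussed above start to make sense.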

It sounds good in theory – and it delivered in practice: 1.3TB/s sustained performance was demonstrated to the NetApp block devices. Work is ongoing to finalize the testing with the complete Lustre environment. Not sure what the upper limit would be. But clearly it’s a highly scalable solution.

 

Putting it all together

NetApp has fully realized solutions for the "big data" applications out there – complete with the products and services needed to complete each engagement. The Lustre solution employed by the LLNL is just one of the options available. There are solutions for Hadoop, full-motion uncompressed HD video, and more.

So – how fast do you need to go?

D

 

 


 

Stack Wars: The Clone Wars

It seems that everyone and their granny is trying to create some sort of stack offering these days. Look at all the brouhaha – HP buying 3Par, Dell buying Compellent, all kinds of partnerships being formed left and right. Stacks are hot.

To the uninitiated, a stack is what you can get when a vendor is able to offer multiple products under a single umbrella. For instance, being able to get servers, an OS, a DB, an email system, storage and network switches from a single manufacturer (not a VAR) is an example of a single-sourced stack.

The proponents of stacks maintain that with stacks, customers potentially get simpler service and better integration – a single support number to call, “one throat to choke”, no finger-pointing between vendors. And that’s partially true. A stack potentially provides simpler access to support. On the “better integration” part – read on.

My main problem with stacks is that nobody really offers a complete stack, and those that are more complete than others don't necessarily offer best-of-breed products (not even "good enough" ones), nor do they offer particularly great integration between the products within the stack.

Personal anecdote: a few years ago I had the (mis)fortune of being the primary backup and recovery architect for one of the largest airlines in the world. Said airline had a very close relationship with a certain famous vendor. Said vendor only flew with that airline, always bought business class seats, the airline gave the vendor discounts, the vendor gave the airline discounts, and in general there was a lot of mutual back-scratching going on. So much so that the airline would give that vendor business before looking at anyone else, and only considered alternative vendors if the primary vendor didn’t have anything that even smelled like what the airline was looking for.

All of which resulted in the airline getting a backup system that was designed for small businesses since that’s all that vendor had to offer (and still does).

The problem is, that backup product simply could not scale to what I needed it to. I ended up having to stop file-level logging and could only restore entire directories since the backup database couldn’t handle the load, despite me running multiple instances of the tool for multiple environments. Some of those directories were pretty large, so you can imagine the hilarity that ensued when trying to restore stuff…

The vendor’s crack development team came over from overseas and spent days with me trying to figure out what they needed to change about the product to make it work in my environment (I believe it was the single largest installation they had).

Problem is, they couldn’t deliver soon enough, so, after much pain, the airline moved to a proper enterprise backup system from another vendor, which fixed most problems, given the technology I had to work with at the time.

Had the right decision been made up front, none of that pain would have been experienced. The world would have been one IT anecdote short, but that’s a small price to pay for a working environment. And this is but just one way that single vendor stacks can fail.

How does one decide on a stack?

Let’s examine a high-level view of a few stack offerings. By no means an all-inclusive list.

Microsoft: They offer an OS (catering to everything from servers to phones), a virtualization engine, a DB, a mail system, a backup tool and the most popular office apps in the world, among many other things. Few would claim all the bits are best-of-breed, despite being hugely popular. Microsoft doesn't like playing in the hardware space, so they're a pure software stack. Oh, there's the Xbox, too.

EMC: Various kinds of storage platforms (7-10 depending on how you count), all but one (Symmetrix) coming from acquisition. A virtualization engine (80% owner of VMware – even though it’s run as a totally separate entity). DB, many kinds of backup, document management, security and all other kinds of software. Some bits are very strong, others not so much.

Oracle: They offer an OS, a virtualization engine, a DB, middleware, servers and storage. An office suite. No networking. Oracle is a good example of an incomplete software/hardware stack. Aside from the ultra-strong DB and good OS, few will say their products are all best-of-breed.

Dell: They offer servers, desktops, laptops, phones, various flavors of storage and switches. Dell is an example of a pure hardware stack. Not many software aspirations here. Few will claim any of their products are best-of-breed.

HP: They offer servers, desktops, laptops, phones, even more flavors of storage, a UNIX OS with its own type of virtualization (can't run x86), switches, backup software, a big services arm, printers, calculators… All in all, great servers, calculators and printers; not so sure about the rest. Fairly complete stack.

IBM: Servers, 2 strong DBs, at least 3 different OSes, CPUs, many kinds of storage, an email system, middleware, backup software, and an immense services arm. No x86 virtualization (they do offer virtualization for their other platforms). Very complete stack, albeit without networking.

Cisco: All kinds of networking (including telephony), servers. Limited stack if networking is not your bag, but what it offers can be pretty good.

Apple: Desktops, laptops, phones, tablets, networking gear, software. Great example of a consumer-oriented hardware and software stack. They used to offer storage and servers but they exited that business.

Notice anything common about the various single-vendor stacks? Did you find a stack that can truly satisfy all your IT needs without giving anything up?

The fact of the matter is that none of the above companies, as formidable as they are, offers a complete stack – software and hardware. You can get some of the way there, but it’s next to impossible to single-source everything without shooting yourself in the foot. At a minimum, you’re probably looking at some kind of Microsoft stack + server/storage stack + networking stack – mixing a minimum of 3 different vendors to get what you need without sacrificing much (the above example assumes you either don’t want to virtualize or are OK with Hyper-V).

Most companies have something like this: Microsoft stack + virtualization + DB + server + storage + networking – 6 total stacks.

So why do people keep pushing single-vendor stacks?

Only a few valid reasons (aside from it being fashionable to talk about). One of them is control – the more stuff you have from a company, the tighter their hold on you. The other is that it at least limits the support points you have to deal with, and can potentially get you better pricing (theoretically). For instance, Dell has "given away" many an EqualLogic box. Guess what – the cost of that box was blended into everything else you purchased; it's all a shell game. But if someone does buy a lot of gear from a single vendor, there are indeed ways to get better deals. You just won't necessarily get best-of-breed or even good enough gear.

What about integration?

One would think that buying as much as possible from a single vendor gets you better integration between the bits. Not necessarily. For instance, most large vendors acquire various technologies and/or have OEM deals – if one looks just at storage as an example, Dell has their own storage (Equallogic and Compellent – two different acquisitions) plus an OEM deal with EMC. There’s not much synergy between the various technologies.

HP has their own storage (EVA, plus the LeftHand, IBRIX, PolyServe and 3Par acquisitions) and two OEM deals: with Dot Hill for the MSA boxes and with HDS for the high-end XP systems. That's a lot of storage tin to keep track of (all of which comes from 7 different places and 7 totally different codebases), and any HP management software needs to be able to work with all of those boxes (and doesn't).

IBM has their own storage (XIV, DS6000, DS8000, SONAS, SVC – I believe three homegrown and two acquisitions) and two different OEM deals (NetApp for N Series and LSI Logic for DS5000 and below). The integration between those products and the rest of the IBM landscape should be examined on a case-by-case basis.

EMC’s challenge is that they have acquired too much stuff, making it difficult to provide proper integration for everything. Supporting and developing for that plethora of systems is not easy and teams end up getting fragmented and inconsistent. Far too many codebases to keep track of. This dilutes the R&D dollars available and prolongs development cycles.

Aren’t those newfangled special-purpose multi-vendor stacks better?

There's another breed of stack: the special-purpose one, where a third party combines gear from different vendors, assembles it and sells it as a supported and pre-packaged solution for a specific application. Such stacks are actually not new – they have been sold for military, industrial and healthcare applications for the longest time. Recently, NetApp and EMC have been promoting different versions of what a "virtualization stack" should be (as usual, with very different approaches – check out FlexPod and Vblock).

The idea behind the “virtualization stack” is that you sell the customer a rack that has inside it network gear, servers, storage, management and virtualization software. Then, all the customer has to do is load up the gear with VMs and off they go.

With such a stack, you don’t limit the customer by making them buy their gear all from one vendor, but instead you limit them by pre-selecting vendors that are “best of breed”. Not everyone will be OK with the choice of gear, of course.

Then there’s the issue of flexibility – some of the special-purpose stacks literally are like black boxes – you are not supposed to modify them or you lose support. To the point where you’re not allowed to add RAM to servers or storage to arrays, both limitations that annoy most customers, but are viewed as a positive by some.

Is it a product or a “kit”?

Back to the virtualization-specific stacks: This is the main argument – do you buy a ready-made "product", or a "kit" that some third party assembles after following a detailed design guide? As of this writing (there have been multiple changes to how this is marketed), Vblock is built by a company known as VCE – effectively a third party that puts together a custom stack made of different kinds of EMC storage, Cisco switches and servers, VMware, and a management tool called UIM. It is not built by EMC, VMware or Cisco. VCE then resells the assembled system to other VARs or directly to customers.

NetApp’s FlexPod is built by VARs. The difference is that more than one VAR can build FlexPods (as long as they meet some specific criteria) and don’t need to involve a middleman (also translating to more profits for VARs). All VARs building FlexPods need to follow specific guidelines to build the product (jointly designed by VMware, Cisco and NetApp), use components and firmware tested and certified to work together, and add best-of-breed management software to manage the stack.

The FlexPod emphasis is on sizing and performance flexibility (from tiny to gigantic), Secure Multi Tenancy (SMT – a unique differentiator), space efficiency, application integration, extreme resiliency and network/workload isolation – all highly important features in virtualized environments. In addition, it supports non-virtualized workloads.

Ultimately, in both cases the customer ends up with a pre-built and pre-tested product.

What about support?

This has been both the selling point and the drawback of such multi-vendor stacks. In my opinion, it has been the biggest selling point for Vblock, since a customer calls VCE for support. VCE has support staff that is trained on Cisco, VMware and EMC and can handle many support cases via a single support number – obviously a nice feature.

Where this breaks down a bit: VCE has to engage VMware, EMC and Cisco for anything that's serious. Furthermore, Vblock support doesn't cover the entire stack – it stops at the hypervisor.

For instance, if a customer hits an Enginuity (Symmetrix OS) bug, then the EMC Symm team will have to be engaged, and possibly write a patch for the customer and communicate with the customer. VCE support simply cannot fix such issues, and is best viewed as first-level support. Same goes for Cisco or VMware bugs, and in general deeper support issues that the VCE support staff simply isn’t trained enough to resolve. In addition, Vblocks can be based on several different kinds of EMC storage, that itself requires different teams to support it.

Finally – ask VCE if they are legally responsible for maintaining the support SLAs. For instance, who is responsible if there is a serious problem with the array and the vendor takes 2 days to respond instead of 30 minutes?

FlexPod utilizes a cooperative support model between NetApp, Cisco and VMware, and cases are dealt with by experts from all three companies working in concert. The first company to be called owns the case.

When the going gets tough, both approaches work similarly. For easy cases that can be resolved by the actual VCE support personnel, Vblock probably has an edge.

Who needs the virtualization stack?

I see several kinds of customers that have a need for such a stack (combinations are possible):

  1. The technically/time constrained. For them, it might be easier to get a somewhat pre-integrated solution. That way they can get started more quickly.
  2. The customers needing to hide behind contracts for legal/CYA reasons.
  3. The large. They need such huge amounts of servers and storage, and so frequently, that they simply don’t have the time to put it in themselves, let alone do the testing. They don’t even have time to have a PS team come onsite and build it. They just want to buy large, ready-to-go chunks, shipped as ready to use as possible.
  4. The rapidly growing.
  5. Anyone wanting pre-tested, repeatable and predictable configurations.

Interesting factoid for the #3 case (large customer): They typically need extensive customizations and most of the time would prefer custom-built infrastructure pods with the exact configuration and software they need. For instance, some customers might prefer certain management software over others, and/or want systems to come preconfigured with some of their own customizations in-place – but they are still looking for the packaged product experience. FlexPod is flexible enough to allow that without deviating from the design. Of course, if the customer wants to dramatically deviate (i.e. not use one of the main components like Cisco switches or servers, for instance) – then it stops being a FlexPod and you’re back to building systems the traditional way.

What should customers really be looking for when building a stack?

In my opinion, customers should be looking for a more end-to-end experience.

You see – even with the virtualization stack in place, you will need to add:

  • OSes
  • DBs
  • Email
  • File Services
  • Security
  • Document Management
  • Chargeback
  • Backup
  • DR
  • etc etc.

You should partner with someone that can help you not just with the storage/virtualization/server/network stack, but also with:

  • Proper alignment of VMs to maintain performance
  • Application-level integration
  • Application protection
  • Application acceleration

In essence, treat things holistically. The vendor that provides your virtualization stack needs to be able to help you all the way to, say, configuring Exchange properly so it doesn't break best practices, and ensuring that firmware revs on the hardware don't clash with, say, software patch levels.

You still won’t avoid support complexity. Sure, maybe you can have Microsoft do a joint support exercise with the hardware stack VAR, but, no matter how you package it, you are going to be touching the support of the various vendors making up the entire stack.

And you know what?

It’s OK.

D

 


 

Updated blog code, plus a bit about NetApp recovery for cloud providers

Sometime last night/this morning a config file in my blog got corrupted. Maybe it got hacked (I was running an ancient WordPress version 2.1) but at any rate the site was down.

It’s hosted on a large, famous service provider, and they use NetApp gear.

I was able to recover my file through NetApp snapshots – the provider makes this trivial by giving all users a GUI for it that looks like a normal file manager. All self-service.

[Screenshot: the provider's self-service snapshot restore GUI]

No Vblocks, Avamar or Data Domain were harmed in the process that literally took all of one second to complete, most of which time was probably spent on Javascript doing its thing and the browser refreshing. BTW, I hadn’t touched that file since 2006.

This is a good example of storage for service providers doing more than just storing data.

With alternative solutions, a ticket would have to be opened, a helpdesk person would have to use a backup tool to find my file and restore it, then let me know. A whole lot more effort than what happened in this case.

In other news, I’m running the latest WordPress code, the site is now auto-optimized for mobile devices, and things are smooth again. Oh, and the old theme that most seemed to hate is gone. I’ll see if I can find a suitable picture for the header, for now this is OK.

The old version of WordPress I was using didn't have a clean way of exporting content, so if you look at older articles you'll notice weird characters here and there. I might fix them. Probably not.

D


Has NetApp sold more flash than any other enterprise disk vendor?

NetApp has been selling our custom cache boards with flash chips for a while now. We have sold over 3PB of usable cache this way.

The question was raised in public forums such as Twitter – someone mentioned that this figure may be more usable Solid State storage than all other enterprise disk vendors have sold combined (whether it’s used for caching or normal storage – I know we have greatly outsold anyone else that does it for caching alone :) ).

I don’t know if it is, maybe the boys from the other vendors can chime in on this and tell us, after RAID, how much usable SSD they’ve sold, but the facts remain:

  • NetApp has demonstrated thought leadership in pioneering the pervasive use of Megacaches
  • The market has widely adopted the NetApp Flash Cache technology (I’d say 3PB of usable cache is pretty wide adoption)
  • The performance benefits in the real world are great, due to the finer-grained nature of the cache (4KB blocks vs. 64KB+ for others) and extremely intelligent caching algorithms (see the sketch after this list)
  • The cost of entry is extremely reasonable
  • It’s a very easy way to add extra performance without forcing data into faster tiers.
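Here's a tiny, hypothetical sketch of why cache block granularity matters for random workloads. The cache size, block sizes and fully random access pattern are assumptions for illustration only, not measurements of any particular array:

```python
# For purely random 4KB reads, each miss pulls one whole cache block into flash.
# Coarser blocks mean most of every cached block is data nobody asked for,
# so a fixed amount of cache effectively covers less of the working set.
CACHE_CAPACITY_GB = 512            # assumed flash cache size
IO_SIZE_KB        = 4              # random 4KB reads

for cache_block_kb in (4, 64):
    blocks_held  = CACHE_CAPACITY_GB * 1024 * 1024 // cache_block_kb
    useful_ratio = IO_SIZE_KB / cache_block_kb     # fraction of each block actually requested
    useful_gb    = CACHE_CAPACITY_GB * useful_ratio
    print(f"{cache_block_kb:>2}KB blocks: {blocks_held:,} cached blocks, "
          f"~{useful_gb:.0f}GB of them holding the 4KB chunks hosts actually read")
```

In this worst-case fully random model, 64KB blocks leave only about 1/16th of the cache holding data that was actually requested; real workloads have some locality, but the granularity effect is the point.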

Comments welcome…

D


NetApp disk rebuild impact on performance (or lack thereof)

Due to the craziness around the previous blog post, I decided to post an actual graph showing a NetApp system's I/O latency while under load during a disk rebuild. It was from a bakeoff vs. another large storage vendor (which NetApp won).

The test was done at a large media company with over 70,000 Exchange seats. It was with no more than 84 drives, so we’re not talking about some gigantic lab queen system (I love Marc Farley’s term). The box was set up per best practices, with aggregate size being 28 disks in this case.

(Edited at the request of EMC’s CTO to include the performance tidbit): Over 4K IOPS were hitting each aggregate (much more than the customer needed) and the system had quite a lot of steam left in it.

There were several Exchange clusters hitting the box in parallel.

All of the testing for both vendors was conducted by Microsoft personnel for the customer.  The volume names have been removed from the graph to protect the identity of the customer:

 

Under an 8KB-I/O workload with a 53:47 read/write ratio, a single disk was pulled. Pretty realistic failure scenario: a disk breaks while the system is under production-level load. Plenty of writes, too – almost 50%.

Ok… The fuzzy line around 6ms is the read latency. At point 1 a disk was pulled and at point 2 the rebuild completed. Read latency increased to 8ms during the rebuild, but dropped back down to 5ms after the rebuild completed. The line at less than 1ms response time straight across the bottom is the write latency. Yes, it's that good.

So – there was a tiny bit of performance degradation for the reads but I wouldn’t say that it “killed” performance as a competitor alleged.

The rebuild time is a tad faster than 30 hours as well (look at the graph :) ) but then again the box used faster, 15K drives (and smaller, 300GB vs 500GB), so before anyone complains, it’s not apples-to-apples compared to the Demartek report.

I just wanted to illustrate a real example from a real test at a real customer using a real application, and show the real effects of drive failures in a properly-implemented RAID-DP system.

The FUD-busting will continue, stay tuned…

D

Vendor FUD-slinging: at what point should legal action be taken? And who do you believe as a customer?

I’m all for a good fight, but in the storage industry it seems that all too many creative liberties are taken when competing.

Let’s assume, for a moment, that we’re talking about the car industry instead. I like cars, and I love car analogies. So we’ll use that, and it illustrates the absurdity really well.

The competitors in this example will be BMW and Mercedes. Nobody would argue that they are two of the most prominent names in luxury cars today.

BMW has the high-performance M-series. Let’s take as an example the M6 – a 500HP performance coupe. Looks nice on paper, right?

Let’s say that Mercedes has this hypothetical new marketing campaign to discredit BMW, with the following claims (I need to, again, clarify that this campaign is entirely fictitious, and used only to illustrate my point, lest I get attacked by their lawyers):

  1. Claim the M6 doesn’t really have 500HP, but more like 200HP.
  2. Claim the M6 only does 0-60 in under 5 seconds with only 5% of the gas tank filled, a 50lb driver, downhill, with a tail wind and help from nitrous.
  3. Claim that if you fill the gas tank past 50%, performance will drop so the M6 does 0-60 in more like 30 seconds. Downhill.
  4. Claim that it breaks like clockwork past 5K miles.
  5. Claim that they have one, they tested it, and performs as they say.
  6. Claim that, since they are Mercedes, the top name in the automotive industry, you should trust them implicitly.

Imagine Mercedes, at all levels, going to market with this kind of information – official company announcements, messages from the CEO, company blogs, engineers, sales reps, dealer reps and mechanics.

Now, imagine BMW’s reaction.

How quickly do you think they’d start suing Mercedes?

How quickly would they have 10 independent authorities testing 10 different M6 cars, full of gas, in uphill courses, with overweight drivers, just to illustrate how absurd Mercedes’ claims are?

How quickly would Mercedes issue a retraction?

And, to the petrolheads among us: wouldn’t such a stunt look like Mercedes is really, really afraid of the M6? And don’t we all know better?

More to the point – do you ever see Mercedes pulling such a stunt?

Ah, but you can get away with stuff like that in the storage industry!

Unfortunately, the storage industry is rife with vendors claiming all kinds of stuff about each other. Some of it is or was true, much of it is blown all out of proportion, and some is blatant fabrication.

Take, for instance, the claim that XIV breaks if you pull 2 disks out. As I state in a previous post, it's possible if the right 2 drives fail within a few minutes of each other. I think that's unacceptable, even though it's highly unlikely to happen in real life. But I've seen sales campaigns against the XIV use this as the mantra, to the point that the fallacy is finally stated: "ANY 2 drive failure will bring down the system".

Obviously this is not true and IBM can demonstrate how untrue that is. Still, it may slow down the IBM campaign.

Other fallacies are far more complicated to prove wrong, unfortunately.

An example: Pillar Data has an asinine yet highly detailed report by Demartek showing NetApp and EMC arrays having significantly lower rebuild speeds than Pillar (as if that’s the most important piece of data management, but anyway, rebuild speed hasn’t helped Pillar sales much, even if it’s true).

Anyone that knows how to configure NetApp and EMC gear would see that the Pillar box was correctly configured, whereas the others were intentionally made to look 4x worse (in the case of NetApp, they literally went against not just best practices but blatantly against system defaults in order to make it slower). However, some CIOs might read this and give credence to it, since they don't know the details and don't read past the first graph.

For EMC and NetApp to dispute this, they have to go to the trouble of properly configuring a similar system, running similar tests, and then writing a detailed and coherent response. It's like wounding the enemy soldier instead of killing them: their squadmates have to help them out, wasting manpower. I get it – it's effective in war. But is it legal in the business world?

Last but not least: EMC and HP, at the very least, have anti-NetApp reports, blogs, PPTs etc. that literally look just like the absurd Mercedes/BMW example above, sometimes worse. Some of it was true a long time ago (the famous FUD “2x + snap delta” space requirement for LUNs is really “1x + snap delta” and has been for years), some of it is pure fabrication (“it slows down to 5% of its original speed if you fill it up!”). See here for a good explanation.

Of course, again that’s like wounding the enemy soldiers: NetApp engineers have to go and defend their honor, show all kinds of reports, customer examples, etc etc. Even so, at some point many CIOs will just say “I trust EMC/HP, I’ve been buying their stuff forever, I’ll just keep buying it, it works”. The FUD is enough to make many people that were just about to consider something else, go running back to mama HP.

Should NetApp sue? I've seen some of the FUD examples and they are literally not just a bit wrong but magnificently, spectacularly, outrageously wrong. Is that slander? Tortious interference? Simply a mistake? I'm sure some lawyer, somewhere, knows the answer. Maybe that lawyer needs to talk to some engineers and marketing people.

Let’s flip the tables:

If NetApp went ahead and simply claimed an EMC CX4-960 can only really hold 450TB, what would EMC do?

I can only imagine the insanity that would ensue.

I’ll finish with something simple from the customer standpoint:

NetApp sold 1 exabyte of enterprise storage last year. If it was as bad as the other (obviously worried) vendors are saying, does that mean all those customers buying it by the truckload and getting all those efficiencies and performance are stupid and wasted their money?

D

Pillar claiming their RAID5 is more reliable than RAID6? Wizardry or fiction?

We're competing against Pillar at an account. One of the things they said: that their RAID5 is superior in reliability to RAID6. I wanted to put this in the public domain and, if true, invite Pillar engineers to comment here and explain how it works for all to see. If untrue, again I invite the Pillar engineers to comment and explain why it's untrue.

The way I see it: very simply, RAID5 is N+1 protection, RAID6 is N+2. Mathematically, RAID5 is about 4,000 times more likely to lose data than a RAID6 group with the same number of data disks. Even RAID10 is about 160 times more likely to lose data than RAID6.
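Here's a rough sketch of the kind of math behind that claim, using a deliberately simplified model: it only considers additional whole-drive failures during the rebuild window, and the group size, MTBF and rebuild time are made-up assumptions (real models also account for unrecoverable read errors):

```python
# Simplified data-loss model, conditional on one drive having already failed:
# RAID5 loses data if any surviving drive fails before the rebuild finishes;
# RAID6 needs two more failures in that same window.
DATA_DRIVES   = 14         # data drives per group (same for both layouts)
MTBF_HOURS    = 500_000    # assumed per-drive MTBF
REBUILD_HOURS = 12         # assumed rebuild time after a failure

p = REBUILD_HOURS / MTBF_HOURS   # chance a given surviving drive dies during rebuild

# RAID5 (N+1): N drives survive the first failure; any one of them failing = loss.
p_raid5_loss = DATA_DRIVES * p

# RAID6 (N+2): N+1 drives survive; roughly, any two of them must fail in the window.
p_raid6_loss = (DATA_DRIVES + 1) * DATA_DRIVES / 2 * p ** 2

print(f"P(loss | one failure), RAID5: {p_raid5_loss:.2e}")
print(f"P(loss | one failure), RAID6: {p_raid6_loss:.2e}")
print(f"RAID5 is ~{p_raid5_loss / p_raid6_loss:,.0f}x more likely to lose data here")
```

The exact ratio moves around with the assumed rebuild time and drive reliability, but it stays in the thousands – which is the point of the "about 4,000 times" figure above.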

The only downside to RAID6 is performance. If you want the protection of RAID6 but with extremely high performance, then look at NetApp: the RAID-DP that NetApp employs by default has, in many cases, better performance than even RAID10. Oracle has several PB of DBs running on NetApp RAID-DP. Can't be all that bad.

See here for some info…

D

What if you could dramatically improve your application testing times? What would happen to your productivity and to the company’s bottom line?

So, let’s say the DBA (or insert some other discipline) wants to do some testing for a new product (known to happen occasionally) – and the way he would really like to test is to create 20 test cases, which requires 20 copies of the main database. He would then automate the test and therefore get results very quickly.

He approaches the storage admin with the problem, only to be told this isn’t possible since there isn’t enough space on the array. The DBA goes back to his cube frustrated, and figures out some ghetto way of creating at least 1 copy of the database, which creates the following problems:

  1. He has to figure out a way to do it (takes time)
  2. He can only test 1 case at a time (time)
  3. He cannot easily compare what-if scenarios between test cases (lack of flexibility)
  4. His ghetto way of doing it may involve single 1TB disks in a workstation (lack of reliability, time)

Ultimately, the testing takes longer, is error-prone, and the DBA’s productivity level goes way down.

What if the storage admin could, instead, tell the DBA that he can even take hundreds of copies of the DB, there’s no issue doing that?

What would happen to the DBA’s productivity?    

What new ideas would he be able to come up with?

How would that affect the quality of the product?

How would that affect the company’s bottom line? Being able to go to market with improved quality and quicker than the competition?

You see, intelligent storage – intelligently deployed – can solve many more problems than just “give me some space” or “give me more performance”.

There aren’t many technologies out there that can comfortably do this, which is probably why most storage people aren’t aware of this. But an array that can create space- and performance-efficient application-consistent DB clones is the ticket. Being able to create full copies and/or virtual space-efficient copies that end up being unusably slow doesn’t count… :)

The only vendor I know of that can pull this off (properly) is NetApp with their FlexClone technology. One can even use it to deploy thousands of identical VMs… there are some use cases for that, too :)
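As a rough illustration of why full copies don't scale for this kind of testing, here's a tiny sketch comparing full physical copies with writable clones that only consume space for changed blocks. The database size and change rate are made-up assumptions:

```python
# Space math for 20 test copies of a database: full physical copies vs.
# writable clones that only store the blocks each test actually modifies.
DB_SIZE_TB  = 5        # assumed size of the production database
COPIES      = 20       # test cases the DBA wants to run in parallel
CHANGE_RATE = 0.05     # assumed fraction of blocks each test run modifies

full_copies_tb = COPIES * DB_SIZE_TB
clones_tb      = COPIES * DB_SIZE_TB * CHANGE_RATE   # unchanged blocks are shared

print(f"20 full copies:      {full_copies_tb:.0f} TB of extra space")
print(f"20 writable clones: ~{clones_tb:.0f} TB of extra space (at {CHANGE_RATE:.0%} change)")
```

Creation time follows the same pattern – copying 5TB twenty times vs. near-instant clone creation – which is what makes the 20-case test practical in the first place.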

Activision (the company that makes the famous Guitar Hero game) is a good example of using this technology to rapidly accelerate development – they ended up making the Christmas deadline, which resulted in several million dollars more in sales. See here.

Oracle is another small company that uses this technology pervasively.

If anyone else knows of more vendors that can do this (properly) please chime in.

D

Should techies or business owners decide on technology (or both)?

It’s no secret that, in most companies, the technology folks are primarily the ones deciding on which new technologies to adopt – after all, they are the ones that understand the technology, right? Business owners explain the business problem to the technologists, and the techies take it from there – and ultimately present 2-3 different solutions that will work and the business picks the cheapest.

This could be great – if it weren’t for the fact that, like everyone, techies have their own agenda, which ends up tainting the decision process. Consider some of the following:

  • Comfort level with existing vendor (if it ain’t broke why fix it? This assumes all of the vendor’s products work equally well)
  • Job security (“why learn something new? Maybe they’ll hire someone that already knows this!”)
  • Delusions of grandeur (“I have the power!”)
  • Fear ("it sounds amazing, but what if the stuff doesn't work?")
  • Disbelief (“my current gear can’t do this, there’s no way this new stuff is that good!”)
  • Laziness (“you mean I have to test this new stuff? It cuts into my online gaming time!”)
  • Envy (“my buddy at this other company has this stuff, I must have something cooler/bigger!”)
  • Lack of time (“I really don’t have the time to test this new stuff!”)
  • Vendor kickbacks (we all know it happens in one form or another, and to the perennially under-paid techies, an expensive gift may be something they will never otherwise be able to afford, so it gains huge importance in their eyes)
  • The inability to grasp the real business drivers
  • The inability to think strategically
  • Being wowed by “cool” features that are of dubious business importance (see other post here)
  • Conversely, not understanding features that could be of immense business importance, that could save the company millions and increase productivity tenfold.

Of course, someone like a CIO or CTO normally acts as the bridge that spans the techie and business worlds, but of course that doesn’t always work (see here).

The only way around the issue is to create a new decision process for the company, one that involves all the interested parties from all departments. As complex as it may sound, this does work, and most of the time new ideas/issues get unearthed ("what do you mean my database is not backed up now?" or "what do you mean it would take 2 weeks to recover my lab environment?").

Try it, you may be surprised at what happens!

D