Category Archives: Rants

Is convenience devaluing products? Does quality suffer because of it?

It’s been quite a long hiatus since my last post (far too busy working on cool stuff), and if you’re looking for a deep technical post this may not be it… but here goes anyway, since the content may also apply to my more usual subjects.

Recently I decided to discard my Luddite membership card and join the hordes of people using network-based services for music.

The experiment is ongoing – I do like the convenience of being able to select almost any song or album for a monthly cost of less than what a single album is worth.

It’s a pretty good deal if you listen to a lot of new music, and/or you don’t like listening to ads on the radio.

There’s a plethora of free offerings but if you are mobile and want to use it on your phone, there’s usually a cost involved to have the convenience of selecting the exact songs you like.

How convenience has affected me

I did notice several interesting aspects in which this newfound convenience has changed my listening habits in a positive way:

  1. I am discovering a lot more new music since it’s so incredibly easy to do so. And some old music I never gave a chance to.
  2. Sharing music with other people is easy and involves no illegal copying of data.
  3. I don’t have to worry about putting the “right” music in my portable device – I can stream what I want, from wherever I want, even on devices I don’t own, and even designate items for “offline use” – meaning they’ll be cached and playable even if I’m not connected to a network.
  4. I have easy access to most of my oldie favorites that I might normally not keep in my device due to space reasons.
  5. The quality is very good. But we won’t go into psychoacoustics here :)

However, there have also been some pretty negative aspects to all this convenience… for instance:

  1. I realize I now suffer from music ADD – I seldom just sit down and listen to a whole album like we all used to do in the olden days.
  2. Albums now have zero monetary value in my mind – they’re just part of the low monthly fee.
  3. If albums have a perceived zero monetary value, they become a commodity and not something to be treasured. I remember when it was a huge deal to get a new album from my favorite artists: the anticipation, the excitement, the trip to the record store, waiting in line, scarcity, the artwork in the packaging, the sheer physicality of it all. This combination of attributes ensured I would at least give that album a chance – indeed, I was likely to listen to it repeatedly, analyze it and appreciate the artistry involved. I was invested.
  4. As a result of this devaluing, amazing works of art that were extremely difficult to accomplish may now be skipped altogether because they are a bit time-consuming or even difficult to “get into” – some concept albums you just need to be in the right frame of mind for, and/or have the requisite amount of time to let the story unfold. Since there’s no perceived investment and no excitement, I’m far less likely to spend the energy trying to get into an album, no matter how rewarding it may be in the end.
  5. For something more practical: The toll on the mobile devices’ batteries is 2-3x that of just playing music natively (even without streaming – the tracks are encrypted so you can’t just lift them from the storage, which adds CPU cycles to decrypt, plus some products use codecs more computationally intensive than mp3). Best have a device with a fast CPU.
  6. An extended unplanned network or music provider outage will mean no access to music.

How this applies to other aspects of our lives

I wonder now what other conveniences have affected our lives significantly?

And are we all looking for that quick fix, the easy way out?

Are we heading towards the world depicted in the movie Idiocracy? (very interesting flick – it’s worth watching for the premise alone).

Already, most of us in the more “civilized” parts of the globe don’t know how to hunt down and skin an animal, build a weapon, start a fire, build a shelter. That is knowledge that convenience robbed us of many years ago. You can study how to do those things, but chances are, if you’re in need to do so, you won’t have the training to be anywhere near as successful as our ancestors were in those endeavors.

Same goes for taking pictures – aside from a few people that still develop and print their own film, most of us use digital (with the same deluge of information problem described in the music section above – thousands of pictures may now be taken during a vacation, where previously no more than a hundred would, with tremendous love and care – but most of the hundred were keepers).

Many of us are getting heavier, too – convenient access to food and low levels of physical activity (since locomotion is so convenient) being the killer combination.

Does quality suffer because of convenience?

Conveniences aren’t a bad thing overall – I am not hankering for the destruction of all things convenient. However, I posit that certain aspects of quality absolutely suffer because of convenience:

  1. Consumers are more likely to pick an easier-to-use, throw-away, even short-sighted product over a better-engineered, longer-lasting one – shifting the engineering emphasis to ease of use and disposability.
  2. The quality of workers in many fields isn’t what it used to be.
  3. We are heading towards more generalists and fewer specialists.
  4. Troubleshooting is becoming a lost art.

I’m not sure how to even conclude – I’m probably part of the problem since one of the things I do is help make very advanced technology easier to consume and more forgiving.

Just don’t get too comfortable.

Toilet chair

Thx

D

How to decipher EMC’s new VNX pre-announcement and look behind the marketing.

It was with interest that I watched some of EMC’s announcements during EMC World. Partly due to competitor awareness, and partly due to being an irrepressible nerd, hoping for something really cool.

BTW: Thanks to Mark Kulacz for assisting with the proof points. Mark, as much as it pains me to admit so, is quite possibly an even bigger nerd than I am.

So… EMC did deliver something. A demo of the possible successor to VNX (VNX2?), unavailable as of this writing (indeed, a lot of fuss was made about it being lab only etc).

One of the things they showed was increased performance vs their current top-of-the-line VNX7500.

The aim of this article is to show that the increases are not proportionally as large as EMC claims, and/or not so much because of software – and, moreover, that some planned obsolescence might be coming the way of the VNX for no good reason. Aside from making EMC more money, that is.

A lot of hoopla was made about software being the key driver behind all the performance increases, and how they are now able to use all CPU cores, whereas in the past they couldn’t. Software this, software that. It was the theme of the party.

OK – I’ll buy that. Multi-core enhancements are a common thing in IT-land. Parallelization is key.

So, they showed this interesting chart (hopefully they won’t mind me posting this – it was snagged from their public video):

MCX core util arrow

I added the arrows for clarification.

Notice that the chart above (left) shows the current VNX using, according to EMC, maybe a total of 2.5 out of the 6 cores if you stack everything up (for instance, Core 0 is maxed out, Core 1 is 50% busy, Cores 2-4 do little, Core 5 does almost nothing). This is important and we’ll come back to it. But, if true, this shows extremely poor multi-core utilization today. It seems processes are dedicated to specific cores – Core 0 does RAID only, for example. Maybe a way to lower context switches?

Then they mentioned how the new box has 16 cores per controller (the current VNX7500 has 6 cores per controller).

OK, great so far.

Then they mentioned how, By The Holy Power Of Software, they can now utilize all cores on the upcoming 16-core box equally (chart above, right).

Then, comes the interesting part. They did an IOmeter test for the new box only.

They mentioned how the current VNX7500 would max out at 170,000 8K random reads from SSD (in itself a nice nugget when dealing with EMC reps claiming insane VNX7500 IOPS), and that the current model’s relative lack of performance is due to the fact that its software can’t take advantage of all the cores.

Then they showed the experimental box doing over 5x that I/O. Which is impressive indeed – even though that’s hardly a realistic way to prove performance, I accept that they were trying to show how much more read-only speed they could get out of the extra cores, plus it makes for a cooler marketing number.

Writes are a whole separate wrinkle for arrays, of course. Then there are all the other ways VNX performance goes down dramatically.

However, all this leaves us with a few big questions:

  1. If this is really all about just optimized software for the VNX, will it also be available for the VNX7500?
  2. Why not show the new software on the VNX7500 as well? After all, it would probably increase performance by over 2x, since it would now be able to use all the cores equally. Of course, that would not make for good marketing. But if with just a software upgrade a VNX7500 could go 2x faster, wouldn’t that decisively prove EMC’s “software is king” story? Why pass up the opportunity to show this?
  3. So, if, with the new software the VNX7500 could do, say, 400,000 read IOPS in that same test, the difference between new and old isn’t as dramatic as EMC claims… right? :)
  4. But, if core utilization on the VNX7500 is not as bad as EMC claims in the chart (why even bother with the extra 2 cores on a VNX7500 vs a VNX5700 if that were the case), then the new speed improvements are mostly due to just a lot of extra hardware. Which, again, goes against the “software” theme!
  5. Why do EMC customers also need XtremeIO if the new VNX is that fast? What about VMAX? :)

Point #4 above is important. For instance, EMC has been touting multi-core enhancements for years now. The current VNX FLARE release has 50% better core efficiency than the one before, supposedly. And, before that, in 2008, multi-core was advertised as getting 2x the performance vs the software before that. However, the chart above shows extremely poor core efficiency. So which is it? 

Or is it maybe that the box demonstrated is getting most of its speed increase not so much by the magic of better software, but mostly by vastly faster hardware – the fastest Intel CPUs (more clockspeed, not just more cores, plus more efficient instruction processing), latest chipset, faster memory, faster SSDs, faster buses, etc etc. A potential 3-5x faster box by hardware alone.
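
For the math-inclined, here’s a quick back-of-envelope sketch (Python) of the argument above. The only inputs are the numbers EMC themselves showed: 170,000 IOPS on the current box, roughly 2.5 of 6 cores actually busy per their own chart (that’s my reading of it, so treat it as an assumption), and the “over 5x” demo claim:

# Back-of-envelope sketch - my numbers, not EMC's official math.
vnx7500_iops = 170_000        # stated 8K random read max from SSD on the current VNX7500
effective_cores_used = 2.5    # rough total core utilization implied by EMC's own chart (assumption)
total_cores = 6

# If new software merely let the VNX7500 use all 6 cores equally:
software_only = vnx7500_iops * (total_cores / effective_cores_used)
print(f"VNX7500 with full core utilization: ~{software_only:,.0f} IOPS")   # ~408,000

# The demo box was shown doing "over 5x" the VNX7500 figure:
demo_box = vnx7500_iops * 5
print(f"Leftover factor not explained by software: ~{demo_box / software_only:.1f}x")  # ~2x -> hardware

In other words, even being generous, software-only gains would get the existing box to roughly the 400,000 mark mentioned above – the rest of the demoed increase has to come from somewhere else.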

It doesn’t quite add up as being a software “win” here.

However – I (or at least current VNX customers) probably care more about #1. Since it’s all about the software, after all :)

If the new software helps so much, will they make it available for the existing VNX? Seems like any of the current boxes would benefit since many of their cores are doing nothing according to EMC. A free performance upgrade!

However… If they don’t make it available, then the only rational explanation is that they want to force people into the new hardware – yet another forklift upgrade (CX->VNX->”new box”).

Or maybe that there’s some very specific hardware that makes the new performance levels possible. Which, as mentioned before, kinda destroys the “software magic” story.

If it’s all about “Software Defined Storage”, why is the software so locked to the hardware?

All I know is that I have an ancient NetApp FAS3070 in the lab. The box was released ages ago (2006 vintage), and yet it’s running the most current GA ONTAP code. That’s going back 3-4 generations of boxes, and it launched with software that was very, very different to what’s available today. Sometimes I think we spoil our customers.

Can a CX3-80 (the beefiest of the CX3 line, similar vintage to the NetApp FAS3070) take the latest code shown at EMC World? Can it even take the code currently GA for VNX? Can it even take the code available for CX4? Can a CX4-960 (again, the beefiest CX4 model) take the latest code for the shipping VNX? I could keep going. But all this paints a rather depressing picture of being able to stretch EMC hardware investments.

But dealing with hardware obsolescence is a very cool story for another day.

D

 


So now it is OK to sell systems using “Raw IOPS”???

As the self-proclaimed storage vigilante, I will keep bringing these idiocies up as I come across them.

So, the latest “thing” now is selling systems using “Raw IOPS” numbers.

Simply put, some vendors are quoting the aggregate IOPS the system will do based on per-disk statistics and nothing else.

They are not providing realistic performance estimates for the proposed workload, with the appropriate RAID type and I/O sizes and hot vs cold data and what the storage controller overhead will be to do everything. That’s probably too much work. 

For example, if one assumes 200 IOPS per disk, and 200 such disks are in the system, this vendor is showing 40,000 “Raw IOPS”.

This is about as useful as shoes on a snake. Probably less.

The reality is that this is the ultimate “it depends” scenario, since the achievable IOPS depend on far more than how many random 4K IOPS a single disk can sustain (just doing RAID6 could result in having to divide the raw IOPS by 6 where random writes are concerned – and that’s just one thing that affects performance, there are tons more!)
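
To make that concrete, here’s a simplified sketch of the difference between “Raw IOPS” and what a host might actually see once just the RAID write penalty is factored in. The read/write mix below is an assumption purely for illustration – real sizing also needs controller overhead, I/O sizes, caching, hot vs cold data and more:

# Simplified model: raw vs host-visible IOPS once the RAID write penalty is applied.
disks = 200
iops_per_disk = 200                       # same per-disk figure as the example above
raw_iops = disks * iops_per_disk          # the 40,000 "Raw IOPS" being sold

read_pct = 0.5                            # assumed read/write mix, purely for illustration
write_penalty = 6                         # back-end I/Os per random write with RAID6

# Each host read costs roughly 1 back-end I/O; each random write costs the penalty.
backend_per_host_io = read_pct * 1 + (1 - read_pct) * write_penalty
host_iops = raw_iops / backend_per_host_io

print(f"Raw IOPS: {raw_iops:,}")                            # 40,000
print(f"Rough host-visible random IOPS: {host_iops:,.0f}")  # ~11,429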

Please refer to prior articles on the subject such as the IOPS/latency primer here and undersizing here. And some RAID goodness here.

If you’re a customer reading this, you have the ultimate power to keep vendors honest. Use it!

D


An explanation of IOPS and latency

<I understand this extremely long post is redundant for seasoned storage performance pros – however, these subjects come up so frequently, that I felt compelled to write something. Plus, even the seasoned pros don’t seem to get it sometimes… :) >

IOPS: Possibly the most common measure of storage system performance.

IOPS means Input/Output (operations) Per Second. Seems straightforward. A measure of work vs time (not the same as MB/s, which is actually easier to understand – simply, MegaBytes per Second).

How many of you have seen storage vendors extolling the virtues of their storage by using large IOPS numbers to illustrate a performance advantage?

How many of you decide on storage purchases and base your decisions on those numbers?

However: how many times has a vendor actually specified what they mean when they utter “IOPS”? :)

For the impatient, I’ll say this: IOPS numbers by themselves are meaningless and should be treated as such. Without additional metrics such as latency, read vs write % and I/O size (to name a few), an IOPS number is useless.

And now, let’s elaborate… (and, as a refresher regarding the perils of ignoring such things when it comes to sizing, you can always go back here).

 

One hundred billion IOPS…

drevil

I’ve competed with various vendors that promise customers high IOPS numbers. On a small system with under 100 standard 15K RPM spinning disks, a certain three-letter vendor was claiming half a million IOPS. Another, a million. Of course, my customer was impressed, since that was far, far higher than the number I was providing. But what’s reality?

Here, I’ll do one right now: The old NetApp FAS2020 (the older smallest box NetApp had to offer) can do a million IOPS. Maybe even two million.

Go ahead, prove otherwise.

It’s impossible, since there is no standard way to measure IOPS, and the official definition of IOPS (operations per second) does not specify certain extremely important parameters. By doing any sort of I/O test on the box, you are automatically imposing your benchmark’s definition of IOPS for that specific test.

 

What’s an operation? What kind of operations are there?

It can get complicated.

An I/O operation is simply some kind of work the disk subsystem has to do at the request of a host and/or some internal process. Typically a read or a write, with sub-categories (for instance read, re-read, write, re-write, random, sequential) and a size.

Depending on the operation, its size could range anywhere from bytes to kilobytes to several megabytes.

Now consider the following most assuredly non-comprehensive list of operation types:

  1. A random 4KB read
  2. A random 4KB read followed by more 4KB reads of blocks in logical adjacency to the first
  3. A 512-byte metadata lookup and subsequent update
  4. A 256KB read followed by more 256KB reads of blocks in logical sequence to the first
  5. A 64MB read
  6. A series of random 8KB writes followed by 256KB sequential reads of the same data that was just written
  7. Random 8KB overwrites
  8. Random 32KB reads and writes
  9. Combinations of the above in a single thread
  10. Combinations of the above in multiple threads
…this could go on.

As you can see, there’s a large variety of I/O types, and true multi-host I/O is almost never of a single type. Virtualization further mixes up the I/O patterns, too.

Now here comes the biggest point (if you can remember one thing from this post, this should be it):

No storage system can do the same maximum number of IOPS irrespective of I/O type, latency and size.

Let’s re-iterate:

It is impossible for a storage system to sustain the same peak IOPS number when presented with different I/O types and latency requirements.

 

Another way to see the limitation…

Here’s a gross oversimplification that might help prove the point that the type and size of the operations you do matter when it comes to IOPS – meaning that a system that can do a million 512-byte IOPS can’t necessarily do a million 256K IOPS.

Imagine a bucket, or a shotshell, or whatever container you wish.

Imagine in this container you have either:

  1. A few large balls or…
  2. Many tiny balls
The bucket ultimately contains about the same volume of stuff either way, and that volume is the major limiting factor. Clearly, you can’t fit as many large balls in that container as you can small balls.
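
In storage terms, the “bucket” is largely bandwidth. A tiny sketch of why a million 512-byte IOPS is nowhere near the same claim as a million 256KB IOPS:

# Same IOPS number, wildly different throughput depending on I/O size.
iops = 1_000_000
for label, io_bytes in [("512-byte", 512), ("256KB", 256 * 1024)]:
    gb_per_s = iops * io_bytes / 1e9
    print(f"1M {label} IOPS = {gb_per_s:,.1f} GB/s")   # ~0.5 GB/s vs ~262 GB/s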
IOPS containers

They kinda look like shotshells, don’t they?

Now imagine the little spheres being forcibly evacuated rapidly out of one end… which takes us to…

 

Latency matters

So, we’ve established that not all IOPS are the same – but what is of far more significance is latency as it relates to the IOPS.

If you want to read no further – never accept an IOPS number that doesn’t come with latency figures, in addition to the I/O sizes and read/write percentages.

Simply speaking, latency is a measure of how long it takes for a single I/O request to happen from the application’s viewpoint.

In general, when it comes to data storage, high latency is just about the least desirable trait, right up there with poor reliability.

Databases especially are very sensitive to latency – DBs make several kinds of requests that need to be acknowledged quickly (ideally in under 10ms, and writes especially in well under 5ms). In particular, redo log writes need to be acknowledged almost instantaneously for a heavy-write DB – under 1ms is preferable.

High sustained latency in a mission-critical app can have a nasty compounding effect – if a DB can’t write to its redo log fast enough for a single write, everything stalls until that write can complete, then moves on. However, if it constantly can’t write to its redo log fast enough, the user experience will be unacceptable as requests get piled up – the DB may be a back-end to a very busy web front-end for doing Internet sales, for example. A delay in the DB will make the web front-end also delay, and the company could well lose thousands of customers and millions of dollars while the delay is happening. Some companies could also face penalties if they cannot meet certain SLAs.

On the other hand, applications doing sequential, throughput-driven I/O (like backup or archival) are nowhere near as sensitive to latency (and typically don’t need high IOPS anyway, but rather need high MB/s).

Here’s an example from an Oracle DB – a system doing about 15,000 IOPS at 25ms latency. Doing more IOPS would be nice but the DB needs the latency to go a lot lower in order to see significantly improved performance – notice the increased IO waits and latency, and that the top event causing the system to wait is I/O:

AWR example

Now compare to this system (the data is in a different format, but you’ll get the point):

Notice that, in this case, the system is waiting primarily for CPU, not storage.

A significant amount of I/O wait is a good way to determine if storage is an issue (there can be other latencies outside the storage of course – CPU and network are a couple of usual suspects). Even with good latencies, if you see a lot of I/O waits it means that the application would like faster speeds from the storage system.

But this post is not meant to be a DB sizing class. Here’s the important bit that I think is confusing a lot of people and is allowing vendors to get away with unrealistic performance numbers:

It is possible (but not desirable) to have high IOPS and high latency simultaneously.

How? Here’s a, once again, oversimplified example:

Imagine 2 different cars, both with a top speed of 150mph.

  • Car #1 takes 50 seconds to reach 150mph
  • Car #2 takes 200 seconds to reach 150mph

The maximum speed of the two cars is identical.

Does anyone have any doubt as to which car is actually faster? Car #1 indeed feels about 4 times faster than Car #2, even though they both hit the exact same top speed in the end.

Let’s take it an important step further, keeping the car analogy since it’s very relatable to most people (but mostly because I like cars):

  • Car #1 has a maximum speed of 120mph and takes 30 seconds to hit 120mph
  • Car #2 has a maximum speed of 180mph, takes 50 seconds to hit 120mph, and takes 200 seconds to hit 180mph

In this example, Car #2 actually has a much higher top speed than Car #1. Many people, looking at just the top speed, might conclude it’s the faster car.

However, Car #1 reaches its top speed (120mph) far faster than Car #2 reaches that same 120mph.

Car #2 continues to accelerate (and, eventually, overtakes Car #1), but takes an inordinately long amount of time to hit its top speed of 180mph.

Again – which car do you think would feel faster to its driver?

You know – the feeling of pushing the gas pedal and the car immediately responding with extra speed that can be felt? Without a large delay in that happening?

Which car would get more real-world chances of reaching high speeds in a timely fashion? For instance, overtaking someone quickly and safely?

Which is why car-specific workload benchmarks like the quarter mile were devised: How many seconds does it take to traverse a quarter mile (the workload), and what is the speed once the quarter mile has been reached?

(I fully expect fellow geeks to break out the slide rules and try to prove the numbers wrong, probably factoring in gearing, wind and rolling resistance – it’s just an example to illustrate the difference between throughput and latency, I had no specific cars in mind… really).
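
Translating the analogy back to storage: the reason a vendor can show a huge IOPS number alongside ugly latency is basically Little’s Law – throughput equals the number of I/Os in flight divided by latency. A minimal sketch with assumed numbers:

# Little's Law for storage: IOPS ~= I/Os in flight / latency.
def iops(outstanding_ios, latency_seconds):
    return outstanding_ios / latency_seconds

# 1000 outstanding I/Os at a painful 50ms each still "does" 20,000 IOPS...
print(f"{iops(1000, 0.050):,.0f} IOPS at 50ms")
# ...while 32 outstanding I/Os at 1ms does 32,000 IOPS and feels far faster.
print(f"{iops(32, 0.001):,.0f} IOPS at 1ms")

Pile on enough parallel threads and queue depth and the headline IOPS number looks great, even when each individual I/O is painfully slow.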

 

And, finally, some more storage-related examples…

Some vendor claims… and the fine print explaining the more plausible scenario beneath each claim:

“Mr. Customer, our box can do a million IOPS!”

512-byte ones, sequentially out of cache.

“Mr. Customer, our box can do a quarter million random 4K IOPS – and not from cache!”

at 50ms latency.

“Mr. Customer, our box can do a quarter million 8K IOPS, not from cache, at 20ms latency!”

but only if you have 1000 threads going in parallel.

“Mr. Customer, our box can do a hundred thousand 4K IOPS, at under 20ms latency!”

but only if you have a single host hitting the storage so the array doesn’t get confused by different I/O from other hosts.

Notice how none of these claims are talking about writes or working set sizes… or the configuration required to support the claim.

 

What to look for when someone is making a grandiose IOPS claim

Audited validation and a specific workload to be measured against (that includes latency as a metric) both help. I’ll pick on HDS since they habitually show crazy numbers in marketing literature.

For example, from their website:

HDS USP IOPS

 

It’s pretty much the textbook case of unqualified IOPS claims. No information as to the I/O size, reads vs writes, sequential or random, what type of medium the IOPS are coming from, or, of course, the latency…

However, that very same box barely breaks 200,000 SPC-1 IOPS with good latency in the audited SPC-1 benchmark:

HDS USP SPC IOPS

 

Last I checked, 200,000 was 20 times less than 4,000,000. Don’t get me wrong, 200,000 low-latency IOPS is a great SPC-1 result, but it’s not 4 million SPC-1 IOPS.

Check my previous article on SPC-1 and how to read the results here. And if a vendor is not posting results for a platform – ask why.

 

Where are the IOPS coming from?

So, when you hear those big numbers, where are they really coming from? Are they just fictitious? Not necessarily. So far, here are just a few of the ways I’ve seen vendors claim IOPS prowess:

  1. What the controller will theoretically do given unlimited back-end resources.
  2. What the controller will do purely from cache.
  3. What a controller that can compress data will do with all zero data.
  4. What the controller will do assuming the data is at the FC port buffers (“huh?” is the right reaction, only one three-letter vendor ever did this so at least it’s not a widespread practice).
  5. What the controller will do given the configuration actually being proposed driving a very specific application workload with a specified latency threshold and real data.
The figures provided by the approaches above are all real, in the context of how the test was done by each vendor and how they define “IOPS”. However, of the (non-exhaustive) options above, which one do you think is the most realistic when it comes to dealing with real application data?

 

What if someone proves to you a big IOPS number at a PoC or demo?

Proof-of-Concept engagements or demos are great ways to prove performance claims.

But, as with everything, garbage in – garbage out.

If someone shows you IOmeter doing crazy IOPS, use the information in this post to help you at least find out what the exact configuration of the benchmark is. What’s the block size, is it random, sequential, a mix, how many hosts are doing I/O, etc. Is the config being short-stroked? Is it coming all out of cache?

Typically, things like IOmeter can be a good demo but that doesn’t mean the combined I/O of all your applications’ performance follows the same parameters, nor does it mean the few servers hitting the storage at the demo are representative of your server farm with 100x the number of servers. Testing with as close to your application workload as possible is preferred. Don’t assume you can extrapolate – systems don’t always scale linearly.

 

Factors affecting storage system performance

In real life, you typically won’t have a single host pumping I/O into a storage array. More likely, you will have many hosts doing I/O in parallel. Here are just some of the factors that can affect storage system performance in a major way:

 

  1. Controller, CPU, memory, interlink counts, speeds and types.
  2. A lot of random writes. This is the big one, since, depending on RAID level, the back-end I/O overhead could be anywhere from 2 I/Os (RAID 10) to 6 I/Os (RAID6) per write, unless some advanced form of write management is employed.
  3. Uniform latency requirements – certain systems will exhibit latency spikes from time to time, even if they’re SSD-based (sometimes especially if they’re SSD-based).
  4. A lot of writes to the same logical disk area. This, even with autotiering systems or giant caches, still results in tremendous load on a rather limited set of disks (whether they be spinning or SSD).
  5. The storage type used and the amount – different types of media have very different performance characteristics, even within the same family (the performance between SSDs can vary wildly, for example).
  6. CDP tools for local protection – sometimes this can result in 3x the I/O to the back-end for the writes.
  7. Copy on First Write snapshot algorithms with heavy write workloads.
  8. Misalignment.
  9. Heavy use of space efficiency techniques such as compression and deduplication.
  10. Heavy reliance on autotiering (resulting in the use of too few disks and/or too many slow disks in an attempt to save costs).
  11. Insufficient cache with respect to the working set coupled with inefficient cache algorithms, too-large cache block size and poor utilization.
  12. Shallow port queue depths.
  13. Inability to properly deal with different kinds of I/O from more than a few hosts.
  14. Inability to recognize per-stream patterns (for example, multiple parallel table scans in a Database).
  15. Inability to intelligently prefetch data.

 

What you can do to get a solution that will work…

You should work with your storage vendor to figure out, at a minimum, the items in the following list, and, after you’ve done so, go through the sizing with them and see the sizing tools being used in front of you. (You can also refer to this guide).

  1. Applications being used and size of each (and, ideally, performance logs from each app)
  2. Number of servers
  3. Desired backup and replication methods
  4. Random read and write I/O size per app
  5. Sequential read and write I/O size per app
  6. The percentages of read vs write for each app and each I/O type
  7. The working set (amount of data “touched”) per app
  8. Whether features such as thin provisioning, pools, CDP, autotiering, compression, dedupe, snapshots and replication will be utilized, and what overhead they add to the performance
  9. The RAID type (R10 has an impact of 2 I/Os per random write, R5 4 I/Os, R6 6 I/Os – is that being factored?)
  10. The impact of all those things to the overall headroom and performance of the array.

If your vendor is unwilling or unable to do this type of work, or, especially, if they tell you it doesn’t matter and that their box will deliver umpteen billion IOPS – well, at least now you know better :)

D


 

Interpreting $/IOPS and IOPS/RAID correctly for various RAID types

<Article updated with more accurate calculation>

There are some impressive new scores at storageperformance.org, with the usual crazy configurations of thousands of drives etc.

Regarding price/performance:

When looking at $/IOP, make sure you are comparing list price (look at the full disclosure report, that has all the details for each config).

Otherwise, you could get the wrong $/IOP since some vendors have list prices, others show heavy discounting.

For example, a box that does $6.5/IOP after 50% discounting would be $13/IOP using list prices.
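
A quick sketch of that arithmetic, using made-up numbers (the hypothetical 200,000-IOPS result and the $1.3M discounted price are simply what $6.5/IOP implies – assumptions for illustration only):

# $/IOP swings with the price you plug in, not the performance.
spc1_iops = 200_000                  # hypothetical SPC-1 result
discounted_price = 6.5 * spc1_iops   # i.e. $1.3M after a 50% discount
list_price = discounted_price * 2    # the same box at list

print(f"Discounted: ${discounted_price / spc1_iops:.2f}/IOP")  # $6.50/IOP
print(f"List:       ${list_price / spc1_iops:.2f}/IOP")        # $13.00/IOP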

Regarding RAID:

As I have mentioned in other posts, RAID plays a big role in both protection and performance.

Most SPC-1 results are using RAID10, with the notable exception of NetApp (we use RAID-DP, mathematically analogous to RAID6 in protection).

Here’s a (very) rough way to convert a RAID10 result to RAID6, if the vendor you’re looking at doesn’t have a RAID6 result but you know the approximate percentage of random writes:

  1. SPC-1 is about 60% writes.
  2. Take any RAID10 result, let’s say 200,000 IOPS.
  3. 60% of that is 120,000, that’s the write ops. 40% is the reads, or 80,000 read ops.
  4. If using RAID6, you’d be looking at roughly a 3x slowdown for the writes: 120,000/3 = 40,000
  5. Add that to the 40% of the reads and you get the final result:
  6. 80,000 reads + 40,000 writes = 120,000 RAID6-corrected SPC-1 IOPS. Which is not quite as big as the RAID10 result… :)
  7. RAID5 would be the writes divided by 2.
All this happens because one random write can result in 6 back-end I/Os with RAID6, 4 with RAID5 and 2 with RAID10.
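
If you want to plug in your own numbers, here’s the same rough conversion as a small sketch (same assumptions as the list above – SPC-1 at roughly 60% writes, and back-end I/Os per random write of 2/4/6 for RAID10/RAID5/RAID6):

# Rough RAID10 -> RAID5/RAID6 correction for an SPC-1-style result.
def raid_corrected(raid10_iops, write_pct=0.6, backend_ios_per_write=6):
    reads = raid10_iops * (1 - write_pct)
    writes = raid10_iops * write_pct
    slowdown = backend_ios_per_write / 2   # relative to RAID10's 2 back-end I/Os per write
    return reads + writes / slowdown

print(f"RAID6 estimate: {raid_corrected(200_000, backend_ios_per_write=6):,.0f}")  # 120,000
print(f"RAID5 estimate: {raid_corrected(200_000, backend_ios_per_write=4):,.0f}")  # 140,000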

Just make sure you’re comparing apples to apples, that’s all. I know we all suffer from ADD in this age of information overload, but do spend some time going through the full disclosure, since there’s always interesting stuff in there…

D

 

 

Slide1

Buyer beware: is your storage vendor sizing properly for performance, or are they under-sizing technologies like Megacaching and Autotiering?

With the advent of performance-altering technologies (notice the word choice), storage sizing is just not what it used to be.

I’m writing this post because, more and more, I see some vendors not using scientific methods to size their solutions, instead aiming to hit a price point and hoping the technology will work out to achieve the requisite performance (and if it doesn’t, it’s sold anyway – either they can give away some free gear to make the problem go away, or the customer can always buy more, right?)

Back in the “good old days”, with legacy arrays one could (and still can) get fairly deterministic performance by knowing the workload required and, given a RAID type, know roughly how many disks would be needed to maintain the required performance in a sustained fashion, as long as the controller and buses were not overloaded.

With modern systems, there is now a plethora of options that can be used to get more performance out of the array, or, alternatively, get the same average performance as before, using less hardware (hopefully for less money).

If anything, advanced technologies have made array sizing more complex than before.

For instance, Megacaches can be used to dramatically change the I/O reaching the back-end disks of the array. NetApp FAS systems can have up to 16TB of deduplication-aware, ultra-granular (4K) and intelligent read cache. Truly a gigantic size, bigger than the vast majority of storage users will ever need (and bigger than many customers’ entire storage systems). One could argue that with such an enormous amount of cache, one could dispense with most disk drives and instead save money by using SATA (indeed, several customers are doing exactly that). Other vendors are following NetApp’s lead and starting to implement similar technologies — simply because it makes a lot of sense.

However…

It is crucial that, when relying on caching, extra care is taken to size the solution properly, if a reduction in the number and speed of the back-end disks is desired.

You see, caches only work well if they can cache the majority of what’s called the active working set.

Simply put, the working set is not all your data, but the subset of the data you’re “touching” constantly over a period of time. For a customer that has, say, a 20TB Database, the true working set may only be something as small as 5% — enabling most of the active data to fit in 1TB of cache. So, during daily use, a 1TB cache could satisfy most of the I/O requirements of the DB. The back-end disks could comfortably be just enough SATA to fit the DB.

But what about the times when I/O is not what’s normally expected? Say, during a re-indexing, or a big DB export, or maybe month-end batch processing. Such operations could vastly change the working set and temporarily raise it from 5% to something far larger — at which point, a 1TB cache and a handful of back-end SATA may not be enough.
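
Here’s the sizing arithmetic as a quick sketch. The 20TB and 5% figures come from the example above; the month-end spike to 20% is purely an assumed number to illustrate the point:

# Cache needed to hold the active working set, normal vs. exceptional periods.
db_size_tb = 20
normal_working_set = 0.05   # 5% touched day to day (example above)
batch_working_set = 0.20    # assumed spike during exports / re-indexing / month-end

print(f"Normal:    {db_size_tb * normal_working_set:.1f} TB of cache covers it")  # 1.0 TB
print(f"Month-end: {db_size_tb * batch_working_set:.1f} TB would be needed")      # 4.0 TB
# A cache sized only for the normal case leaves the back-end SATA exposed
# exactly when the system is under the most pressure.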

Which is why, when sizing, multiple measurements need to be taken, and not just average or even worst-case.

Let’s use a database as an example again (simply because the I/O can change so dramatically with DBs). You could easily have the following I/O types:

  1. Normal use – 20,000 IOPS, all random, 8K I/O size, 80% reads
  2. DB exports — high MB/s, mostly sequential writes, large I/O size, relatively few IOPS
  3. Sequential read after random write — maybe data is added to the DB randomly, then a big sequential read (or maybe many parallel ones) is launched.

You see, the I/O profile can change dramatically. If you only size for case #1, you may not have enough back-end disk to sustain the DB exports or the parallel sequential table scans. If you size for case #2, you may think you don’t need much cache since the I/O is mostly sequential (and most caches are bypassed for sequential I/O) – but that would be totally wrong during normal operation.

If your storage vendor has told you they sized for what generates the most I/O, then the question is, what kind of I/O was it?

The other new trendy technology (and the most likely to be under-sized) is Autotiering.

Autotiering, simply put, allows moving chunks of data around the array depending on their “heat index”. Chunks that are very active may end up on SSD, whereas chunks that are dormant could safely stay on SATA.

Different arrays do different kinds of Autotiering, mostly based on various underlying architectural characteristics and limitations. For example, on an EMC Symmetrix the chunk size is about 7.5MB. On an HDS VSP, the chunk is about 40MB. On an IBM DS8000, SVC or EMC Clariion/VNX, it’s 1GB.

With Autotiering, just like with caching, the smaller the chunk size, the more efficient the end result will ultimately be. For instance, a 7.5MB chunk could need as little as 3-5% of ultra-fast disk as a tier, whereas a 1GB chunk may need as much as 10-15%, due to the larger chunk containing not-very-active data mixed together with the active data.

Since most arrays write data with a geometric locality of reference (in contrast, NetApp uses geometric and temporal), with large-chunk autotiering you end up with pieces of data that are “hot” that always occupy the same chunk as neighboring “cool” pieces of data. This explains why the smaller the chunk, the better off you are.
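
To put rough numbers on it, here’s a sketch using the percentage ranges above and an assumed 100TB array – the point being how much SSD tier you end up buying purely because of chunk geometry:

# SSD tier implied by autotiering chunk size, for a hypothetical 100TB array.
array_tb = 100
ssd_fraction = {
    "small (7.5MB) chunks": 0.04,    # midpoint of the 3-5% range above
    "large (1GB) chunks":   0.125,   # midpoint of the 10-15% range above
}
for layout, fraction in ssd_fraction.items():
    print(f"{layout}: ~{array_tb * fraction:.1f} TB of SSD tier")   # ~4 TB vs ~12.5 TB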

So, with a large chunk, this can happen:

Slide1

The array will try to cache as much as it can, then migrate chunks if they are consistently busy or not. But the whole chunk has to move, not just the active bits within the chunk… which may be just fine, as long as you have enough of everything.

So what can you do to ensure correct sizing?

There are a few things you can do to make sure you get accurate sizing with modern technologies.

  1. Provide performance statistics to vendors — the more detailed the better. If we don’t know what’s going on, it’s hard to provide an engineered solution.
  2. Provide performance expectations — i.e. “I want Oracle queries to finish in 1/4th the time compared to what I have now” — and tie those expectations to business benefits (makes it easier to justify).
  3. Ask vendors to show you their sizing tools and explain the math behind the sizing — there is no magic!
  4. Ask vendors if they are sizing for all the workloads you have at the moment (not just different apps but different workloads within each app) — and how.
  5. Ask them to show you what your working set is and how much of it will fit in the cache.
  6. Ask them to show you how your data would be laid out in an Autotiered environment and what bits of it would end up on what tier. How is that being calculated? Is the geometry of the layout taken into consideration?
  7. Do you have enough capacity for each tier? On Autotiering architectures with large chunks, do you have 10-15% of total storage being SSD?
  8. Have the controller RAM and CPU overheads due to caching and autotiering been taken into account? Such technologies do need extra CPU and RAM to work. Ask to see the overhead (the smaller the Autotiering chunk size, the more metadata overhead, for example). Nothing is free.
  9. Beware of sizings done verbally or on cocktail napkins, calculators, or even spreadsheets – I’ve yet to see a spreadsheet model storage performance accurately.
  10. Beware of sizings of the type “a 15K disk can do 180 IOPS” — it’s a lot more complicated than that!
  11. Understand the difference between sequential, random, reads, writes and I/O size for each proposed architecture — the differences in how I/O is done depending on the platform are staggering and can result in vastly different disk requirements — making apples-to-apples comparisons challenging.
  12. Understand the extra I/O and capacity impact of certain CDP/Replication devices — it can be as much as 3x, and needs to be factored in.
  13. What RAID type is each vendor using? That can have a gigantic performance impact on write-intensive workloads (in addition to the reliability aspect).
  14. If you are getting unbelievably low pricing — ask for a contract ensuring upgrade pricing will be along the same lines. “The first hit is free” is true in more than one line of business.
  15. And, last but by no means least — ask how busy the proposed solution will be given the expected workload! It surprises me that people will try to sell a box that can do the workload but will be 90% busy doing so. Are you OK with that kind of headroom? Remember – disk arrays are just computers running specialized software and hardware, and as such their CPU can run out of steam just like anything else.

If this all seems hard — it’s because it is. But see it as due diligence — you owe it to your company, plus you probably don’t want to be saddled with an improperly-sized box for the next 3-5 years, just because the offer was too good to refuse…

D

 


Examining value for money regarding the SPEC benchmarks

Some of the comments in my previous post asked about $/IOPS and $/TB.

Since SPEC doesn’t require prices to be listed, I did my own analysis.

The NetApp numbers are simply 4x the existing 6240 result, which is what EMC did with their submission, they used 4x separate VNX systems and aggregated the result.

I used this clarifying analogy over at Nigel’s blog to explain why this makes sense before anyone yells “but this is not published”:

A storage system typically has some kind of bottleneck – cluster interconnect, number of drives, bandwidth to the controller, etc.

When you’re testing a single system, you’re ultimately hitting one of those bottlenecks.

If you’re testing multiple systems independent of each other, they do not share the bottlenecks (since they’re separate), and your performance will scale linearly as you add systems.

For example, if 1 truck can hold 10 tons of stuff, 4 like trucks will hold 40 tons of stuff, 10 trucks 100 tons, etc. There’s no limit.

Once you inject a limiting factor (“the trucks all have to fit on a bridge and the bridge can take this much load and it’s this big”) then you will have a limitation on how many trucks you can load and put on that bridge.

EMC tested 4 separate “trucks”. In that same way, I can add up the result of 4 separate NetApp “trucks”. Here are the results:

Metric | EMC | NetApp | Difference
Cost (approx. USD list) | 6,000,000 | 5,000,000 | NetApp is over 16% cheaper in absolute terms
SPEC SFS NFS IOPS | 497,623 | 762,700 | NetApp is 53% faster in absolute terms
Average latency (ORT, ms) | 0.96 | 1.17 | EMC offers a mere 18% lower latency (with fewer NFS ops) despite using only SSDs
Space (TB) | 60 | 343 | NetApp offers 5.7 times more usable space
$/SPEC NFS IOPS | 12.06 | 6.56 | NetApp is 45.6% less expensive per SPEC NFS operation
$/TB | 100,000 | 14,577 | NetApp is less than 1/6 the price of EMC per TB
RAID | RAID5 | RAID-DP | NetApp is thousands of times more reliable
Boxes needed to accomplish result | 15 (4x separate VNX, each with 2 controllers, plus a total of 5x Celerra VG8 heads and 2 Control Stations) | 8x unified controllers | NetApp is far less complex (the benefit of a truly unified architecture)
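
For transparency, here’s the arithmetic behind the $/IOPS and $/TB rows, straight from the prices and results listed above (approximate list prices, as noted):

# Reproducing the $/IOPS and $/TB rows from the raw figures.
systems = {
    "EMC (4x VNX + VG8 heads)":    {"price": 6_000_000, "spec_nfs_ops": 497_623, "usable_tb": 60},
    "NetApp (4x FAS6240 results)": {"price": 5_000_000, "spec_nfs_ops": 762_700, "usable_tb": 343},
}
for name, s in systems.items():
    print(f"{name}: ${s['price'] / s['spec_nfs_ops']:.2f} per SPEC NFS op, "
          f"${s['price'] / s['usable_tb']:,.0f} per TB")
# EMC:    $12.06/op and $100,000/TB
# NetApp: $6.56/op  and  $14,577/TB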

Who can spot the better deal? :)

I added the latency in the chart, thanks to my buddy Mark Twomey for pointing it out.

You see, people needing enterprise NAS with that kind of performance usually need speed, plenty of space and high reliability. Not just one of the three. BTW, here’s a paper on relative RAID reliability.

NetApp provides all three, in spades, plus great value for money, a truly simple, flexible unified system, and efficiency.

Most customers want to see how a real configuration performs. I refer customers to our SPEC and SPC results constantly since quite frequently their desired configuration is very similar.

Which makes benchmarking realistic configurations actually useful – imagine that.

Maybe EMC needs to submit results with VNX the way they sell it to people, for example:

  • A mix of SSD cache, SSD, high-speed SAS and high-capacity SAS
  • Autotiering
  • RAID6
  • A typical amount of space for a configuration that size

Then submit results.

Keep your existing result of course, but also show the people how what you actually sell them really performs.

I still don’t understand why this is such a hard concept.

D


EMC conclusively proves that VNX bottlenecks NAS performance

A bit of a controversial title, no?

Allow me to elaborate.

EMC posted a new SPEC SFS result as part of a marketing stunt (which is working, look at what I’m doing – I’m talking about them, if only to clear the air).

In simple terms, EMC got almost 500,000 SPEC SFS NFS IOPS (not to be confused with, say, block-based SPC-1 IOPS) with the following configuration:

  1. Four (4) totally separate VNX arrays, each loaded with SSD storage, utterly unaware of each other (8 total controllers since each box has 2)
  2. Five (5) Celerra VG8 NAS heads/gateways (1 spare), one on top of each VNX box
  3. 2 Control Stations
  4. 8 exported filesystems (2 per VG8 head/VNX system)
  5. Multiple pools of storage (at least 1 per VG8) – not shared among the various boxes, no data mobility between boxes
  6. Only 60TB NAS space with RAID5 (or 15TB per box)

Now, this post is not about whether this configuration is unrealistic and expensive (almost nobody would pay $6m for merely 60TB of NAS, not today). I get it that EMC is trying to publish the best possible number by loading a bunch of separate arrays with SSD. It’s OK as long as everyone understands the details.

My beef has to do with how it’s marketed.

EMC is very vague about the configuration, unless you look at the actual SPEC website. In the marketing materials they just mention VNX, as in “The EMC VNX performed at 497,623 SPECsfs2008_nfs.v3 operations per second”. Kinda like saying it’s OK to take 3 5-year olds and a 6-year old to a bar because their age adds up to 21.

No – the far more accurate statement is “four separate VNXs, working independently and utterly unaware of each other, did 124,405 SPECsfs2008_nfs.v3 operations per second each”.

All EMC did was add up the result of 4 boxes.

Heck, that’s easy to do!

NetApp already has a result for the 6240 (just 2 controllers doing a respectable 190,675 SPEC NFS ops taking care of NAS and RAID all at once since they’re actually unified, no cornucopia of boxes there) without using Solid State Drives (common SAS drives plus a large cache were used instead – a standard, realistic config we sell every day, and not a “lab queen”).

If all we’re doing is adding up the result of different boxes, simply multiply this by 4 (plus we do have Cluster-Mode for NAS so it would count as a single clustered system with failover etc. among the nodes) and end up with the following result:

  1. 762,700 SPEC SFS NFS operations
  2. 8 exported filesystems
  3. 343TB usable with RAID-DP (thousands of times more resilient than RAID5)

So, which one do you think is the better deal? More speed, 343TB and better protection, or less speed, 60TB and far less protection? :)

Customers curious about other systems can do the same multiplication trick for other configs, the sky is the limit!

The other, more serious part – and what prompted me to title the post the way I did – is that EMC’s benchmarking made pretty clear that the VNX back-end is the bottleneck: it can only really support a single VG8 head at top speed, which is why four separate VNX systems were needed to accomplish the final result. So the fact that a VNX can have up to 8 Celerra heads on top of it means nothing, since the back-end is your limiting factor. You might as well stick to a dual-head VG8 config (1 active, 1 passive), since that’s all it can comfortably drive (otherwise why benchmark it that way?)

But with only 1 active NAS head you’d be limited to just 256TB max NAS capacity, since that’s how much total space a Celerra head can address as of the time of this writing. Which is probably enough for most people.

I wonder if the NAS heads that can be bought as a package with VNX are slower than VG8 heads, and by how much. You see, most people buying the VNX will be getting the NAS heads that can be packaged with it since it’s cheaper that way. How fast does that go? I’m sure customers would like to know, since that’s what they will typically buy.

I also wonder how fast it would be with RAID6.

Here’s a novel idea: benchmark what customers will actually buy!

So apples-to-apples comparisons can become easier instead of something like this:

Bothapples

For the curious: on the left you see an “Autumn Glory” Malus Floribunda (miniature apple). Photo courtesy of John Fullbright.

D


Questions to ask EMC regarding their new VNX systems…

It’s that time of the year again. The usual websites are busy with news of the upcoming EMC midrange refresh called VNX. And records being broken.

(NEWSFLASH: Watching the webcast now, the record they kept saying they would break ended up being some guy jumping over a bunch of EMC arrays with a motorcycle – and here I was hoping to see some kind of performance record…)

I’m not usually one to rain on anyone’s parade, but I keep seeing the “unified” word a lot, but based on what I’m seeing, it’s all more of the same, albeit with newer CPUs, a different faceplate, and (join the club) SAS. I’m sure the new systems will be faster courtesy of faster CPUs, more RAM and SAS. But are they offering something materially closer to a unified architecture?

Note that I’m not attacking anything in the EMC announcement, merely the continued “unified” claim. I’m sure the new Data Domain, Isilon and Vmax systems are great.

So here are some questions to ask EMC regarding VNX – I’ll keep this as a list instead of a more verbose entry to keep things easy for the ADD-afflicted and allow easier copy-paste into emails :)

  1. Let’s say I have a 100TB VNX system. Let’s say I allocate all 100TB to NAS. Then let’s say that all the 100TB is really chewed up in the beginning but after a year my real data requirements are more like 70TB. Can I take that 30TB I’m not using any more and instantly use it for FC? Since it’s “unified” and all? Without breaking best practices for LUN allocation to Celerra? Or is it forever tied to the NAS part and I have to buy all new storage if I don’t want to destroy what’s there and start from scratch?
  2. Is the VNX (or even the NS before it) 3rd-party verified as an over 5-nines system? (I believe the CX is but is the CX/NS combo?)
  3. How is the architecture of these boxes any different than before? It looks like you still have 2 CX SPs, then some NAS gateways. Seems like very much the same overall architecture and there’s (still) nothing unified about it. I call for some truth in advertising! Only the little VNXe seems materially different (not in the software but in the amount of blades it takes to run it all).
  4. Are the new systems licensed by capacity?
  5. Can the new systems use more than the 2TB of FAST Cache?
  6. On the subject of cache, what is the best practice regarding the minimum number of SSDs to use for cache? Is it 8? How many shelves/buses should they be distributed on?
  7. What is the best practice regarding cache oversubscription and how is this sized?
  8. Since the FAST Cache can also cache writes, what are the ramifications if the cache fails? How many customers have had this happen? After all, we are talking about SSDs, and even mirrored SSDs are much less reliable than mirrored RAM.
  9. What’s the granularity for using RecoverPoint to replicate the NAS piece? Seems like it needs to replicate everything NAS as one chunk as a large consistency group, with Celerra Replicator needed for more granular replication.
  10. What’s the granularity for recovering NAS with RecoverPoint? Seems like you can’t do things by file or by volume even. The entire data mover may need to be recovered in one go, regardless of the volumes within.
  11. When using RecoverPoint, does one need to not use storage pools for certain operations? And what does that mean regarding the complexity of implementation?
  12. Speaking of storage pools, when are they recommended, when not, and why? And what does that mean about the complexity of administration?
  13. What functionality does one lose if one does not use pools?
  14. Can one prioritize FAST Cache in pool LUNs or is cache simply on or off for the entire pool?
  15. Can I do a data-in-place upgrade from CX3 or CX4 or is this a forklift upgrade?
  16. Why is FASTv2 not recommended for Exchange 2010 and various other DBs?
  17. If Autotiering is not really applicable to many workloads, what is it really good for?
  18. What is the percentage of flash needed to properly do autotiering on VNX? (it’s only 3% on VMAX since it uses a 7MB page, but VNX uses a 1GB page, which is far more inefficient). Why is FAST still at the grossly inefficient 1GB chunk?
  19. Can FAST on the VNX exclude certain time periods that can confuse the algorithms, like when backups occur?
  20. Is file-level FAST still a separate system?
  21. Why does the low-end VNXe not offer FC?
  22. Can I upgrade from VNXe to VNX?
  23. Does the VNXe offer FAST?
  24. Can a 1GB chunk span RAID groups or is performance limited to 1 RAID group’s worth of drives?
  25. Why are functions like block, NAS and replication still in separate hardware and software?
  26. Why are there still 2 kinds of snapshotting systems?
  27. Are the block snaps finally without a huge write performance impact? How about the NAS snaps?
  28. Are the snaps finally able to be retained for years if needed?
  29. Why are there 4 kinds of replication? (Mirrorview, Celerra Replicator, Recoverpoint, SAN copy)
  30. Why are there still all these OSes to patch? (Win XP in the SPs, Linux on the Control Station and RecoverPoint, DART on the NAS blades, maybe more if they can run Rainfinity and Atmos on the blades as well)
  31. Why still no dedupe for FC and iSCSI?
  32. Why no dedupe for memory and cache?
  33. Why not sub-file dedupe?
  34. Why is Celerra still limited to 256TB per data mover?
  35. Is Celerra still limited to 16TB per volume? Or is yet another, completely separate system (Isilon) needed to do that?
  36. Is Celerra still limited to not being able to share a volume between data movers? Or is, again, Isilon needed to do that?
  37. Can Celerra non-disruptively move CIFS and NFS volumes between data movers?
  38. Why can there not be a single FCoE link to transfer all the protocols if the boxes are “unified”?
  39. Have the thin provisioning performance overheads been fixed?
  40. Have the pool performance bottlenecks been fixed? Or is it still recommended to use normal RAID LUNs for highest performance?
  41. Can one actually stripe/restripe within a FLARE pool now? When adding storage? With thin provisioning?
  42. What is the best practice for expanding, say, a 50 drive pool? How many drives do I have to expand by? Why?
  43. Does one still need to do a migration to use thin provisioning?
  44. Does one need to do yet another migration to “re-thin” a LUN once it gets temporarily chunky?
  45. Have the RAID5 and RAID6 write inefficiencies been fixed? And how?
  46. Will the benchmarks for the new systems use RAID6 or will they, again, show RAID10? After all, most customers don’t deploy RAID10 for everything, and RAID5 is thousands of times less reliable than RAID6. How about some SPC-1 benchmarks?
  47. Why is EMC still not fessing up to using a filesystem for their new pools? Maybe because they keep saying doing so is not a “real” SAN, even in recent communication?
  48. Since EMC is using a filesystem in order to get functionality in the CX SPs like pools, thin provisioning, compression and auto-tiering (and probably dedupe in the future), how are they keeping fragmentation under control? (how the tables have turned!)

What I notice is a lack of thought leadership when it comes to technology innovation – EMC is still playing catch-up with other vendors in many important architectural areas, and keeps buying companies left and right to plug portfolio holes. All vendors play catch-up to some extent; the trick is finding the one playing catch-up in the fewest areas and leading in the most, with the fewest compromises.

Some areas of NetApp leadership to answer a question in the comments:

  • First Unified architecture (since 2002)
  • First with RAID that has the space efficiency of RAID5, the performance of RAID10 and the reliability of RAID6
  • First with block-level deduplication for all protocols
  • First with zero-impact snapshots
  • First with Megacaches (up to 16TB cache per system possible)
  • First with VMware integration including VM clones
  • First with space- and time-efficient, integrated replication for all protocols
  • First with snapshot-based archive storage (being able to store different versions of your data for years on nearline storage)
  • First with Unified Connect and FCoE – single cable capability for all protocols (FC, iSCSI, NFS, CIFS)

However, EMC is strong when it comes to marketing, messaging and – wait for it – the management part. Since it’s amazingly difficult to integrate all the technologies EMC has acquired over the years (heck, it’s taking NetApp forever to properly integrate Spinnaker and that’s just one other architecture), EMC is focusing instead on the management of the various bits (the current approach being Unisphere, tying together a subset of EMC’s acquisitions).

So, Unified Storage in EMC-speak really means unified management. Which would be fine if they were upfront about it. Somehow, “our new arrays with unified management but not unified architecture” doesn’t quite roll off the tongue as easily as “unified storage”.

Mike Riley eloquently explains whether it’s easier to fix an architecture or fix management here. Ultimately, unified management can’t tackle all the underlying problems and limitations, but it does allow for some very nice demos.

A cool GUI with frankenstorage behind it is like putting lipstick on a pig, or putting a nice shell on top of a car cobbled together from disparate bits. The underlying build is masked superficially, until it’s not… usually, at the worst possible time.

Sure, ultimately, management is what the end user interfaces with, and many people won’t really care about what goes on inside, nor have the time or inclination to learn. I merely invite them to start thinking more about the inner bits: when things get tricky, that’s also when a portal GUI meshing together 4-5 different products stops working as expected, and when you start bouncing between 3-4 completely different support teams, all trying to figure out which of the underlying products is causing the problem.

Always think in terms of what happens if something goes wrong with a certain subsystem and always assume things will break – only then can you have proper procedures and be prepared for the worst.

And always remember that the more complex a machine, the more difficult it can be to troubleshoot and fix when it does break (and it will break – everything does). There’s no substitute for clean and simple engineering.

Of course, Rube Goldberg-esque machines can be entertaining… if entertainment is what you’re after :)

D

Et tu, Brute? EMC offering capacity guarantees? The sky is falling! Will Chuck resign?

It came to my attention that EMC is offering a 20% efficiency guarantee vs. the competition (they seem to be focusing on NetApp as usual, but that’s beside the point in this post). See here.

Now, I won’t go ahead and attack their guarantee. Good luck with that, more power to you etc etc. They need all the competitive edge they can get.

No, what I’ll do is expose yet more EMC messaging inconsistency. If you’ve been following the posts in my site you’ll notice that I have absolutely nothing against EMC products – but I do have issues with how they’re sold and marketed and what they’ll say about the competition.

First and foremost: most major storage players, with the notable exception of EMC, have been offering some kind of efficiency guarantee. Sure, you needed to read the fine print to see if your specific use case would be covered (like with every binding document), but at least the guarantees were there. NetApp was first with our 50% efficiency guarantee, then came others (HDS and 3Par are just some that come to mind). We even offer a 35% guarantee if we virtualize EMC arrays :)

We all have different ways of getting the efficiency. NetApp has a combo of deduplication, thin provisioning, snapshots, highly efficient RAID and thin cloning, for instance. Others have a subset (3Par has their really good thin provisioning, for example). Regardless, we all tried to offer some measure of extra efficiency in these hard economic times.

And it’s not just marketing: I have multiple customers that, especially in virtualized environments, save at least 70% (that’s a real 70%, not 70% because we switched them from RAID10 to RAID-DP – literally, a 10TB data set occupying 3TB). And for deployments like VDI, the savings are in the extreme range.

EMC’s stance was to, at a minimum, ridicule said guarantees. The inimitable Barry Burke (the storage anarchist) had this pretty funny post.

Chuck Hollis has been far more polemical about this – the worst was when he said he’d quit if EMC tried to do something similar (see here in the comments). BTW – we are all waiting for that resignation :) (on a more serious note, Chuck, if you don’t resign over this, at least refrain from making such promises next time).

He also called other guarantees “shenanigans” here. I guess he’s really against the idea of guarantees.

But now it’s all good you see, EMC is offering a blanket 20% efficiency guarantee versus the competition! I.e. they will be able to provide 20% more actual usable storage or else they’ll give you free drives to cover the difference. You see, this guarantee is real, not like what all the other companies offer :)

Kidding aside, methinks they’re missing the point – this (to go back to my favorite car analogies) is like saying: “Both our car and your car have a 3-liter engine, but yours has twin turbos, a racing intercooler and 3 times the horsepower; we won’t take any of that into account, we will strictly examine whether you indeed have a 3-liter engine, and we’ll bore ours out to make it 3.6 liters for free”. Alrighty then. I’ll keep my turbos. But how will they deal with an existing NetApp customer that’s already getting something like 3x efficiency? Fulfilling the guarantee terms could get mighty expensive.

If a NetApp customer is getting 3x the usable storage due to deduplication and other means, will EMC come up with the difference or will they just make sure they offer 20% more raw storage?

To the customer, all that matters is how much effective storage they’re able to use, not how much raw storage is in the box.
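
To put rough numbers on the raw-versus-effective distinction, here’s a trivial sketch in plain Python. The 100TB starting point and the 3x efficiency ratio are assumptions for illustration only, not figures from any real account:

raw_tb = 100.0           # hypothetical existing raw capacity
efficiency = 3.0         # hypothetical effective:raw ratio from dedupe, thin provisioning, clones
effective_tb = raw_tb * efficiency        # what the customer actually uses today: 300TB

guarantee_raw_tb = raw_tb * 1.20          # "20% more" read strictly as raw disk
# with no comparable data reduction, effective capacity is roughly the raw capacity
shortfall_tb = effective_tb - guarantee_raw_tb

print(f"Customer today : {raw_tb:.0f} TB raw -> {effective_tb:.0f} TB effective")
print(f"20% guarantee  : {guarantee_raw_tb:.0f} TB raw -> ~{guarantee_raw_tb:.0f} TB effective")
print(f"Shortfall      : {shortfall_tb:.0f} TB ({effective_tb / guarantee_raw_tb:.1f}x gap)")

In that made-up scenario, genuinely matching the customer’s effective capacity would take roughly 180TB of additional drives, not 20TB – which is exactly why the question above matters.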

But, still, this is not what this post is about.

Throughout the years, NetApp and other vendors have offered true innovation on different fronts. Each time that happens, EMC (which also innovates – mostly through acquisition – but likes to act as if nobody else does) employs its usual “minimize and divert” technique: either they trivialize the innovation (“who’d want to do that?”) or they proclaim it false, then divert attention to something they already do (or will do in a few years).

This is even the case for technologies EMC eventually acquired, like Data Domain. Before EMC acquired Data Domain, they disparaged the product and claimed it was the worst kind of device you’d ever want in your datacenter, then tried to sell you the execrable DL3D (AKA the Quantum DXi – don’t get me started, the first release was an utter mess).

We all know what happened to that story eventually: at the moment, EMC is offering to swap out existing DL3Ds for free in many cases, and put Data Domain in their place since it’s infinitely better. But wait, weren’t they saying how terrible Data Domain was compared to DL3D?

Some will say this is fine since they’re just trying to compete, and “all is fair”. Personally, if I were approached by sales teams with those about-face tactics, I’d be annoyed.

So, without further ado, I present you with a slide a colleague created. Some of the timing may be a bit off, but the gist should be fairly clear… :)

I could have added a few more lines (Flash Cache, for instance) but it would have made for too busy a slide.

EDIT: I’ll add something I posted as a comment on someone else’s blog that I think is germane.

Since, to provide apples-to-apples protection, EMC HAS to be configured with RAID6, where are the public benchmarks showing EMC RAID6? As you well know, ALL NetApp benchmarks (SPEC, SPC) are with RAID-DP. Any EMC benchmarks around are with RAID10.

Maybe another guarantee is needed:

Provide no worse protection, functionality, space efficiency and performance than competitor X.

Otherwise, you’re only tackling a relatively unimportant part of the big picture.

D