So now it is OK to sell systems using “Raw IOPS”???

As the self-proclaimed storage vigilante, I will keep bringing these idiocies up as I come across them.

So, the latest “thing” now is selling systems using “Raw IOPS” numbers.

Simply put, some vendors are selling based on the aggregate IOPS the system will do based on per-disk statistics and nothing else.

They are not providing realistic performance estimates for the proposed workload, factoring in the appropriate RAID type, I/O sizes, hot vs. cold data, and the storage controller overhead of doing all of it. That’s probably too much work.

For example, if one assumes 200 IOPS per disk, and 200 such disks are in the system, this vendor is quoting 40,000 “Raw IOPS”.

This is about as useful as shoes on a snake. Probably less.

The reality is that this is the ultimate “it depends” scenario: the achievable IOPS depend on far more than how many random 4K IOPS a single disk can sustain. Just using RAID6 could mean dividing the raw IOPS by 6 where random writes are concerned – and that’s only one of many things that affect performance.
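To make the arithmetic concrete, here is a back-of-envelope sketch. The read/write mix and RAID penalty figures below are illustrative assumptions, not measurements from any real array:

```python
# Back-of-envelope sketch: why "raw IOPS" overstates deliverable IOPS.
# The read/write mix and penalty figures are illustrative assumptions.

def effective_iops(disks, iops_per_disk, read_fraction, write_penalty):
    """Rough effective host IOPS once the RAID write penalty is counted.

    Each host write costs `write_penalty` back-end disk I/Os
    (roughly 2 for RAID 10, 4 for RAID 5, 6 for RAID 6).
    """
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    # Average back-end cost of one host I/O for this read/write mix:
    cost_per_host_io = read_fraction * 1 + write_fraction * write_penalty
    return raw / cost_per_host_io

raw_iops = 200 * 200  # the 40,000 "Raw IOPS" a vendor might quote
realistic = effective_iops(200, 200, read_fraction=0.7, write_penalty=6)
print(raw_iops)          # 40000
print(round(realistic))  # 16000
```

Even this sketch ignores controller overhead, caching, I/O sizes and hot vs. cold data, so the real number could be lower still.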

Please refer to prior articles on the subject such as the IOPS/latency primer here and undersizing here. And some RAID goodness here.

If you’re a customer reading this, you have the ultimate power to keep vendors honest. Use it!


5 Replies to “So now it is OK to sell systems using “Raw IOPS”???”

  1. Hi D,

    I am a regular visitor to your blog and have tried to understand your great tech posts, especially the IOPS primer. Now I work with NetApp storage, and people tell me that NetApp consumes/wastes too much storage…

    My reply to that is to ask them questions, like: what do you want to use it for? I tell them about the double disk failure protection without a performance penalty… I tell them about the storage efficiency features that will make them forget how much storage was sacrificed…

    It will be great if you could do a blog post about two topics …..

    How to size if, e.g., I want 8TB of usable space on a NetApp box with dual controllers? And another example with 20TB of usable SAS capacity… This would be great.

    How would you explain in your words the cost effectiveness of a netapp box investment ?

    I am always learning and have read quite a few of your posts… I want to do NetApp sizing and storage calculations like a pro. I love NetApp and ONTAP, so I can take a lot of hatred and then turn it in favor of NetApp. I will be grateful for your expert insight and a post on my requested topics.

    1. Thanks for the kind words. It’s kinda off-topic but I’d already posted something about usable space before:

      If I were you I’d work with your local NetApp engineers, they will teach you how to do all this.

      But the short answer is – sizing for usable space is not the way to do it in general, and especially with the more intelligent storage out there.

      I’d size for performance first, then factor in space savings via clones, dedupe and compression, add on top the estimated capacity needed to satisfy things like snapshots, and arrive at a final figure.

      So, depending on the problem being solved, 8TB usable could become 100TB logical…
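      The sizing approach above can be sketched as simple arithmetic. The efficiency ratio and snapshot reserve used here are purely illustrative assumptions, not NetApp guidance:

```python
# Illustrative sketch of "size for performance first, then capacity".
# The ratios are made-up assumptions for the example.

def physical_needed_tb(logical_tb, efficiency_ratio, snapshot_reserve):
    """Physical TB needed to hold `logical_tb` of logical data, given an
    overall space-efficiency ratio (dedupe/compression/clones) and a
    fractional reserve added on top for snapshots."""
    base = logical_tb / efficiency_ratio
    return base * (1.0 + snapshot_reserve)

# 100 TB logical at a 12.5:1 overall efficiency ratio fits in 8 TB,
# matching the "8TB usable could become 100TB logical" figure:
print(physical_needed_tb(100, 12.5, 0.0))   # 8.0
# With an extra 25% reserved for snapshots:
print(physical_needed_tb(100, 12.5, 0.25))  # 10.0
```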


  2. Been loving your site since I followed a post you made on Spiceworks. Currently I’m working with a vendor (rhymes with Hell) that is doing exactly this: sizing based on raw IOPS. They were going to sell me a 700 IOPS system, since their tool says our usage is under 300 IOPS for 65% of the day, but I opted for their 3,000 IOPS build. From what their tools have shown, our network currently has pretty low usage, so I think we should be good for the next 5 years on their solution. We have a couple of DBs, but for the most part it’s all file-based. This is going to change a bit in the upcoming years, but that leads to my next question.

    I was wondering if you’ve ever had any experience with the EqualLogic systems, and what you think of their tier-per-appliance approach. Have you dealt with any networks with more than one appliance? They’re *willing* to sell me another appliance to handle hot data, and they say it auto-tiers… but is it actually capable of doing this in any real-world scenario?

    I know the focus here is mostly on NetApp and EMC, but I’d be curious to know what you think of D… erm, Hell.

    1. Howdy,

      Well – what sort of IOPS – per my article here:

      In general, 700 IOPS sounds kinda low so almost anything should be able to do it, assuming those are your typical random 4-16K IOPS.

      Regarding EqualLogic – I don’t encounter them much lately, especially after Dell went private. The typical issue is that it won’t comfortably scale past 2-3 boxes, but if your performance needs are that low, you don’t really need tiering anyway.

      I’m not sure how well its tiering works. Tiering is almost a philosophical argument. At NetApp we prefer caching, since that’s the easiest, most time-proven method of improving performance.

      You see – there ARE public places where a storage vendor can prove their tiering/caching/whatever works. Check the SPC-1 benchmark, for example.

      Notice who’s missing.

      In addition – if we forget about what seems like low I/O requirements, what else do you want to do with your storage?

      And does EqualLogic protect against things like lost disk writes by offering end-to-end integrity? (This goes far beyond RAID.)

      What about other data protection constructs like snaps and clones – how efficient are they with EqualLogic and their 15MB page size? (Ask what happens to snapshot/replication disk sizing if you keep, say, 1,000 snapshots overall.)
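      The page-size question can be made concrete with a worst-case sketch, assuming copy-on-write snapshots that preserve whole pages and scattered small overwrites; the numbers are illustrative:

```python
# Worst-case sketch: snapshot space consumed when each small overwrite
# lands on a different page, so a whole page must be preserved per write.
# Page sizes and write counts are illustrative assumptions.

def snapshot_space_mb(scattered_writes, page_size_mb):
    """Snapshot space preserved, worst case (one full page per write)."""
    return scattered_writes * page_size_mb

# 1,000 scattered 4 KB overwrites under a 15 MB page size:
print(snapshot_space_mb(1000, 15))                   # 15000 MB preserved
# The same overwrites under a 4 KB page size:
print(round(snapshot_space_mb(1000, 4 / 1024), 2))   # 3.91 MB
```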

      1. All good questions. Going down the list:

        I can’t even begin to give you a precise description of our IOPS load. DPACK tells me 700 IOPS is our peak, with random I/O sizes of 16K-104K. This left out a couple of our systems but covers the most important ones.

        We’ve only dealt with a couple of EqualLogic boxes so far. They seem to be pretty decent units altogether. I’m also considering the future needs of the company. We’re going to a SharePoint-like filesystem soon, and I’d like the ability to add faster storage if the current box proves too slow. As long as the auto-tiering works, being limited to two boxes is fine.

        I’ll stay out of philosophical arguments for now ;). SAN isn’t my area of expertise currently, but I’ll keep that in mind for the future.

        I’ll check SPC-1. Are there other sites you would recommend as well?

        Honestly, bulk storage is all we really need right now. We were using a Powervault, and if it wasn’t for a deal we’re getting, the next unit would’ve been a Powervault as well. Good question on what else we really need. The more I research storage, the more fun features I find myself wanting to play with… but as for what we need, it’s pretty basic.

        FluidFS seems to be their answer for end-to-end integrity. I’m planning on not needing snaps/clones/replication, as our backup solution will fill most of these roles. We’re not planning on a new SAN for quite a while, and clones aren’t really necessary at our size. The page size was actually something I caught onto early; it’s one of the reasons we’re going with the type of backup solution we are (near-continuous data protection). I know this will cause heavier traffic, which is one of the reasons I oversized the SAN.

        As for snaps specifically (per Dell’s site):
        128 per volume / up to 2,048 total per group

        Thanks for the response. It’s given me a chance to go over and solidify a couple of points in my mind for the build. I don’t have the option for NetApp this time around, but I will definitely look more deeply into it for future projects.
