Just a quick post to address something many people either get wrong or just live with due to convenience.
In summary: please, let’s stop using average I/O sizes to characterize storage system performance. It’s wrong and doesn’t describe how the real world works. Using an average number is as bad as using the small-block, 100%-read numbers shown in vanity benchmarks. Neither is representative of real life.
Using a single I/O size for benchmarking became common practice partly because of vanity benchmarks and partly to provide a level playing field for comparing multiple products.
But ultimately, even though the goal of comparing different systems is desirable, using a single I/O size is fundamentally flawed.
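To make the flaw concrete, here’s a minimal Python sketch. The workload mix below is an invented assumption, but it shows how an average I/O size can describe almost none of the I/Os a real mixed workload actually issues:

```python
# Hypothetical mixed workload: 90% small 4 KB I/Os, 10% large 256 KB I/Os.
# (The counts and sizes are illustrative assumptions, not measured data.)
io_sizes_kb = [4] * 900 + [256] * 100

avg_kb = sum(io_sizes_kb) / len(io_sizes_kb)
print(avg_kb)  # 29.2 -- an I/O size that never actually occurs in the workload
```

Benchmark that system at a uniform 29.2 KB (or the nearest round number) and you exercise neither the small random I/Os nor the large ones the real workload generates.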
Continue reading “Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking”
I’m seeing some really “out there” marketing lately, with every vendor (including us) trying to find an angle that sounds exciting while not being an outright lie (most of the time).
A competitor recently claimed an industry first of up to 1.7 million (undefined type) IOPS in a single rack.
The number (which admittedly sounds solid) got me thinking. Was the “industry first” that nobody else had done up to 1.7 million IOPS in a single rack?
Continue reading “Marketing fun: NetApp industry first of up to 13 million IOPS in a single rack”
I really resisted using the “flash in the pan” phrase in the title… first, because the term is overused and second, because I don’t believe solid state is of limited value. On the contrary.
However, I am noticing an interesting trend among some newcomers in the array business, desperate to find a flash niche to compete in:
Writing their storage OS around very specific NAND flash technologies. Almost as bad as writing an entire storage OS to support a single hypervisor technology, but that’s a story for another day.
Continue reading “Are some flash storage vendors optimizing too heavily for short-lived NAND flash?”
As the self-proclaimed storage vigilante, I will keep bringing these idiocies up as I come across them.
So, the latest “thing” now is selling systems using “Raw IOPS” numbers.
Simply put, some vendors are selling based on the aggregate IOPS a system should deliver according to per-disk statistics and nothing else.
They are not providing realistic performance estimates for the proposed workload, accounting for the appropriate RAID type, I/O sizes, hot vs. cold data, and the storage controller overhead of doing all that work. That’s probably too much work.
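As a rough illustration of what’s missing, here’s a back-of-envelope Python sketch. The disk count, per-disk IOPS, and read/write mix are all made-up assumptions, and it still ignores caching, controller overhead, and hot vs. cold data; even so, just accounting for the RAID write penalty cuts the “Raw IOPS” number roughly in half:

```python
def effective_iops(disk_count, iops_per_disk, read_fraction, write_penalty):
    """Usable host IOPS once the RAID write penalty is accounted for.

    write_penalty: back-end I/Os generated per host write (e.g. 2 for
    RAID 10, 4 for RAID 5, 6 for RAID 6 -- small random writes, worst case).
    """
    raw = disk_count * iops_per_disk
    write_fraction = 1 - read_fraction
    # Every host write is amplified into multiple back-end I/Os.
    return raw / (read_fraction + write_fraction * write_penalty)

# 24 disks at a nominal 150 IOPS each: the "Raw IOPS" pitch says 3,600.
# With a 70/30 read/write mix on RAID 5, the usable number is far lower:
print(round(effective_iops(24, 150, read_fraction=0.7, write_penalty=4)))  # 1895
```

The point isn’t the exact formula; it’s that the honest number depends on inputs the “Raw IOPS” pitch never mentions.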
Continue reading “So now it is OK to sell systems using “Raw IOPS”???”
<I understand this extremely long post is redundant for seasoned storage performance pros – however, these subjects come up so frequently that I felt compelled to write something. Plus, even the seasoned pros don’t seem to get it sometimes… 🙂 >
IOPS: Possibly the most common measure of storage system performance.
IOPS means Input/Output (operations) Per Second. Seems straightforward. A measure of work vs time (not the same as MB/s, which is actually easier to understand – simply, MegaBytes per Second).
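Since the two measures are easy to conflate, here’s a quick sketch of the relationship; the IOPS figure and I/O sizes are arbitrary examples:

```python
def mb_per_sec(iops, io_size_kb):
    # Throughput is simply IOPS multiplied by the I/O size.
    return iops * io_size_kb / 1024

# The same 100,000 IOPS means wildly different throughput
# depending on the I/O size being moved:
print(mb_per_sec(100_000, 4))   # 390.625 MB/s at 4 KB per I/O
print(mb_per_sec(100_000, 64))  # 6250.0 MB/s at 64 KB per I/O
```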
How many of you have seen storage vendors extolling the virtues of their storage by using large IOPS numbers to illustrate a performance advantage?
How many of you decide on storage purchases and base your decisions on those numbers?
However: how many times has a vendor actually specified what they mean when they utter “IOPS”? 🙂
For the impatient, I’ll say this: IOPS numbers by themselves are meaningless and should be treated as such. Without additional metrics such as RAID type, randomness, latency, read vs write % and I/O size (to name a few), an IOPS number is useless.
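One way to see why a bare IOPS figure misleads: by Little’s Law, throughput is roughly outstanding I/Os divided by latency, so a big IOPS number can be “bought” with deep queues and dreadful response times. A small sketch with invented numbers:

```python
def iops_from_latency(outstanding_ios, latency_ms):
    # Little's Law: throughput = concurrency / response time.
    return outstanding_ios * 1000.0 / latency_ms

# Two systems that both look "fast" by IOPS alone,
# but feel completely different to the applications using them:
print(iops_from_latency(256, 25))  # 10240.0 IOPS at a painful 25 ms per I/O
print(iops_from_latency(8, 0.5))   # 16000.0 IOPS at a snappy 0.5 ms per I/O
```

The first system only reaches its number by keeping 256 I/Os in flight; every single one of them waits 25 ms.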
And now, let’s elaborate… (and, as a refresher regarding the perils of ignoring such things when it comes to sizing, you can always go back here).
Continue reading “An explanation of IOPS and latency”