An explanation of IOPS and latency

<I understand this extremely long post is redundant for seasoned storage performance pros – however, these subjects come up so frequently that I felt compelled to write something. Plus, even the seasoned pros don’t seem to get it sometimes… :) >

IOPS: Possibly the most common measure of storage system performance.

IOPS means Input/Output (operations) Per Second. Seems straightforward. A measure of work vs time (not the same as MB/s, which is actually easier to understand – simply, MegaBytes per Second).

How many of you have seen storage vendors extolling the virtues of their storage by using large IOPS numbers to illustrate a performance advantage?

How many of you decide on storage purchases and base your decisions on those numbers?

However: how many times has a vendor actually specified what they mean when they utter “IOPS”? :)

For the impatient, I’ll say this: IOPS numbers by themselves are meaningless and should be treated as such. Without additional metrics such as latency, read vs write % and I/O size (to name a few), an IOPS number is useless.

And now, let’s elaborate… (and, as a refresher regarding the perils of ignoring such things when it comes to sizing, you can always go back here).

 

One hundred billion IOPS…

[Image: Dr. Evil – “one hundred billion” meme]

I’ve competed with various vendors that promise customers high IOPS numbers. On a small system with under 100 standard 15K RPM spinning disks, a certain three-letter vendor was claiming half a million IOPS. Another, a million. Of course, my customer was impressed, since that was far, far higher than the number I was providing. But what’s reality?

Here, I’ll do one right now: The old NetApp FAS2020 (the older smallest box NetApp had to offer) can do a million IOPS. Maybe even two million.

Go ahead, prove otherwise.

It’s impossible, since there is no standard way to measure IOPS, and the official definition of IOPS (operations per second) does not specify certain extremely important parameters. By doing any sort of I/O test on the box, you are automatically imposing your benchmark’s definition of IOPS for that specific test.

 

What’s an operation? What kind of operations are there?

It can get complicated.

An I/O operation is simply some kind of work the disk subsystem has to do at the request of a host and/or some internal process. Typically a read or a write, with sub-categories (for instance read, re-read, write, re-write, random, sequential) and a size.

Depending on the operation, its size could range anywhere from bytes to kilobytes to several megabytes.

Now consider the following most assuredly non-comprehensive list of operation types:

  1. A random 4KB read
  2. A random 4KB read followed by more 4KB reads of blocks in logical adjacency to the first
  3. A 512-byte metadata lookup and subsequent update
  4. A 256KB read followed by more 256KB reads of blocks in logical sequence to the first
  5. A 64MB read
  6. A series of random 8KB writes followed by 256KB sequential reads of the same data that was just written
  7. Random 8KB overwrites
  8. Random 32KB reads and writes
  9. Combinations of the above in a single thread
  10. Combinations of the above in multiple threads
…this could go on.
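
To make an IOPS number meaningful, you’d have to pin down all of these parameters. Here’s a minimal sketch of what a properly qualified workload description might look like (the structure and field names are my own invention, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    """The minimum context an IOPS figure needs before it means anything.
    Illustrative only - not any standard's definition."""
    io_size_bytes: int     # e.g. 4096 for a 4KB operation
    read_pct: float        # fraction of operations that are reads
    random_pct: float      # fraction that are random (vs. sequential)
    threads: int           # parallel streams issuing I/O
    working_set_gb: float  # amount of data actually being touched
    latency_ms: float      # average latency at the quoted IOPS

# "One million IOPS" could describe either of these very different workloads:
cache_friendly = WorkloadSpec(512, 1.0, 0.0, 256, 1, 0.2)
oltp_like = WorkloadSpec(8192, 0.6, 0.9, 32, 500, 5.0)
```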

As you can see, there’s a large variety of I/O types, and true multi-host I/O is almost never of a single type. Virtualization further mixes up the I/O patterns, too.

Now here comes the biggest point (if you can remember one thing from this post, this should be it):

No storage system can do the same maximum number of IOPS irrespective of I/O type, latency and size.

Let’s reiterate:

It is impossible for a storage system to sustain the same peak IOPS number when presented with different I/O types and latency requirements.

 

Another way to see the limitation…

Here’s a gross oversimplification that might help prove the point that the type and size of the operations you do matter when it comes to IOPS – meaning that a system that can do a million 512-byte IOPS can’t necessarily do a million 256KB IOPS.

Imagine a bucket, or a shotshell, or whatever container you wish.

Imagine in this container you have either:

  1. A few large balls or…
  2. Many tiny balls
The bucket ultimately contains about the same volume of stuff either way, and that volume is the major limiting factor. Clearly, the container can hold far fewer of the large balls than of the small ones.
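
In storage terms, the bucket is a fixed bandwidth budget. A back-of-the-envelope conversion (a sketch with illustrative numbers) shows why a million 512-byte operations and a million 256KB operations are completely different requests:

```python
def implied_mb_per_sec(iops: int, io_size_bytes: int) -> float:
    """Throughput a given IOPS figure implies at a given I/O size."""
    return iops * io_size_bytes / 1_000_000

print(implied_mb_per_sec(1_000_000, 512))         # 512 MB/s - plausible
print(implied_mb_per_sec(1_000_000, 256 * 1024))  # ~262,000 MB/s - a different universe
```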
[Image: IOPS containers – the same container filled with a few large balls vs. many small balls]

They kinda look like shotshells, don’t they?

Now imagine the little spheres being forcibly evacuated rapidly out of one end… which takes us to…

 

Latency matters

So, we’ve established that not all IOPS are the same – but what is of far more significance is latency as it relates to the IOPS.

If you want to read no further – never accept an IOPS number that doesn’t come with latency figures, in addition to the I/O sizes and read/write percentages.

Simply speaking, latency is a measure of how long it takes for a single I/O request to happen from the application’s viewpoint.
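
The two metrics are directly linked for a single outstanding request: if each I/O takes 5ms end to end, one thread can complete at most 200 of them per second. A quick sketch of that ceiling (hypothetical latencies):

```python
def max_iops_one_outstanding(latency_ms: float) -> float:
    """With one I/O in flight at a time, IOPS is capped at 1/latency."""
    return 1000.0 / latency_ms

for lat_ms in (0.5, 1.0, 5.0, 25.0):
    print(f"{lat_ms:>4} ms -> at most {max_iops_one_outstanding(lat_ms):,.0f} IOPS per thread")
```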

In general, when it comes to data storage, high latency is just about the least desirable trait, right up there with poor reliability.

Databases especially are very sensitive to latency – DBs make several kinds of requests that need to be acknowledged quickly (ideally in under 10ms, and writes in well under 5ms). Redo log writes in particular need to be acknowledged almost instantaneously for a write-heavy DB – under 1ms is preferable.

High sustained latency in a mission-critical app can have a nasty compounding effect – if a DB can’t write to its redo log fast enough for a single write, everything stalls until that write completes, then moves on. If it constantly can’t write to its redo log fast enough, requests pile up and the user experience becomes unacceptable – the DB may be the back end to a very busy web front end for Internet sales, for example. A delay in the DB delays the web front end too, and the company could well lose thousands of customers and millions of dollars while the delay lasts. Some companies could also face penalties if they cannot meet certain SLAs.
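
If you want a rough feel for how your own storage handles this redo-log-style pattern (small synchronous writes), a quick-and-dirty sketch like the one below can help. It’s no substitute for a proper benchmark tool, and the file name, write count and I/O size are arbitrary choices of mine:

```python
import os
import time

def sync_write_latency(path="latency_probe.bin", writes=1000, size=4096):
    """Time small synchronous writes, roughly mimicking a redo log."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    samples = []
    try:
        for _ in range(writes):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force the write all the way to stable storage
            samples.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    samples.sort()
    return {"avg_ms": 1000 * sum(samples) / len(samples),
            "p99_ms": 1000 * samples[int(len(samples) * 0.99)]}

print(sync_write_latency())
```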

On the other hand, applications doing sequential, throughput-driven I/O (like backup or archival) are nowhere near as sensitive to latency (and typically don’t need high IOPS anyway, but rather need high MB/s).

Here’s an example from an Oracle DB – a system doing about 15,000 IOPS at 25ms latency. Doing more IOPS would be nice, but the DB needs the latency to go a lot lower in order to see significantly improved performance – notice the increased I/O waits and latency, and that the top event causing the system to wait is I/O:

[Image: Oracle AWR report excerpt – roughly 15,000 IOPS, with I/O as the top wait event at ~25ms latency]

Now compare to this system (the data is in a different format, but you’ll get the point):

Notice that, in this case, the system is waiting primarily for CPU, not storage.

A significant amount of I/O wait is a good way to determine whether storage is an issue (there can be other latencies outside the storage, of course – CPU and network are a couple of the usual suspects). Even with good latencies, a lot of I/O waits means the application would like faster responses from the storage system.

But this post is not meant to be a DB sizing class. Here’s the important bit that I think is confusing a lot of people and is allowing vendors to get away with unrealistic performance numbers:

It is possible (but not desirable) to have high IOPS and high latency simultaneously.
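
How? Queueing theory’s Little’s Law ties throughput to concurrency and latency: IOPS = outstanding I/Os ÷ response time. Pile on enough parallel requests and the IOPS figure climbs even while every individual request crawls. A sketch with made-up numbers:

```python
def littles_law_iops(outstanding_ios: int, latency_ms: float) -> float:
    """Little's Law: throughput = concurrency / response time."""
    return outstanding_ios / (latency_ms / 1000.0)

# 8 I/Os in flight at 0.5ms each: snappy AND respectable throughput.
print(littles_law_iops(8, 0.5))      # 16,000 IOPS
# 5,000 I/Os in flight at 20ms each: "a quarter million IOPS!"...
# while every single request is painfully slow.
print(littles_law_iops(5000, 20.0))  # 250,000 IOPS
```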

Here’s another, once again oversimplified, example:

Imagine 2 different cars, both with a top speed of 150mph.

  • Car #1 takes 50 seconds to reach 150mph
  • Car #2 takes 200 seconds to reach 150mph

The maximum speed of the two cars is identical.

Does anyone have any doubt as to which car is actually faster? Car #1 indeed feels about 4 times faster than Car #2, even though they both hit the exact same top speed in the end.

Let’s take it an important step further, keeping the car analogy since it’s very relatable to most people (but mostly because I like cars):

  • Car #1 has a maximum speed of 120mph and takes 30 seconds to hit 120mph
  • Car #2 has a maximum speed of 180mph, takes 50 seconds to hit 120mph, and takes 200 seconds to hit 180mph

In this example, Car #2 actually has a much higher top speed than Car #1. Many people, looking at just the top speed, might conclude it’s the faster car.

However, Car #1 reaches its top speed (120mph) far faster than Car #2 reaches that same speed.

Car #2 continues to accelerate (and, eventually, overtakes Car #1), but takes an inordinately long amount of time to hit its top speed of 180mph.

Again – which car do you think would feel faster to its driver?

You know – the feeling of pushing the gas pedal and the car immediately responding with extra speed that can be felt? Without a large delay in that happening?

Which car would get more real-world chances of reaching high speeds in a timely fashion? For instance, overtaking someone quickly and safely?

Which is why car-specific workload benchmarks like the quarter mile were devised: how many seconds does it take to traverse a quarter mile (the workload), and what is the speed at the end of it?

(I fully expect fellow geeks to break out the slide rules and try to prove the numbers wrong, probably factoring in gearing, wind and rolling resistance – it’s just an example to illustrate the difference between throughput and latency, I had no specific cars in mind… really).

 

And, finally, some more storage-related examples…

Some vendor claims… and the fine print explaining the more plausible scenario beneath each claim:

“Mr. Customer, our box can do a million IOPS!”

512-byte ones, sequentially out of cache.

“Mr. Customer, our box can do a quarter million random 4K IOPS – and not from cache!”

at 50ms latency.

“Mr. Customer, our box can do a quarter million 8K IOPS, not from cache, at 20ms latency!”

but only if you have 1000 threads going in parallel.

“Mr. Customer, our box can do a hundred thousand 4K IOPS, at under 20ms latency!”

but only if you have a single host hitting the storage so the array doesn’t get confused by different I/O from other hosts.

Notice how none of these claims are talking about writes or working set sizes… or the configuration required to support the claim.

 

What to look for when someone is making a grandiose IOPS claim

Audited validation and a specific workload to be measured against (that includes latency as a metric) both help. I’ll pick on HDS since they habitually show crazy numbers in marketing literature.

For example, from their website:

[Image: HDS marketing material claiming 4,000,000 IOPS]

 

It’s pretty much the textbook case of unqualified IOPS claims. No information as to the I/O size, reads vs writes, sequential or random, what type of medium the IOPS are coming from, or, of course, the latency…

However, that very same box barely breaks 200,000 SPC-1 IOPS with good latency in the audited SPC-1 benchmark:

[Image: HDS USP audited SPC-1 result – roughly 200,000 SPC-1 IOPS]

 

Last I checked, 200,000 was 20 times less than 4,000,000. Don’t get me wrong, 200,000 low-latency IOPS is a great SPC-1 result, but it’s not 4 million SPC-1 IOPS.

Check my previous article on SPC-1 and how to read the results here. And if a vendor is not posting results for a platform – ask why.

 

Where are the IOPS coming from?

So, when you hear those big numbers, where are they really coming from? Are they just fictitious? Not necessarily. So far, here are just a few of the ways I’ve seen vendors claim IOPS prowess:

  1. What the controller will theoretically do given unlimited back-end resources.
  2. What the controller will do purely from cache.
  3. What a controller that can compress data will do with all-zero data (demonstrated in the sketch below).
  4. What the controller will do assuming the data is at the FC port buffers (“huh?” is the right reaction, only one three-letter vendor ever did this so at least it’s not a widespread practice).
  5. What the controller will do given the configuration actually being proposed, driving a very specific application workload with a specified latency threshold and real data.

The figures provided by the approaches above are all real, in the context of how the test was done by each vendor and how they define “IOPS”. However, of the (non-exhaustive) options above, which one do you think is the most realistic when it comes to dealing with real application data?
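
Point 3 is easy to demonstrate: all-zero data is pathologically compressible, so a box that compresses or dedupes inline can post spectacular numbers against it while barely touching the back end. A quick illustration:

```python
import os
import zlib

zeros = b"\x00" * 1_048_576    # 1MiB of zeros, like a lazy benchmark might write
noise = os.urandom(1_048_576)  # 1MiB of incompressible data

print(len(zlib.compress(zeros)))  # ~1KB: the "data" all but vanishes
print(len(zlib.compress(noise)))  # ~1MiB: real entropy doesn't compress
```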

 

What if someone proves to you a big IOPS number at a PoC or demo?

Proof-of-Concept engagements or demos are great ways to prove performance claims.

But, as with everything, garbage in – garbage out.

If someone shows you IOmeter doing crazy IOPS, use the information in this post to at least find out the exact configuration of the benchmark. What’s the block size? Is it random, sequential, or a mix? How many hosts are doing I/O? Is the config being short-stroked? Is it coming entirely out of cache?

Typically, things like IOmeter can make for a good demo, but that doesn’t mean the combined I/O of all your applications follows the same parameters, nor does it mean the few servers hitting the storage at the demo are representative of your server farm with 100x the number of servers. Testing with as close to your application workload as possible is preferred. Don’t assume you can extrapolate – systems don’t always scale linearly.

 

Factors affecting storage system performance

In real life, you typically won’t have a single host pumping I/O into a storage array. More likely, you will have many hosts doing I/O in parallel. Here are just some of the factors that can affect storage system performance in a major way:

 

  1. Controller, CPU, memory, interlink counts, speeds and types.
  2. A lot of random writes. This is the big one since, depending on RAID level, the back-end I/O overhead could be anywhere from 2 I/Os (RAID 10) to 6 I/Os (RAID 6) per write, unless some advanced form of write management is employed (the sketch after this list shows the arithmetic).
  3. Uniform latency requirements – certain systems will exhibit latency spikes from time to time, even if they’re SSD-based (sometimes especially if they’re SSD-based).
  4. A lot of writes to the same logical disk area. This, even with autotiering systems or giant caches, still results in tremendous load on a rather limited set of disks (whether they be spinning or SSD).
  5. The storage type used and the amount – different types of media have very different performance characteristics, even within the same family (the performance between SSDs can vary wildly, for example).
  6. CDP tools for local protection – sometimes this can result in 3x the I/O to the back-end for the writes.
  7. Copy on First Write snapshot algorithms with heavy write workloads.
  8. Misalignment.
  9. Heavy use of space efficiency techniques such as compression and deduplication.
  10. Heavy reliance on autotiering (resulting in the use of too few disks and/or too many slow disks in an attempt to save costs).
  11. Insufficient cache with respect to the working set coupled with inefficient cache algorithms, too-large cache block size and poor utilization.
  12. Shallow port queue depths.
  13. Inability to properly deal with different kinds of I/O from more than a few hosts.
  14. Inability to recognize per-stream patterns (for example, multiple parallel table scans in a database).
  15. Inability to intelligently prefetch data.
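
To see how much the write penalty from point 2 matters, here’s a rough sketch of the classic front-end-to-back-end IOPS conversion, using the per-write I/O counts mentioned above (real systems with write coalescing, journaling or other optimizations will do better):

```python
# Classic back-end I/Os per front-end random write, with no write optimization:
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops: int, read_fraction: float, raid: str) -> float:
    """Back-end disk I/Os needed to service a random front-end workload."""
    reads = frontend_iops * read_fraction         # 1 back-end I/O per read
    writes = frontend_iops * (1 - read_fraction)  # penalty applies to writes
    return reads + writes * RAID_WRITE_PENALTY[raid]

# 10,000 front-end IOPS at 70% reads:
for level in RAID_WRITE_PENALTY:
    print(level, backend_iops(10_000, 0.7, level))
# RAID10: 13,000   RAID5: 19,000   RAID6: 25,000 back-end I/Os per second
```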

 

What you can do to get a solution that will work…

You should work with your storage vendor to figure out, at a minimum, the items in the following list, and, after you’ve done so, go through the sizing with them and see the sizing tools being used in front of you. (You can also refer to this guide).

  1. Applications being used and size of each (and, ideally, performance logs from each app)
  2. Number of servers
  3. Desired backup and replication methods
  4. Random read and write I/O size per app
  5. Sequential read and write I/O size per app
  6. The percentages of read vs write for each app and each I/O type
  7. The working set (amount of data “touched”) per app
  8. Whether features such as thin provisioning, pools, CDP, autotiering, compression, dedupe, snapshots and replication will be utilized, and what overhead they add to the performance
  9. The RAID type (R10 has an impact of 2 I/Os per random write, R5 4 I/Os, R6 6 I/Os – is that being factored in?)
  10. The impact of all those things on the overall headroom and performance of the array.

If your vendor is unwilling or unable to do this type of work, or, especially, if they tell you it doesn’t matter and that their box will deliver umpteen billion IOPS – well, at least now you know better :)

D


 

43 thoughts on “An explanation of IOPS and latency”

  1. Mike Riley

    A real-world example of why latency matters. Before coming to NetApp, I worked at a bank card company (later bought by JPMC). For our credit card authorization system, the number of IOPS quoted by the vendor was irrelevant. Our application IOPS number was helpful to the storage vendors but latency was king. If the storage latency was >4ms, our fraud and abandonment numbers went through the roof. The storage had to respond in 4ms or less in order for us to run the transaction through our fraud detection systems and respond to the user standing at the cash register before they got frustrated and switched to a different card. Basically, if storage couldn’t respond in time, we had enough lost revenue due to either fraud or abandonment to pay for an entirely new storage farm in about an hour. Nothing personal to the storage vendors but we had them all on speed-dial. You could either do it or you couldn’t. One year we literally had vendors stacked on the loading dock – one was on their way out; one was on their way in; all were on a right-of-return. The business literally depended on latency.

  2. sl0n

    Another example where latency, as Mike says, is king: mysql master-slave replication. Not providing too many details here, and simplifying – you can do writes in multiple threads on a master, but when the same writes get replicated to a slave they have to be applied sequentially, and therefore it’s a single-threaded process… at which point all that really matters is latency. It’s an 8Gb SAN fabric in our case and the numbers themselves are very good, e.g. most of the time our latency is ~1ms; however, for some DBs it’s “too high” and the storage-attached slaves can’t keep up with replication.

  3. Jay

    Well, I understand your oversimplification for the latency, but I can’t quite map the curve to a real storage system.

    I think that if a storage system can provide high IOPS, then it automatically can also answer very, very fast to a read/write command (and that means good latency).

    NetApp-techies in TR-3808, page 6 write something similar:
    http://www.netapp.com/us/library/technical-reports/tr-3808.html
    “Overall average latencies for all test cases closely tracked the performance differences as measured in
    IOPS between the protocols. This is expected as all test cases were run for the same time duration and,
    in general, higher numbers of IOPS map directly to lower overall average latencies.”

    So in the end, latency depends on high IOPS, doesn’t it? (I mean a specific I/O workload that simulates a specific application, e.g. VMware)

    1. Dimitris Post author

      Hi Jay,

      It’s not quite that simple. There are storage systems out there that advertise high IOPS – but the latency is not advertised. If you push the vendor for the number, you may discover the latency for the high IOPS is an unusable high number.

      BTW – there is a relationship between IOPS and latency as you note, but it’s definitely not linear, and it depends on each type of system (some scale more linearly than others). So, a certain system might be able to provide good latency and linear scaling up to 300000 IOPS, then the latency might jump 3x to go to 400000 IOPS.

      If you only knew the latency at the 400000 IOPS point and tried to estimate the latency at 100000 IOPS, your number would be wrong due to the nonlinear scaling.

      So, my point is that using just the IOPS number without the latency is useless in every single case.

      More useful are IOPS vs latency curves like you see in the SPC-1 benchmark, then you know how the system responds as IOPS go up.

      In addition, some systems respond to certain types of IOPS more efficiently than others.

      For example, a system may provide excellent latency at high 512-byte read IOPS, but the latency will take a nosedive when doing a blend of 4K reads and writes.

      D

      1. Nikolay

        Hi, Dimitris!

        First of all I’d like to thank you for your blog – it’s one of the best on the Internet about storage systems aspects!
        Regarding this particular undoubtedly fundamental and very useful post – one thing is still unclear to me.
        IOPS (as per Wikipedia) stands for Input/Output Operations _Per Second_, so why is the relationship between IOPS and latency not linear? With more operations per second (it doesn’t matter how big or random/sequential these IOPS are at the moment – let’s say 4K random within a single volume and aggregate, for example) the system will get busier overall (CPU, iSCSI/FC ports, internal HD cache and so on), and, therefore, latency will increase (like network ping reply times increase when a link gets more utilized). Please correct my logic if I’m wrong :)

        Thanks in advance!

        1. Dimitris Post author

          Hi Nikolay, and thanks for the kind words.

          Latency is relatively linear up to a point, then it goes up very quickly.

          Why?

          Several reasons. The easiest way to explain it is that not everything in the storage system runs out of steam at the same time.

          There is of course a ton of proof that latency doesn’t increase linearly with IOPS. Look at pretty much every SPC-1 submission and you see the same thing.

          Unless I misunderstood the question.

          Thx

          D

  4. Pingback: FCoE ‘versus’ iSCSI – The Mystery is Solved - WKSB Solutions

  5. Pingback: IO Blazing Datastore Performance with Fusion-io « Long White Virtual Clouds

  6. Pingback: So now it is OK to sell systems using “Raw IOPS”??? | Recovery Monkey

  7. Pingback: Series of Posts Highlight Importance of Evaluating Storage Performance in Context with your Applications « High Performance Solid-State (SSD) Storage Array Blog | The I/O Storm

  8. Pingback: Latency Everywhere | ClusterDesign.org

  9. Pingback: Are you doing a disservice to your company with RFPs? | Recovery Monkey

  10. Some Guy Named Jay

    As a SAN filled with spinning disks fills up with data, won’t the latency increase as the sectors being written are further from the faster outer part of the disk? How does this play into the performance calculation?

    1. Dimitris Post author

      Seek time will increase, yes. But it depends on the storage system being used. For example, NetApp’s Data ONTAP has an option (wafl.optimize_write_once off) that will randomize writes to the disk surface in order to provide geometric fairness for all workloads (the default is on but for most systems I prefer off).

      With other systems, you can’t really control that, so yes, as the system fills up, it will get slower (which is also ONTAP’s default mode of operation).

      NetApp turns the option off for the SPC-1 tests – nobody can accuse us of disk short-stroking :) (well, they can and they do, but they’re easily proven wrong after I point out that the option was off for the test).

      D

      1. Some Guy Named Jay

        Interesting. Have you tested this system with a Flash Pool yet? Will you? What would you expect the results to do?

  11. Gen

    Thanks! This is a great set of knowledge.
    I just don’t understand one thing.
    IOPS is the number of operations done in one second.
    Latency is the time spent doing one operation, isn’t it?
    So let’s say I have 5ms latency. Doesn’t that indicate that in one second I would do 1000ms / 5ms = 200 operations?
    I know that this is not right, because my storage just did 2,984 I/Os in one second at 5ms latency (cache hit+miss). Where are those extra ~2.4k I/Os coming from?! One second is one second, right?

    1. Dimitris Post author

      There is a lot of parallelism in disk arrays, plus a lot of cache-related cleverness, plus other methods for increasing IOPS with low latency. For instance, NetApp systems running ONTAP code will convert random writes to sequential ones, making writes more efficient.

      For a single-threaded workload on a single disk with zero cache, your calculation would be about right.

      1. Gen

        Thanks! Now it all makes sense to me. We have many threads over here (10 hosts + nSeries head), so we must have a lot of parallelism.
        Thanks again!

      2. Peter Jacobs

        That’s only true on a fresh system. Once the system gets past a certain fill percentage, it no longer applies. Once you get above 50%, you’re sure to start seeing latency climb, since it’s harder for the drive head to find an open space.

  12. nm

    Hi Recovery Monkey,
    In the start of your post you mention 15,000 IOPS, then show a screenshot from an AWR report that shows physical reads at 14,676 per second. Correct me if I am wrong, but I’ve always thought that IOPS from an AWR report were calculated using the following statistics:

    physical read total IO requests + physical write total IO requests (assuming 10.2 or above)

    It’s my understanding that the load profile shows physical reads in units of OS blocks? From the reference manual of 11.2:
    .
    “Total number of data blocks read from disk. This value can be greater than the value of “physical reads direct” plus “physical reads cache” as reads into process private buffers also included in this statistic.”
    .
    I’m always confused on this, because I see people using different statistics for it.

    1. Dimitris Post author

      Hi NM,

      Physical reads in AWR is a measure of the number of Oracle blocks read using read(2) system calls.

      For typical random-access workloads each one of those is a read operation. If there are a lot of sequential read requests then many blocks may be read in one read operation.

      So that statistic is an upper bound on the number of read IOPS.

      The physical reads and writes only account for physical I/O related to data blocks, and do not include redo and archiving activity, nor any “non-database” I/O activity like RMAN, LOB, etc…

      Since those physical reads could be of varying sizes, you also have to check the instance stats for “physical read total bytes”. That way you can tie out the read operations per second – just look at the middle column, since that’s the per-second number.

      D

  13. Rod

    Thanks for a most informative and entertaining post. Cutting through the marketing hype and buzz word jungle like a sword of clarity :-)

  14. Niall Byrne

    Great article. Cuts through the BS I read in a lot of technical articles. Explains IOPS and latency beautifully and in a clear manner. Highly recommended.

  15. Pingback: How to decipher EMC’s new VNX pre-announcement and look behind the marketing. | Recovery Monkey

  16. Adrian

    Love this article – well done, and thank you on behalf of many people who wanted to understand the depths of IOPS and latency.

  17. Mahmoud

    Thanks so much for this very useful article, but I wish you had introduced the difference between IOPS and MB/s… is it possible to give a brief explanation, or just recommend another article about it? Thanks again :)

  18. Simon

    WOW! Thank you, Dimitris!

    This is the greatest article I have ever read regarding storage I/O etc.
    As a DBA, I have seen tons of event 833 error messages recorded by MS SQL Server on Windows, and some OCFS2 fencing with 1-minute latency delays on Linux boxes.
    Those problems are apparently caused by disk latency, aren’t they?

  19. Pingback: VDI, storage and the IOPS that come with it. Part 1 & 2. | Bas van Kaam

  20. Pingback: Storage performance, IOPS and latency | rsr72

  21. Sam @ Xcentech

    Thank you for the post. I’m glad that you took the time to break down what can quickly become a daunting subject. I look forward to seeing your other posts once I get the time to read them! :D

    —Sam

  22. Guido

    This makes a lot of sense. I’m not a storage guy, but whenever the storage dudes mentioned IOPs, I always thought that they needed to add more info to this term to be meaningful. It’s kind of like using RPM to measure how fast your car is moving. Well, it depends….

  23. crookedm

    Storage guys quoting IOPS – oh dear; latency is king. Exchange completion times even more so. Sometimes I’m convinced people in application teams think arrays are solely dedicated to their apps, and storage colleagues quote IOPS at them so they don’t blame the storage. Let’s not get started on queue depths either!

  24. Pingback: VDI, storage and the IOPS that come with it. Part 1 & 2.

  25. Sreenath Gupta

    Hello my friend, I think I reached you at last. I am in very big trouble with my new Dell R820 server with Hyper-V 2012 installed on it. I am facing high latency issues with all the VMs as well as with the host machine. For a few hours the server works well, and after that I face the latency issue again; when the latency appears I restart the server and the problem disappears, only to come back after some time. I have raised tickets with Dell and Microsoft for this and neither could solve my issue. We are using the server for virtualization, and the hardware configuration is 64 GB RAM / Xeon quad-core processor, 2 sockets, 32 threads total / 1 x 3 SAS 6Gb 7K RPM. Kindly suggest whether this is due to an HDD issue.

    1. Peter Jacobs

      Absolutely a disk issue. Depending on the RPM of the drive, it would give you 450 IOPS at most.

