Comments for Recovery Monkey http://recoverymonkey.org Musings on backups, storage, tuning and more Thu, 25 Sep 2014 21:45:04 +0000 hourly 1 Comment on When competitors try too hard and miss the point – part two by Dimitris http://recoverymonkey.org/2014/09/18/when-competitors-try-too-hard-and-miss-the-point-part-two/#comment-8085 Dimitris Thu, 25 Sep 2014 21:45:04 +0000 http://recoverymonkey.org/?p=629#comment-8085 Hi Nick,

Actually you can tell how I/O will be split if you are doing I/O to an aggregate with a uniform I/O size.

My point when customers ask me this is:

Who cares exactly how the I/Os are laid down?

We virtualize that part and overall system performance is a function of controller speed, spindle speed and count, and cache size, when faced with a specific blend of I/O.

Then prefetching also plays a role, depending on the workload.

So when we size systems we use an elaborate tool (web-based – called SPM, ask your local SE if you want to see it) that simulates overall I/O – you can be generic and feed it an overall I/O blend (incl random vs sequential, how much of the data is “hot”, I/O sizes) or you can be specific and dedicate I/O profiles to certain pools (maybe have a pool for large block sequential I/O, for example).

If you want to do all your own calculations, it’s far easier to do them for a simpler architecture like NetApp E-Series. That one can be configured with “old school” RAID as well as with the fancy DDP (which makes it harder to figure stuff out – the more intelligent the back end, the harder it is in general to manually calculate stuff for it).

I would typically suggest a couple of different sizings based on I/O profiles (maybe day and/or time-dependent), and pick the one that needs the biggest configuration.

Hope this helps.

Thx

D

]]>
Comment on When competitors try too hard and miss the point – part two by Nick http://recoverymonkey.org/2014/09/18/when-competitors-try-too-hard-and-miss-the-point-part-two/#comment-8060 Nick Wed, 24 Sep 2014 07:11:01 +0000 http://recoverymonkey.org/?p=629#comment-8060 Hi gents!

Another great post, with information which (in my experience) you just cannot get from 90% of sales managers, even at NetApp distributor companies (to say nothing of competitors). And this is pretty sad…

As far as I understand it now, the generally accepted approach (widely used in training courses and vendor-comparison presentations) of a table listing different HDD types (FC, SAS, SATA, SSD) and their respective IOPS numbers is completely unacceptable when we talk about any kind of storage system (even DAS)? There are so many factors that can affect these “per-drive IOPS” numbers, like the mentioned “write tetrises and NVMEM write coalescence” and the different kinds of flash caches, that this line of reasoning for performance calculations is just wrong.
As I can clearly see from this two-part post, NetApp really offers almost the cheapest and highest IOPS number per drive (with its very rich feature set and high-availability options, of course), assuming you have chosen a system adequate for your application and its load profile. It seems to me now that this is one of the most important parts of the sale if the customer really wants to spend his money optimally.

However, there is one question regarding this post that is still unclear to me: do I understand correctly that the only conclusion we can draw from the various real NetApp read-performance examples you mention is that we can *never* say how physical read I/O will be split up or organized by the controller by the time it hits the HDDs, even if we know exactly what block size our application uses? So, when making our case to a customer, we should never descend to such IOPS comparisons, because we simply cannot predict these numbers for real-life random workloads?
One good example from another great colleague of yours (John Martin), whose excellent blog guided me to these thoughts (http://storagewithoutborders.wordpress.com/2010/07/19/data-storage-for-vdi-part-5-raid-dp-wafl-the-ultimate-write-accelerator):
“Suppose, for example, that in writing data to a 32-block long region on a single disk in the RAID group, we find that there are 4 blocks already allocated that cannot be overwritten. First, we read those in, this will likely involve fewer than 4 reads, even if the data is not contiguous. We will issue some smaller number of reads (perhaps only 1) to pick up up the blocks we need and the blocks in between, and then discard the blocks in between (called dummy reads). When we go to write the data back out, we’ll send all 28 (32-4) blocks down as a single write operation, along with a skip-mask that tells the disk which blocks to skip over. Thus we will send at most 5 operations (1 write + 4 reads) to this disk, and perhaps as few as 2. The parity reads will almost certainly combine, as almost any stripe that has an already allocated block will cause us to read parity. So suppose we have to do a write to an area that is 25% allocated. We will write .75 * 14 * 32 blocks, or 336 blocks. The writes will be performed in 16 operations (1 for each data disk, 1 for each parity). On each parity we’ll issue 1 read. There are expected to be 8 blocks read from each disk, but with dummy reads we expect substantial combining, so lets assume we issue 4 reads per disk (which is very conservative). There are 4 * 14 + 2 read operations, or 58 read operations. Thus we expect to write 336 blocks in 58+16= 74 disk operations. “
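Re-running the arithmetic from that quote as a sanity check (the 4-reads-per-disk figure is John's own stated conservative assumption, not something I derived):

```python
# Sanity-checking the numbers in John Martin's example above.
data_disks, parity_disks = 14, 2
region_blocks = 32
allocated = 0.25  # 25% of the region is already allocated

blocks_written = (1 - allocated) * data_disks * region_blocks  # 0.75 * 14 * 32
write_ops = data_disks + parity_disks      # one combined write per disk
read_ops = 4 * data_disks + parity_disks   # 4 combined (dummy) reads per data disk, 1 per parity
total_ops = write_ops + read_ops

print(int(blocks_written), total_ops)  # -> 336 74
```

So 336 blocks land on disk in 74 operations, roughly 4.5 blocks written per physical disk operation, which is the whole point of the write-acceleration argument.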

]]>
Comment on When competitors try too hard and miss the point – part two by AK http://recoverymonkey.org/2014/09/18/when-competitors-try-too-hard-and-miss-the-point-part-two/#comment-8033 AK Mon, 22 Sep 2014 18:11:19 +0000 http://recoverymonkey.org/?p=629#comment-8033 Nice one, D

]]>
Comment on When competitors try too hard and miss the point by phillmoroz http://recoverymonkey.org/2014/09/09/when-competitors-try-too-hard-and-miss-the-point/#comment-7954 phillmoroz Fri, 19 Sep 2014 08:58:04 +0000 http://recoverymonkey.org/?p=611#comment-7954 lol, dongs

]]>
Comment on When competitors try too hard and miss the point by When competitors try too hard and miss the point – part two | Recovery Monkey http://recoverymonkey.org/2014/09/09/when-competitors-try-too-hard-and-miss-the-point/#comment-7937 When competitors try too hard and miss the point – part two | Recovery Monkey Thu, 18 Sep 2014 17:40:03 +0000 http://recoverymonkey.org/?p=611#comment-7937 [...] This will be another FUD-busting post in the two-part series (first part here). [...]

]]>
Comment on Netbackup best practices for ridiculously busy environments (but not exclusively). by Michał http://recoverymonkey.org/2007/05/22/netbackup-best-practices-for-ridiculously-busy-environments-but-not-exclusively/#comment-7639 Michał Wed, 27 Aug 2014 08:23:20 +0000 http://recoverymonkey.net/wordpress/?p=23#comment-7639 Good one :) I feel really well after reading it. Still few things to scare Murphy away.

]]>
Comment on EMC’s VNX2 forklift: The importance of hardware re-use, slowing down obsolescence, and maximizing your investment by Matthew Kurowski http://recoverymonkey.org/2013/09/24/emcs-vnx2-forklift-the-importance-of-hardware-re-use-slowing-down-obsolescence-and-maximizing-your-investment/#comment-7506 Matthew Kurowski Mon, 11 Aug 2014 15:17:26 +0000 http://recoverymonkey.org/?p=604#comment-7506 (Especially since we know each other, I occasionally flag some of your posts for later follow up in case things change and I know your site is a great resource for many. I try not to resurrect posts unless it’s truly pertinent and if the data isn’t broadly known.)

Of course investment protection is a huge thing. On that note, it is worth addressing the scenario you listed regarding the “one month old” purchase. Trust me, I used similar scenarios when portraying risks with any vendor that isn’t NetApp or a ZFS-based open system. I’ve been part of NetApp deals where NetApp probably won just off that point alone!

While NetApp certainly has a longer record of supporting older FAS technology, with the VNX/VNXe at least, EMC has recently delivered a platform showing in earnest that they are building toward the same goal in current generations, with likely continued support in future platform revisions. While the first-gen units only support physical re-use (with the need to re-lay the data after the move), the current generation of hardware and its operating environment supports a more seamless path.

We’ll have to wait and see.

So I wasn’t trying to correct your post as it was written at the time regarding EMC; I was updating the material, since I know you like to have the most current data (competitive or not) and since other people will probably land here researching the topic.

Always the best – cheers!

]]>
Comment on EMC’s VNX2 forklift: The importance of hardware re-use, slowing down obsolescence, and maximizing your investment by Dimitris http://recoverymonkey.org/2013/09/24/emcs-vnx2-forklift-the-importance-of-hardware-re-use-slowing-down-obsolescence-and-maximizing-your-investment/#comment-7488 Dimitris Sun, 10 Aug 2014 02:36:05 +0000 http://recoverymonkey.org/?p=604#comment-7488 Hi Matt,

AFAIK (and I checked), at the time of writing my article was accurate: EMC did not allow re-using DAEs from the VNX with the VNX2.

Now I guess they do per your comment (but you still need to wipe the data).

My point is: EMC (and most other manufacturers) make forklift upgrades a necessity. CX -> VNX – impossible.

At least with the NetApp FAS systems we allowed the old FC shelves to be used with newer controllers and firmware.

Investment protection isn’t a small thing.

Thx

D

]]>
Comment on EMC’s VNX2 forklift: The importance of hardware re-use, slowing down obsolescence, and maximizing your investment by Matthew Kurowski http://recoverymonkey.org/2013/09/24/emcs-vnx2-forklift-the-importance-of-hardware-re-use-slowing-down-obsolescence-and-maximizing-your-investment/#comment-7487 Matthew Kurowski Sat, 09 Aug 2014 17:39:45 +0000 http://recoverymonkey.org/?p=604#comment-7487 Dimitris,

As a brief follow up to this post (for posterity), I confirmed that drives and DAEs from a VNX can be re-used for VNX Rockies. The ported drives may require a firmware update to handle MCR functionality but I haven’t been able to confirm that from engineering/support notes.

Also, while the new VNX line supports the physical removal of an entire RAID Group from the array, with subsequent reinsertion and recognition, that does not apply when porting between VNX generations. That is, the DAEs and drives will work, but the data will be lost. However, once the drives are successfully running under the new MCx, they gain that functionality. I’d like to test that between VNX Rockies systems, but I need two lab systems to do it. If you’re interested, I’ll let you know once I manage that.

Cheers

]]>
Comment on An explanation of IOPS and latency by The IOPS cheat sheet ! http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7481 The IOPS cheat sheet ! Tue, 29 Jul 2014 08:42:15 +0000 http://recoverymonkey.org/?p=488#comment-7481 [...] http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/ [...]

]]>
Comment on An explanation of IOPS and latency by Peter Jacobs http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7472 Peter Jacobs Fri, 11 Jul 2014 03:14:35 +0000 http://recoverymonkey.org/?p=488#comment-7472 I missed the RPM. I would give you 210 IOPS at most :-) .

]]>
Comment on An explanation of IOPS and latency by Peter Jacobs http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7471 Peter Jacobs Fri, 11 Jul 2014 03:13:34 +0000 http://recoverymonkey.org/?p=488#comment-7471 Absolutely a Disk issue. Depending on the RPM of the drive, At most it would give you 450 IOPS at most.

]]>
Comment on An explanation of IOPS and latency by Peter Jacobs http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7470 Peter Jacobs Fri, 11 Jul 2014 02:53:38 +0000 http://recoverymonkey.org/?p=488#comment-7470 That’s only true on a fresh system. Once the system gets past a certain percentage it no longer does applies. Once you get above 50% you’re sure to start see latency climb up since it’s harder for the head of the drive to find an open space.

]]>
Comment on An explanation of IOPS and latency by Sreenath Gupta http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7468 Sreenath Gupta Wed, 09 Jul 2014 10:55:25 +0000 http://recoverymonkey.org/?p=488#comment-7468 Hello My friend, i think i reached you atlast, i am in very big trouble with my new Dell R820 server with Hyper-v 2012 installed on it. I am facing high latency issues with all the vm’s as well with the host machine, for few hours server works good and after that i am facing the latency issue again and at the time of latency, i am restarting the server and problem will disappear, and again it come back after some time. I have raised ticket with Dell and Microsoft for the same and both of them could not solve my issue. We are using the server for virtuliazation and the hardware configuration is 64 GB RAM/xeon processor quad core 2 sockets total 32 threads/1 x 3 SAS 6 gb 7k RPM. Kindly suggest me if is this due to the HDD issue.

]]>
Comment on An explanation of IOPS and latency by VDI, storage and the IOPS that come with it. Part 1 & 2. http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7467 VDI, storage and the IOPS that come with it. Part 1 & 2. Tue, 08 Jul 2014 12:40:47 +0000 http://recoverymonkey.org/?p=488#comment-7467 [...] comes with it (how long does it take for a single I/O request to get processed). As stated on this Blog (make sure to check it out, take your time and read it thoroughly, it’s one of the best storage [...]

]]>
Comment on An explanation of IOPS and latency by crookedm http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7459 crookedm Thu, 05 Jun 2014 21:23:24 +0000 http://recoverymonkey.org/?p=488#comment-7459 Storage guys quoting IOPS oh dear, latency is king. Exchange completion times even more so. Sometimes I’m convinced people in application teams think arrays are solely dedicated to their apps and storage colleagues tell them IOPS to make them not blame the storage. Let’s not get started on queue depths either!

]]>
Comment on Massive benchmark comparison between Windows XP, Vista and 2008 Server, 32- and 64-bit by Klaus http://recoverymonkey.org/2008/06/16/massive-benchmark-comparison-between-windows-xp-vista-and-2008-server-32-and-64-bit/#comment-7458 Klaus Wed, 28 May 2014 02:41:00 +0000 http://recoverymonkey.net/wordpress/?p=72#comment-7458 You can definitely seе your enthusiasm іn the woirk you write.
Thee sector hopes foг morе passionate writers sսch as yoս whօ аre not afraid
too mention how theʏ believe. All the time follow ƴour heart.

]]>
Comment on Are you doing a disservice to your company with RFPs? by Rick http://recoverymonkey.org/2012/12/21/are-you-doing-a-disservice-to-your-company-with-rfps/#comment-7457 Rick Fri, 16 May 2014 19:46:27 +0000 http://recoverymonkey.org/?p=568#comment-7457 Haven’t seen a blog lately? All’s well?

]]>
Comment on An explanation of IOPS and latency by Guido http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/#comment-7455 Guido Fri, 09 May 2014 21:10:07 +0000 http://recoverymonkey.org/?p=488#comment-7455 This makes a lot of sense. I’m not a storage guy, but whenever the storage dudes mentioned IOPs, I always thought that they needed to add more info to this term to be meaningful. It’s kind of like using RPM to measure how fast your car is moving. Well, it depends….

]]>