This blog is owned and moderated by Dimitris Krekoukias. The purpose of the blog is to provide commentary on data management, storage, performance, backup, recovery and archiving – and beyond.

Any views posted here are strictly my own and not necessarily my employer’s. This blog is paid for and managed by me personally and not sponsored through corporate entities.

What’s more – I will not accept any requests for advertising or money to write “special” posts to promote certain products.

If you want to email me directly my address is dikrek AT gmail. If you want my corporate email, you can probably figure that out, too.


8 Replies to “About”

  1. Hello Dimitris,

Sorry, I didn’t find an email address on your blog, hence this unrelated comment.

    I am a storage blogger attending SNW in San Diego, and I am contacting fellow storage bloggers to ask about their plans for SNW. If you are attending, would you be interested in a gathering of fellow storage bloggers? Any thoughts on such a gathering are most welcome.

    I recently wrote a blog post on this topic
    http://andirog.blogspot.com/2007/04/say-hello-at-snw.html. I would appreciate any help in spreading the word to fellow bloggers and storage professionals.



    1. I will say, since I wrote this post I’ve been running over 20 production VMs on NFS on my NetApps and another 20 on Fibre Channel. A couple of things here: NFS runs great and the deduplication has been very useful. It’s also been nice for running systems off an ESX server that is not within fiber distance of my SAN.

       A couple of cons:

       1. NFS doesn’t transfer as quickly during my non-disruptive filer upgrades. It works; the systems stall for a short period during the cluster failover process and the VMs stay happy, but it does give the system a short window (maybe 10–30 seconds) of downtime. Amazingly, our streaming media VM only stalled on the non-buffered video.

       2. It’s easy to oversubscribe your NFS share if you’re using dedup and you provision too many systems too quickly. We filled up a volume with systems and dedup was saving over 80%, but once changes came to the individual VMs we filled the volume and eventually caused VMs to fail to start. I’d recommend only counting on up to 60% of the dedup savings, as it will eventually catch up with you.
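    The 60% rule of thumb above can be sketched as a quick back-of-the-envelope calculation (a hypothetical helper for illustration, not a NetApp tool):

    ```python
    def safe_provisioned_capacity(volume_size_gb, dedup_savings_ratio, safety_factor=0.6):
        """Estimate how much logical data can safely be placed on a
        deduplicated volume, trusting only a fraction of the observed
        dedup savings (the 60% rule of thumb above)."""
        # dedup_savings_ratio is the observed space saved, e.g. 0.8 for 80%.
        usable_savings = dedup_savings_ratio * safety_factor
        return volume_size_gb / (1.0 - usable_savings)

    # A 1000 GB volume seeing 80% dedup savings: trusting only 60% of
    # that saving allows roughly 1923 GB of logical data, not the
    # 5000 GB the raw 80% figure would suggest.
    ```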

  2. Greetings,

    Nice to see your tech blog. It has lots of interesting things to dive into.

    I am experiencing an issue on NetApp FAS6240 and a guy just told me to ask you about it.

    “NetApp FAS6240 – converting LUNs from Thin to Thick.

    I am converting LUNs from Thin to Thick on my NetApp FAS6240 and the aggregate is now running out of space. There are plenty of LUNs remaining thin-provisioned. What comes to mind is that once those thin-provisioned LUNs start to grow and claim more capacity, they will run out of space too, as will the volume and the aggregate. I am not using snapshots, and I am trying to analyze my options now.

    One thing that comes to mind: if I convert everything to Thick and never use snapshots, is Fractional Reserve = 100% really necessary?

    Any ideas?”


    1. Nuno, check this post: http://recoverymonkey.org/2010/05/07/netapp-usable-space-beyond-the-fud/

      Thin provisioning, like any other space efficiency tool, needs to be used wisely.

      Typically I recommend turning thin provisioning on without overprovisioning, and also turning on auto volume expansion.

      Plus plenty of monitoring.

      This will provide you with elastic storage.

      But making everything thick and then having the system run out of space simply means you overprovisioned.
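      On 7-Mode ONTAP of that era, the combination described above (thin volume, auto-grow, no LUN space reservation) would look roughly like this sketch; the volume name, LUN path, and sizes are placeholders:

      ```shell
      # Thin-provision the volume: no upfront space guarantee
      vol options myvol guarantee none

      # Let the volume grow automatically up to 1200g, in 50g increments
      vol autosize myvol -m 1200g -i 50g on

      # Thin-provision the LUN itself by disabling its space reservation
      lun set reservation /vol/myvol/lun0 disable
      ```

      Monitoring still matters: auto-grow only postpones the problem if the aggregate itself fills up.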


  3. Interesting pages, Dimitris … very good, and they put a smile on my face. I have been an OS / NetBackup specialist for years now, in Athens and for the last 6 years in the UK.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.