ZFS in OSX

Not amazing news, but an official announcement nonetheless: I saw this (www.macnn.com/articles/07/06/06/zfs.in.leopard/) and couldn’t resist posting. This means a few things:

  1. Sun figured out how to make ZFS bootable (at least on OSX)
  2. Someone figured out how to deal with ZFS and resource forks (I can’t believe they are willing to break compatibility with so much software otherwise).

Now I just need a Mac so I can run some benchmarks before and after. I have some buddies who might oblige… finally, the Macs get a decent FS.

Now, if only Apple could lose the silly Mach legacy. It’s a common misconception that the kernel in OSX is FreeBSD – it ain’t. Run lmbench (www.bitmover.com/lmbench/) on different platforms and compare results such as context switching, thread creation and whatnot. Then you’ll see why OSX doesn’t always make a decent server OS.
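
If you can’t be bothered to build lmbench, here’s a crude stand-in I can sketch in Python (my own toy, not lmbench): it forks two processes that ping-pong a byte over a pair of pipes, so the round-trip time is dominated by context switches – run it on OSX and on Linux and compare.

import os, time

ROUNDS = 20000
# Two pipes; parent and child alternately block on a read, so every
# round trip forces at least two context switches (Unix-only fork).
r1, w1 = os.pipe()
r2, w2 = os.pipe()
pid = os.fork()
if pid == 0:                        # child: echo every byte back
    for _ in range(ROUNDS):
        os.read(r1, 1)
        os.write(w2, b"x")
    os._exit(0)
start = time.perf_counter()
for _ in range(ROUNDS):             # parent: send, then wait for the echo
    os.write(w1, b"x")
    os.read(r2, 1)
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)
print(f"~{elapsed / (2 * ROUNDS) * 1e6:.1f} microseconds per context switch")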

D

Data Domain Update

I’m not known for retractions, and I’m not posting one. I did, however, check out the new DD boxes, and the really big ones are far more capable than the old ones.

So, the techies (hats off to them for enduring half an hour with me) explained a few things to me:

  1. The smallest block is 4K
  2. The highest possible performance for the biggest box is 200MB/s
  3. The biggest box can do a bit over 30TB raw
  4. They scrub the disk continuously so it’s effectively defragged (see below for a caveat) – they did admit performance totally sucks over time if you don’t do it (finally vindicated!)

This is good news, since it’s obviously far bigger than the old ones.

Some issues though (based on what the techies told me):

  1. It scrubs the disk by virtue of NBU deleting the old images; only then does it know what to get rid of. If your retentions are long, you will have performance problems. They suggested just dumping it all to tape and starting afresh once in a while, which just confirms my suspicions about how the stuff truly works.
  2. Each “controller” is really a separate box. The 16-controller limit does not mean it’s a larger appliance; it’s a limit of the management software.
  3. Ergo, each controller can be a separate VTL or a separate NFS mount. You cannot aggregate all your controllers into one large VTL. This sucks: if you need to do backups at 1GB/s or so, you’ll need at least 5-6 boxes, and you will have to define a separate library and drives per box (see the quick sizing sketch after this list). If you do NFS, you need to define 1-2 shares per box. This is a management nightmare. Make it all a single library! Copan has the same issue; I don’t know how they could do it, though, given their architecture.
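
Just to put numbers on that complaint, here’s the sizing math (the 200MB/s per-box figure is from the list above; the rest is simple arithmetic):

# Back-of-the-envelope sizing using the per-box numbers quoted above.
target_mb_s  = 1024        # roughly 1GB/s of backup throughput
per_box_mb_s = 200         # best case for the biggest box
boxes = -(-target_mb_s // per_box_mb_s)          # ceiling division -> 6
print(f"boxes needed:                {boxes}")
print(f"separate VTLs/libraries:     {boxes}")   # one library per box
print(f"or NFS shares (1-2 per box): {boxes}-{2 * boxes}")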

So, it looks to me like it may be a fit for some people, though I have no idea about the price points. If you want performance, you’ll need a ton of the boxes, and you’ll need to spend time configuring them. If 10 maxed-out boxes cost the same as (or, worse, more than) a big EMC DL4400 (which can do 2.2GB/s), then it’s not an easy sell. Especially since EMC will be adding dedupe to their VTL – plus, you won’t have to define a bunch of separate libraries. Will EMC’s dedupe be similar? No idea, but if it doesn’t impact performance then it’s pretty compelling.

Thoughts? You know the drill.

D

At EMC World

Currently attending EMC World. The first day bored me to tears; I hope the rest will be more exciting (though that utterly depends on the presenters). Some of the material is too introductory – even the advanced sessions aren’t that advanced.

More to follow.

D

Another Windows tuning I forgot to mention

I use my laptop so much that I sometimes forget about some server-type tunings.

I resuscitated my hot-rod AMD box – it’s a grossly overclocked monster but only has 1GB RAM (it’s hard to find that kind of fast RAM in bigger sizes, and using 4 sticks prevents me from overclocking it as much). Let’s just say the CPU is running a full GHz faster than stock, and on air, not water or Peltier coolers.

Anyway, since it only has 1GB RAM and I use it for Photoshop and games, I can’t really use something like Supercache or Uptempo on it.

So I tried O&O Software’s Clevercache. It’s not nearly as good as the other two products; however, it does a decent job of automatically managing the cache so you always have enough free RAM.

Then I tried the DisablePagingExecutive registry tweak – not that obscure, tons of references around.

BTW, there is a way to stop postmark from using caching – the command is “set buffering false”. However, I want to see the benchmark run on a system as it would run normally, not measure the raw speed of my disks. Nobody cares about that anyway, especially in the big leagues (unless the config is truly moronic, of course). Cache is everything. But I digress.

So – postmark once more.

Stock:

Time:
177 seconds total
144 seconds of transactions (138 per second)

Files:
20092 created (113 per second)
Creation alone: 10000 files (333 per second)
Mixed with transactions: 10092 files (70 per second)
9935 read (68 per second)
10064 appended (69 per second)
20092 deleted (113 per second)
Deletion alone: 10184 files (3394 per second)
Mixed with transactions: 9908 files (68 per second)

Data:
548.25 megabytes read (3.10 megabytes per second)
1158.00 megabytes written (6.54 megabytes per second)

After tuning as a server, with the background-process, large-system-cache and fsutil settings described previously:

Time:
107 seconds total
85 seconds of transactions (235 per second)

Files:
20092 created (187 per second)
Creation alone: 10000 files (526 per second)
Mixed with transactions: 10092 files (118 per second)
9935 read (116 per second)
10064 appended (118 per second)
20092 deleted (187 per second)
Deletion alone: 10184 files (3394 per second)
Mixed with transactions: 9908 files (116 per second)

Data:
548.25 megabytes read (5.12 megabytes per second)
1158.00 megabytes written (10.82 megabytes per second)

With Clevercache:

Time:
97 seconds total
71 seconds of transactions (281 per second)

Files:
20092 created (207 per second)
Creation alone: 10000 files (454 per second)
Mixed with transactions: 10092 files (142 per second)
9935 read (139 per second)
10064 appended (141 per second)
20092 deleted (207 per second)
Deletion alone: 10184 files (2546 per second)
Mixed with transactions: 9908 files (139 per second)

Data:
548.25 megabytes read (5.65 megabytes per second)
1158.00 megabytes written (11.94 megabytes per second)

Hell, I guess I might get Clevercache for this system – it sped things up a bit and keeps memory consumption under control.

But look at this:

All the above plus using the DisablePagingExecutive registry tweak: BOOYA!

Time:
45 seconds total
28 seconds of transactions (714 per second)

Files:
20092 created (446 per second)
Creation alone: 10000 files (1111 per second)
Mixed with transactions: 10092 files (360 per second)
9935 read (354 per second)
10064 appended (359 per second)
20092 deleted (446 per second)
Deletion alone: 10184 files (1273 per second)
Mixed with transactions: 9908 files (353 per second)

Data:
548.25 megabytes read (12.18 megabytes per second)
1158.00 megabytes written (25.73 megabytes per second)
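
For the record, the cumulative speedup from the total times above (nothing fancy, just the arithmetic):

# Total postmark wall-clock times from the four runs above, in seconds.
runs = [("stock", 177), ("server tuning + fsutil", 107),
        ("+ Clevercache", 97), ("+ DisablePagingExecutive", 45)]
base = runs[0][1]
for name, secs in runs:
    print(f"{name:26} {secs:4d}s  {base / secs:.1f}x vs stock")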

I guess the box is staying this way.

More info on the registry tweak:

http://technet2.microsoft.com/windowsserver/en/library/3d3b3c16-c901-46de-8485-166a819af3ad1033.mspx?mfr=true

In a nutshell, it disables the paging of kernel and driver code, so it’s always memory-resident. Makes sense in some cases, as you can see above 🙂
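
If you’d rather script it than poke around regedit, something like this should do it (a sketch assuming Python’s standard winreg module and an elevated prompt; you still need to reboot afterwards):

import winreg
# Keep kernel and driver code memory-resident instead of pageable.
# Heed the warning below: don't do this on a machine that needs to suspend.
KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "DisablePagingExecutive", 0, winreg.REG_DWORD, 1)
print("DisablePagingExecutive set to 1 - reboot for it to take effect")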

It’s unusual that it gave me that much of a boost, though. I’d tried it a long time ago and it wasn’t nearly as dramatic, but that was on a much older system.

One could argue that postmark lied, but going by a stopwatch and just eyeballing the sucker, it was way quicker doing the transactions.

On servers I just didn’t normally set it because I figured they had enough RAM. Maybe I should start doing it on boxes that do a lot of transactional I/O. Damn, I need to try this with Supercache.

Obviously, your mileage may vary.

WARNING: DO NOT DO THIS ON ANY MACHINE THAT NEEDS TO SUSPEND!!!

Which is why I just didn’t do it on the laptop.

D

On deduplication and Data Domain appliances

One subject I keep hearing about is deduplication. The idea is that you save a ton of space, since a lot of your computers have identical data.

One way to do it is with an appliance-based solution such as Data Domain. Effectively, they put a little server and a cheap-but-not-cheerful, non-expandable 6TB RAID together, then charge a lot for it, claiming it can hold 90TB or whatever. Use many of them to scale.

The technology chops up incoming files into pieces. Then, the server calculates a unique numeric ID for each piece using a hash algorithm.

The ID is then associated with the block and both are stored.

If the ID of another block matches one already stored, the new block is NOT stored, but its ID is, as is the association with the rest of the blocks in the file (so that deleting one file won’t adversely affect blocks it has in common with other files).

This is what allows dedup technologies to store a lot of data.
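
In rough Python terms, the write path looks something like this (a sketch assuming fixed 4K blocks and SHA-1 – Data Domain’s real chunking and hash choice are theirs, not mine):

import hashlib

BLOCK = 4096
store = {}          # block ID -> block data, stored exactly once
file_index = {}     # file name -> ordered list of block IDs

def ingest(name, data):
    ids = []
    for off in range(0, len(data), BLOCK):
        chunk = data[off:off + BLOCK]
        block_id = hashlib.sha1(chunk).hexdigest()
        if block_id not in store:    # new, never-seen block: store it
            store[block_id] = chunk
        ids.append(block_id)         # a duplicate only costs an ID reference
    file_index[name] = ids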

Now, here’s why the amount you can store depends on your data:

If you’re backing up many different unique files (like images), there will be almost no similarity, so everything will be backed up.
If you’re backing up 1000 identical Windows servers (including the Windows directory), there WILL be a lot of similarity, and great efficiency.

Now the drawbacks (and why I never bought it):

The thing relies on a weak server and a small database. As you’re backing up more and more, there will be millions (maybe billions) of IDs in the database (remember, a single file may have multiple IDs).

Imagine you have 2 billion entries.

Imagine you’re trying to back up someone’s 1GB PST, or another large file that stays mostly the same over time (the ideal dedup scenario). The file gets chopped up into, say, 100 blocks.

Each block has its ID calculated (CPU-intensive).

Then, EACH ID has to be checked against the ENTIRE database to determine whether there’s a match.

This can take a while, depending on what search/sort/store algorithms they use.
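
Some back-of-the-envelope numbers show why this hurts on a smallish server (the entry size and index fanout are my assumptions, not anything Data Domain told me):

import math
entries  = 2_000_000_000                 # block IDs already in the database
id_bytes = 20 + 8                        # 20-byte hash plus a pointer, per entry
index_gb = entries * id_bytes / 2**30
depth    = math.ceil(math.log(entries, 100))   # rough B-tree depth at fanout 100
print(f"index size:            ~{index_gb:.0f} GB (not fitting in that box's RAM)")
print(f"disk seeks per lookup: ~{depth} (each costing milliseconds on plain SATA)")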

I asked Data Domain about this and all they kept telling me was “try it, we can’t predict your performance”. I asked them whether they had even tested the box to see what its limits were, and they hadn’t. Hmmm.

I did find out that, at best, the thing works at 50MB/s (slower than an LTO3 tape drive), unless you use tons of them.

Now, imagine you’re trying to RECOVER your 1GB PST.

Say you try to recover from a “full” backup on the Data Domain box, but that file has been living on it for a year, with new blocks being added to it all along.

When you request the file, the Data Domain box has to synthesize it (remember, even the “full” doesn’t include the whole file). It will read the IDs needed to recreate it and put the blocks back together so it can present the final file as it should look.

This is CPU- and disk-intensive. Takes a while.
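
Continuing the little sketch from above, the read path would look something like this – every single ID is another index lookup and, with a cold cache, another disk seek, since the blocks can be scattered all over the array:

def restore(name, store, file_index):
    # Synthesize the file: walk its ID list and fetch every block in order.
    return b"".join(store[block_id] for block_id in file_index[name])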

The whole point of doing backups to disk is to back up and restore faster and more reliably. If you’re slowing things down in order to compress your disk as much as possible, you’re doing yourself a disservice.

Don’t get me wrong, dedup tech has its place, but I just don’t like the appliance model, for performance and scalability reasons.

EMC just purchased Avamar, a dedup company that does the exact same thing but lets you install the software on whatever you want.

There are also Asigra and EVault, both great backup/dedup products that can be installed on ANY server and work with ANY disk, not just the el cheapo quasi-JBOD Data Domain sells.

So, you can leverage your investment in disk and load the software onto a beefy box that will actually work properly.

Another tack would be to use virtual tape – it doesn’t do dedup yet, but it will (EMC bought Avamar, and ADIC – now Quantum – also acquired a dedup company and will put the stuff in their VTL), so you’ll get the best of both worlds; in the meantime it does compression just like real tape.

Plus, even the cheapest EMC virtual tape box works at over 300MB/s.

I sort of detest the “drop it at the customer site” model Data Domain (and a bunch of the smaller storage vendors) use. They expect you to put the box in, figuring that if it works OK it’s easier to keep it than to send it back.

Most people will keep the first thing they try (unless it fails horrifically), since they don’t want to go through the trouble of testing 5 different products (unless we’re talking about huge companies that have dedicated testing staff).

Let me know what you think…

D