My opinion on the Sun/NetApp altercation: Both companies should be grateful instead of resorting to lawsuits

Since opinions are like you-know-what, and since I’m decidedly anatomically complete in that respect (some, indeed, claim all of me is composed of the implied anatomical part, so maybe that’s why I’m so opinionated), I thought I’d throw my $0.02 in the pot and not stay silent. The whole issue irks me quite a bit, actually.

Like my colleague Rich, and I think most digerati (there’s a nice word whose time came and went, it seems), I have been following the machismo display between Sun and NetApp (see some representative comments from both sides here and here). BTW, I doubt anything will really happen with the lawsuits, and I highly doubt that money will even change hands out of court to settle this. This is more about chest-thumping than anything else. But, in a nutshell, it seems it all started with NetApp wanting to buy some STK patents (from before the STK acquisition), Sun not wanting to sell but instead asking for $36m to license the patents, NetApp getting upset and telling Sun that ZFS infringes their WAFL patent, and Sun then telling NetApp to stop selling filers. Those guys are all nuts. I may be missing some facts (NetApp is super-cagey about what STK stuff they wanted) but they are all still nuts.

It seems people will try to patent anything these days. But going after people you think infringed your patents can be pathetic if your story is not airtight and your goals aren’t noble – remember SCO?

I do believe in protecting one’s IP in some way – whether the best way is a patent I’m not so sure, there’s always copyright. I’m not as naïve as some open source zealots that think all patents are evil and that all software should be free. I wonder where they work and how they all make their living? Do those guys all work in places that only do open source and just give away stuff? If I develop a piece of truly cool IP that can result in me making money, rest assured I’ll try to capitalize on it.

However, I do believe that the current patent system is flawed. It’s also difficult (I think impossible) to find people technically competent enough to oversee the process. For instance (and, to cut to the chase), I would have denied NetApp the WAFL patent, since

  1. It’s a simple evolution and/or modification of existing block allocation schemes to facilitate writes (more technical info later on)
  2. There were other COW (Copy On Write) filesystems prior to NetApp’s, such as LFS and numerous research projects (the basic COW idea is sketched right after this list). Specifically,
  3. Daniel Phillips had done most of the COW work prior to NetApp’s patent, but had to abandon work on the tux2 filesystem for fear of patent lawsuits (see here). He didn’t file a patent first, since nobody who does open source development is thus inclined.
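
To make the COW point a bit more concrete, here’s a minimal sketch of the general copy-on-write idea (purely illustrative, with made-up names; it resembles neither WAFL’s nor ZFS’s actual code): instead of overwriting a block in place, you write the new data to a fresh block and repoint the file’s block map, so old versions survive for free.

```python
# Minimal copy-on-write block store (illustrative only; nothing like WAFL's or ZFS's real code).
class CowStore:
    def __init__(self):
        self.blocks = {}     # physical block number -> data
        self.next_free = 0   # naive "write anywhere" allocator: always picks a fresh block

    def allocate(self, data):
        blk = self.next_free
        self.next_free += 1
        self.blocks[blk] = data
        return blk

    def update(self, block_map, logical_blk, new_data):
        # Never overwrite in place: write the new data to a fresh physical block
        # and repoint the file's block map. The old block stays intact, which is
        # what makes cheap snapshots possible (a snapshot just keeps the old map).
        new_map = dict(block_map)
        new_map[logical_blk] = self.allocate(new_data)
        return new_map

store = CowStore()
v1 = {0: store.allocate(b"hello")}        # file version 1
v2 = store.update(v1, 0, b"world")        # file version 2; v1 is still readable
assert store.blocks[v1[0]] == b"hello" and store.blocks[v2[0]] == b"world"
```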

     

But where do you draw the line on what’s truly new and patentable? And what if enforcing a patent is detrimental to the common good? Should Xerox have patented the mouse? It was totally new back then. What if they’d enforced the patent and told Apple, and later Microsoft, that they were not allowed, no matter what, to use a mouse? Or if H.G. Wells had patented the science fiction novel? If Hoover had patented the vacuum cleaner? If RCA had patented the television? You get my drift. There would be zero innovation.

I think patenting obvious stuff should just not be allowed. And, if your patent is based on prior art (regardless of whether that art was ever patented), it should be summarily denied. If the patent is granted but it’s then proven after the fact that someone else had figured out the idea first (as in the case of Mr. Phillips), the patent should automatically be invalidated. Complex, no?

Which is why many think that patenting software should not be allowed.

In the end, some problems have only a finite number of solutions (often just one). Multiple researchers may be working on the problem simultaneously, but only one of them will get to a solution first. I am opposed to penalizing the other guy simply because he used a similar algorithm to mine (especially when, mathematically, there may be no other solution, so every approach to the problem produces the same result).

Back to Sun and NetApp. The truth is, I think, pretty simple. While I have enormous respect for both companies (a bit more for Sun, due to their history and my extensive personal experiences), both companies’ major products are based on a tremendous amount of prior art (patented or not, nobody seems to have complained to either company). Truly, they stand on the shoulders of proverbial IT giants. Sun has the PR benefit of having contributed vast amounts of IP to the world, compared to NetApp (though some technologies like NFS and Java have been pretty painful, so it’s a mixed blessing).

NetApp code heavily borrows from Unix, Sun, IBM, Cisco, EMC and many others. For instance, since Data ONTAP (NetApp’s OS) can’t scale beyond 2 boxes, NetApp purchased Spinnaker – SpinOS creates a single namespace that can span many nodes (BTW, other products such as IBRIX, Exanet and others can do the same thing really well). The current GX OS is bits of the older ONTAP on top of FreeBSD, with some SpinOS bits. However, both the older 7G and the newer GX OSes are offered, since 7G does a lot more (SpinOS can only do large-scale NAS – no iSCSI or FC block device targets, even if those targets on a 7G box are just files, but I digress). Of course NetApp wants to move everyone to SpinOS, which explains NetApp’s current craze with NFS everywhere. It’s infectious: now, all of a sudden, everyone once again wants to use NFS – VMware, Oracle, senile grannies running compute clusters all over the world. We get it, it’s a shared-namespace, network-based FS, and sure, you can run pretty much anything on it. People have been doing just that for decades. How quickly we forget that it really isn’t the best network-based filesystem, and that there was a reason people developed cool alternative technologies such as AFS, Coda, PVFS, the native IBRIX mode, and many others. The new CIFS that’s part of Windows Server 2008 is actually a really decent implementation, but I’ll probably get flamed by the NFS fanbois for saying so.

And how quickly people forget that it was Sun that gave us NFS, warts and all (well, v4.1 ain’t too bad, but that’s a collective effort – the wonders of open source). The rather execrable CIFS (the other main NetApp “technology”), BTW, was not invented by Microsoft but by IBM, back in 1983. IBM and Cisco invented iSCSI. Legato (now owned by EMC) played a fundamental role in developing NDMP. And I can’t even remember who first created versioning filesystems, but I fondly remember my VAXes and they used to do that stuff ages before NetApp even existed (not to mention proper manly-man single-system-image clustering, but that’s a story for another day). I’m pretty sure NetApp didn’t develop Fibre Channel, either.

Cut to today: now everyone can do snapshots, it’s almost de rigueur, and the truly cool do application-aware snaps.

Volume management is standard, too.

Filesystem expansion is everywhere.

Thin provisioning (not a fan but anyway) is becoming more and more prevalent.

iSCSI is everywhere.

So, the real ZFS issues NetApp is complaining about seem to be the “Write Anywhere” and COW parts, since those are really the only true similarities with WAFL. Seriously, as if that’s the most important aspect of Sun’s ZFS. Indeed, while very quick for initial writes, a write-anywhere algorithm can lead to horrific fragmentation and continuously declining performance over time (which is why you have to defrag NetApp filers). It’s just a safe, easy and computationally cheap method for allocating blocks to minimize write time for write-heavy applications such as NFS. Possibly one of the reasons NetApp did it this way is that their boxes have no RAID controllers; there’s just a CPU or two (486s, I believe, in the original boxes) that has to do EVERYTHING – RAID calcs, rebuilds, snaps, caching, etc. (the back end of all NetApp gear is JBOD). With WAFL, a lot of the inefficiencies of RAID are bypassed, since it schedules multiple writes to fill a whole RAID stripe. A more elegant approach such as extent-based allocation (like VxFS) would have been too computationally intensive, especially for writes. Dave and his pals have a good paper on WAFL here, BTW.
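
As a toy illustration of the stripe-filling trick (a sketch of the general technique only, not NetApp’s actual scheduler, and with an assumed 4+1 RAID geometry): if you buffer enough dirty blocks to fill a whole stripe, parity can be computed from the new data alone, and the read-modify-write penalty of partial-stripe updates goes away.

```python
# Toy full-stripe write batching (general technique only; not NetApp's scheduler).
from functools import reduce

DATA_DISKS = 4      # assumed geometry: 4 data blocks + 1 parity block per stripe
BLOCK = 4096
pending = []        # dirty blocks waiting to be flushed

def xor_parity(blocks):
    # Parity computed purely from the buffered new data: no old data or old
    # parity needs to be read back from disk.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def flush_stripe(stripe, parity):
    print(f"full-stripe write: {len(stripe)} data blocks + 1 parity block")

def write(block):
    pending.append(block)
    if len(pending) == DATA_DISKS:               # a full stripe's worth is ready
        stripe, parity = pending[:], xor_parity(pending)
        pending.clear()
        flush_stripe(stripe, parity)             # one sequential full-stripe I/O

for i in range(8):
    write(bytes([i]) * BLOCK)                    # 8 dirty blocks -> 2 full-stripe writes
```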

Here’s what ZFS is: it was not meant to be a NetApp killer, it’s just a truly modern FS, with few limits, and an amalgam of all the current “cool” technologies and ideas. Snaps, thin provisioning, expansion, volume management, pools, quotas, self-healing, all in a single technology that’s surprisingly well thought out, and easy to use even from the command line. ZFS is not the raison d’être of the Solaris OS, but merely a feature of it. Plus it does data checksumming with every write, which most other filesystems don’t. Your data is exceptionally safe in ZFS. Some test results here. More features here, and it’s easy to see NetApp getting annoyed after reading that page (though the overlap is really just that both think COW is a good idea; the other tremendous features are not in NetApp’s WAFL). Not sure if they fixed the read performance issues NetApp has with their implementation; I need to do some testing of my own.
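
On the checksumming point, here’s a hedged sketch of the end-to-end idea only (ZFS actually keeps the checksum in the parent block pointer and offers several hash choices; this toy just shows why a checksum stored apart from the data catches silent corruption):

```python
# Toy checksum-on-write / verify-on-read store (conceptual; not ZFS's actual layout).
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.data = {}   # block id -> payload
        self.sums = {}   # block id -> checksum, kept separately from the payload

    def write(self, blk, payload):
        self.data[blk] = payload
        self.sums[blk] = hashlib.sha256(payload).digest()

    def read(self, blk):
        payload = self.data[blk]
        if hashlib.sha256(payload).digest() != self.sums[blk]:
            raise IOError(f"silent corruption detected in block {blk}")
        return payload

s = ChecksummedStore()
s.write(7, b"important bits")
s.data[7] = b"important bitz"   # simulate bit rot on the platter
try:
    s.read(7)
except IOError as e:
    print(e)                    # the bad read is caught instead of silently returned
```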

In my opinion, the only reason NetApp became popular is that it trivialized the whole NAS aspect. It made it easy to build decent, clustered NFS/CIFS boxes without the need to know UNIX. If Sun had put a wizard-driven GUI for such tasks in their boxes 10 years ago, NetApp might not exist today. To date, I think Sun’s management tools are pathetic, no matter how amazingly solid the underlying tech might be. There’s a GUI for ZFS but, again, that’s beside the point. Aside from initial write performance, a NetApp filer is not about WAFL, extending disk pools and whatnot; it’s about all-around ease of use and the sheer number of cool features.

If NetApp wants to sue someone so badly, maybe they need to sue the Openfiler or FreeNAS developers? Or, if they want to go after someone that’s not open source, how about Open-E? That stuff sure looks much more similar to NetApp than anything made by Sun. Really cool, too. Or maybe they need to sue EMC. Those guys sure make some nice, full-featured NAS gear. Among a myriad other solutions…

Suing someone over a filesystem that’s newer and better in almost every single way than yours but uses one common (and unavoidable in the case of COW) design methodology is just plain silly… and, BTW, how did this escape the patent trolls? Another COW implementation?

And if more developers like Daniel Phillips get scared because of patent laws, then innovation will truly be stifled. The whole point of research is that you can reference other people’s ideas so you don’t always have to re-invent the wheel.

NetApp needs to innovate a bit more themselves. They developed a cool technology and have milked it to death, and even made it do things it shouldn’t (like iSCSI and FC targets; the NetApp approach is really unclean, but they’re trying to force their OS to do everything, whereas companies like EMC go for a more modular approach and get criticized for being “complex”).

I think I’ll stop writing now since it’s getting late. Never was one to save posts for editing later.

D

This has been one of the worst trips ever – because of one of the silliest DR exercises ever

Well, aside from visiting Flames and helping fix a severe customer problem. Those were rewarding. I still haven’t pooped that steak, BTW.

I was supposed to only stay for 1 day in Manhattan, fix the issue, ba da bing. I ended up staying an extra day – had no extra clothes and no time to get anything. Washed my undies on my own and used the hair dryer over a period of hours to dry them. I learned my lesson now and will always have extra stuff with me.

So I try to go back home today and guess what – Air Traffic Control computers had a major glitch (abcnews.go.com/Business/wireStory?id=3259992) that messed up the whole country’s air travel. Thousands of flights delayed and canceled. Mine was canceled, after I spent about 10 hours in the airport. Another 2 hours in the line to simply rebook the flight since they had 3 people trying to serve hordes. And all because, at least according to the report, a system failed and the failover system didn’t have the capacity to sustain the whole load.

So, while I wait in the airport to catch a stand-by flight tomorrow morning, unbathed and frankly looking a bit menacing, I decided to vent a bit. No hotels, no cars.

Maybe this is too much conjecture and if I’m wrong please enlighten me, but let’s enumerate some of the things wrong with this picture:

  1. First things first: While it’s cool to fail over to a completely separate location, typically you want a robust local cluster first so you can fail over to another system in the original location.
  2. If the original location is SO screwed up (meaning that a local cluster has failed, which typically means something really ominous for most places) ONLY THEN do you fail over to another facility altogether.
  3. Last but not least: whatever facility you fail over to has to have enough capacity (demonstrated during tests) to sustain enough load to let operations proceed. Ideally, for critical systems, the loss of any one site should hardly be noticeable. (A toy sketch of this decision order follows right after this list.)
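
To put the three rules in one place, here’s a back-of-the-napkin sketch of the decision order I’m describing (hypothetical names and numbers; obviously not anyone’s real DR runbook):

```python
# Hypothetical failover decision order (illustrative; not anyone's real DR runbook).
def pick_failover_target(local_nodes, remote_sites, required_capacity):
    # Rule 1: prefer a healthy node in the same facility.
    for node in local_nodes:
        if node["healthy"]:
            return node["name"]
    # Rule 2: only when the whole local cluster is gone do you leave the building.
    # Rule 3: and only for a site that has demonstrated, in testing, that it can
    # actually carry the load.
    for site in remote_sites:
        if site["healthy"] and site["tested_capacity"] >= required_capacity:
            return site["name"]
    raise RuntimeError("no viable failover target; don't cascade the failure")

local = [{"name": "primary-A", "healthy": False}, {"name": "primary-B", "healthy": False}]
remote = [{"name": "dr-site", "healthy": True, "tested_capacity": 60}]
try:
    print(pick_failover_target(local, remote, required_capacity=100))
except RuntimeError as e:
    print(e)   # the DR site is too small; better to find that out in a test, not for real
```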

According to the report none of the aforementioned simple rules were followed. Someone made the decision to fail over to another facility, which promptly caved under the load. A cascade effect ensued.

I mean, seriously: One of the most important computer systems in the country does not have a well-thought-out and -tested DR implementation. Guys, those are rookie mistakes. Like some airports having 1 link to the outside world, or 2 links but with the same provider. Use some common sense!

So, I guess I’ll put that in the list together with using what’s tantamount to unskilled labor to secure our airports, instead of highly trained and well-paid personnel who have been screened extremely thoroughly and actually take pride in the job. Maybe some of those unskilled people are running the computers; it might be like the Clone Army in Star Wars. A mass of cheap, expendable labor that collectively has the IQ of my left nut (I’m not being overly harsh – my left nut is quite formidable). The armed forces heading the same way isn’t the most reassuring thought, either.

Yes, I’m upset!!!


D

Data Domain Update

I’m not known for retractions and I’m not posting one. I did, however, check out the new DD boxes, and the really big ones are far more capable than the old ones.

So, the techies (hats off for enduring a half hour with me) explained to me a few things:

  1. The smallest block is 4K
  2. The highest possible performance for the biggest box is 200MB/s
  3. The biggest box can do a bit over 30TB raw
  4. They scrub the disk continuously so it’s effectively defragged (see below for caveat) – they did admit performance totally sucks over time if you don’t do it (finally vindicated!)

This is good news, since it’s obviously far bigger than the old ones.

Some issues though (based on what the techies told me):

  1. It scrubs the disk by relying on NBU deleting the old images; that’s how it knows what to get rid of. If your retentions are long then you will have performance problems. They suggested just dumping it all to tape and starting afresh once in a while. Which just confirms my suspicions about how the stuff truly works (see the sketch right after this list).
  2. Each “controller” is really a separate box. The 16-controller limit does not mean it’s one larger appliance; it’s the limit of the management software.
  3. Ergo, each controller can be a separate VTL or a separate NFS mount. You cannot aggregate all your controllers into one large VTL. This sucks, since if you need to do backups at 1GB/s or so you’ll need at least 5-6 boxes, and you will have to define a separate library and drives per box. If you do NFS, you need to define 1-2 shares per box. This is a management nightmare. Make it all a single library! Copan has the same issue; I don’t know how they could fix it, though, given their architecture.
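
For what it’s worth, here’s a rough sketch of how I understand that kind of image-driven cleanup to work in general (my guess at the mechanism, not Data Domain’s actual implementation): each backup image references chunks, expiring an image drops those references, and a later scrub reclaims whatever nothing references any more.

```python
# Rough sketch of reference-counted cleanup driven by image expiry
# (my guess at the general mechanism, not Data Domain's implementation).
from collections import Counter

refcount = Counter()   # chunk id -> number of backup images still using it
images = {}            # image name -> list of chunk ids

def store_image(name, chunk_ids):
    images[name] = list(chunk_ids)
    refcount.update(chunk_ids)

def expire_image(name):
    # Called when the backup app (NBU, say) deletes an old image.
    for cid in images.pop(name):
        refcount[cid] -= 1

def scrub():
    # The periodic "scrub": reclaim chunks that no surviving image references.
    dead = [cid for cid, n in refcount.items() if n == 0]
    for cid in dead:
        del refcount[cid]   # a real box would also free the disk space here
    return dead

store_image("mon_full", ["a", "b", "c"])
store_image("tue_full", ["a", "b", "d"])   # "a" and "b" are shared, stored once
expire_image("mon_full")
print(scrub())                             # only ["c"] can be reclaimed
```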

So, it looks to me like it may be a fit for some people, though I have no idea about the price points. If you want performance then you’ll need a ton of the boxes, and you’ll need to spend time configuring them. If 10 maxed-out boxes cost the same as (or, worse, more than) a big EMC DL4400 (which can do 2.2GB/s) then it’s not an easy sell. Especially since EMC will be adding dedupe to their VTL – plus, you won’t have to define a bunch of separate libraries. Will EMC’s dedupe be similar? No idea, but if it doesn’t impact performance then it’s pretty compelling.

Thoughts? You know the drill.

D

Cisco WAAS benchmarks, and WAN optimizers in general

Lately I’ve been dealing with WAN accelerators a lot, with the emphasis on Cisco’s WAAS (some other, smaller players are Riverbed, Juniper, Bluecoat, Tacit/Packeteer and Silverpeak). The premise is simple and compelling: instead of having all those servers at your edge locations, move your users’ data to the core and make accessing it feel almost as fast as having it locally, by deploying appliances that act as proxies. At the same time, you will actually decrease WAN utilization, enabling you to use cheaper pipes, or at least avoid an upgrade you were otherwise planning.

There are other significant benefits (massive MAPI acceleration, HTTP, FTP, and indeed any TCP-based application will be optimized). Many Microsoft protocols are especially chatty, and the WAN accelerators pretty much remove the chattiness, optimize the TCP connection (automatically resizing send/receive windows based on latency, for instance), LZ-compress the data, and, to top it all off, will not transfer data blocks that have already been transferred.

At this point I need to point out that there is a lot of similarity with deduplication technologies – for example, Cisco’s DRE (Data Redundancy Elimination) is, at heart, a dedup algorithm not unlike Avamar’s or Data Domain’s. So, if a Powerpoint file has gone through the DRE cache already, and someone modifies the file and sends it over the WAN again, only the modified parts will really go through. It really works and it’s really fast (and I’m about the most jaded technophile you’re likely to meet).

The reason I’m not opposed to this use of dedup (see previous posts) is that the datasets are kept at a reasonable size. For instance, at the edge you’re typically talking about under 200GB of cache, not several TB. Doing the hash calculations is not as time-consuming with a smaller dataset and, indeed, it’s set up so that the hashes are kept in-memory. You see, the whole point of this appliance is to reduce latency, not increase it with unnecessary calculations. Compare this to the multi-TB deals of the “proper” dedup solutions used for backups…
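
Here’s a deliberately simplified sketch of the “don’t resend what the far side already has” idea (fixed-size chunks and SHA-1 chosen purely for illustration; the real DRE is considerably more sophisticated):

```python
# Simplified "don't resend chunks the peer already holds" sketch
# (fixed-size chunks and SHA-1 purely for illustration; real DRE is fancier).
import hashlib, os

CHUNK = 8192
peer_cache = set()   # signatures of chunks the remote appliance is known to hold

def send_over_wan(data):
    wire_bytes = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        sig = hashlib.sha1(chunk).digest()
        if sig in peer_cache:
            wire_bytes += len(sig)     # only the 20-byte signature crosses the WAN
        else:
            wire_bytes += len(chunk)   # first sighting: the data itself has to go
            peer_cache.add(sig)
    return wire_bytes

deck_v1 = os.urandom(500_000)              # stand-in for the PowerPoint file
deck_v2 = deck_v1 + b"one new slide"       # someone edits it and sends it again
print(send_over_wan(deck_v1))              # first pass: essentially the whole file
print(send_over_wan(deck_v2))              # second pass: mostly signatures, a tiny transfer
```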

Indeed, why the hell would you need dedup-based backup solutions if you deploy a WAN accelerator? Chances are there won’t be anything at the edge sites to back up, so the whole argument behind dedup-based backups for remote sites sort of evaporates. Dedup now only makes sense in VTLs, just so you can store a bit more.

On Dedup VTLs: Refreshingly, Quantum doesn’t quote crazy compression ratios – I’ve seen figures of about 9:1 as an average, which is still pretty good (and totally dependent on what kind of data you have). I just cringe when I see the 100:1, 1000:1 or whatever insanity Data Domain typically states. I’m still worried about the effect on restore times, but I digress. See previous posts.

Anyway, back to WAN accelerators. So how do these boxes work? All fairly similarly. Cisco’s, for instance, does 3 main kinds of optimization: TFO, DRE and LZ. TFO stands for TCP Flow Optimization; it takes care of send/receive window scaling, enables large initial windows, and enables SACK and BIC TCP (the latter two help with packet loss).

DRE is the dedup part of the equation, as mentioned before.

LZ is simply LZ compression of data, in addition to everything else mentioned above.
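
To ground what the TFO part means in practice, here’s a small sketch of the knobs involved from an application’s point of view (socket buffer sizes via setsockopt; SACK and the congestion-control algorithm live in the kernel and are only noted in the comments):

```python
# The kind of knobs TFO-style optimizations turn, seen from an application.
# SACK and the congestion algorithm are kernel-wide settings (on Linux, the
# sysctls net.ipv4.tcp_sack and net.ipv4.tcp_congestion_control); an inline
# appliance tunes all of this transparently so the endpoints don't have to.
import socket

def tuned_socket(rcv_bytes=4 * 1024 * 1024, snd_bytes=4 * 1024 * 1024):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Bigger buffers let the stack advertise a large (scaled) receive window,
    # which is what keeps a fat, high-latency pipe full.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcv_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, snd_bytes)
    return s

s = tuned_socket()
print("receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```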

Other vendors may call their features something else, but in the end there aren’t too many ways to do this. It all boils down to:

  1. Who has the best implementation speed-wise
  2. Who is the best administration-wise
  3. Who is the most stable in an enterprise setting
  4. What company has the highest chance of staying alive (like it or not, Cisco destroys the other players here)
  5. What company is committed to the product the most
  6. As a corollary to #5, what company does the most R&D for the product

Since Cisco is, by far, the largest company of any that provide WAN accelerators (indeed, they probably spend more on light bulbs per year than the net worth of the other companies listed above), in my opinion they’re the obvious force to be reckoned with, not someone like Riverbed (as cool as Riverbed is, they’re too small, and will either fizzle out or get bought – though Cisco didn’t buy them, which is food for thought: if Riverbed is so great, why wouldn’t Cisco simply acquire them?).

Case in point: When Cisco bought Actona (which is the progenitor of the current WAAS product) they only really had the Windows file-caching part shipping (WAFS). It was great for CIFS but not much else. Back then, they were actually lagging compared to the other players when it came to complete application acceleration. Fast forward a mere few months: They now accelerate anything going over TCP, their WAFS portion is still there but it’s even better and more transparent, the product works with WCCP and inline cards (making deployment at the low-end easy) and is now significantly faster than the competitors. Helps to have deep pockets.

For an enterprise, here are the main benefits of going with Cisco the way I see them:

  1. Your switches and routers are probably already Cisco so you have a relationship.
  2. WAAS interfaces seamlessly with the other Cisco gear.
  3. The best way to interface with a WAN accelerator is WCCP, which was actually developed by Cisco.
  4. The Cisco appliances are tunnel-less and totally transparent (I met someone who had Riverbed everywhere; a software glitch rendered ALL WAN traffic inoperable instead of letting it pass through unaccelerated, which is the way it’s supposed to work. He’s now looking at Cisco).
  5. WAAS appliances don’t mess with QoS you may have already set.
  6. The WAAS boxes are actually faster in almost anything compared to the competition.

And now for the inevitable benchmarks:

Depending on the latency, you can get more or less of a speed-up. For a comprehensive test see this: http://www.cisco.com/application/pdf/en/us/guest/products/ps6870/c1031/cdccont_0900aecd8054f827.pdf

Another, longer rev: http://www.cisco.com/web/CA/channels/pdf/Miercom-on-Cisco-WAAS-Riverbed-Juniper-competitive.pdf

Yes, this is on Cisco’s website, but it’s kinda hard to find any performance statistics on the other players’ sites that include Cisco’s WAAS (any references to WAFS are for an obsolete product). At least this one compares truly recent codebases of Cisco, Riverbed and Juniper. For me, the most telling numbers were the ones showing how much traffic the server at the datacenter actually sees. Cisco was almost 100x better than the competition – where the other products passed several Mbits through to the server, Cisco only needed to pass 50Kbits or so.

It is kinda weird that the other vendors don’t have any public-facing benchmarks like this, don’t you think?

However, since I tend to not completely believe vendor-sponsored benchmark numbers as much as I may like the vendor in question, I ran my own.

I used NISTnet (a free WAN simulator, http://www-x.antd.nist.gov/nistnet/) to emulate latency and throughput indicative of standard telco links (e.g. a T1). The fact that the simulator is freely available and can be used by anyone is compelling, since it allows testing without disrupting production networks (for the record, I also tested on a few production networks with similar results, though the latency was lower than with the simulator).

The first test scenario is that of the typical T1 connection (approx. 1.5Mbit/s, or about 170KB/s at best) with 40ms of round-trip delay. I tested with zero packet loss, which is not totally realistic, but it means the unaccelerated baseline is as good as it gets, so the improvements shown here are, if anything, conservative: real links usually have a little packet loss, which makes native transfer speeds even worse. This is one of the most common connections to remote sites one will encounter in production environments.

The second scenario is that of a bigger pipe (3Mbit) but much higher latency (300ms), emulating a long-distance link such as a remote site in Asia over which developers do their work. I injected a 0.2% packet loss (a small number, given the distance).

It is important to note that, in the interests of simplicity and expediency, these tests are not comprehensive. A comprehensive WAAS test consists of:

  • Performance without WAAS but with latency
  • Performance with WAAS but data not already in cache (cold cache hits). Such a test shows the real-time efficiency of the TFO, DRE and LZ algorithms.
  • Performance with the data already in the cache (hot cache hits).
  • Performance with pre-positioning of fileserver data. This would be the fastest a WAAS solution would perform, almost like a local fileserver.
  • Performance without WAAS and without latency (local server). This would be the absolute fastest performance in general.

The one cold cache test I performed involved downloading a large ISO file (400MB) using HTTP over the simulated T1 link. The performance ranged from 1.5-1.8MB/s (a full 10 times faster than without WAAS) for a cold cache hit. After the file was transferred (and was therefore in cache) the performance went to 2.5MB/s. The amazing performance might have been due to a highly compressible ISO image but, nevertheless, is quite impressive. The ISO was a full Windows 2000 install CD with SP4 slipstreamed – a realistic test with realistic data, since one might conceivably want to distribute such CD images over a WAN. Frankly this went through so quickly that I keep thinking I did something wrong.

T1 results
ftp without WAAS:
ftp: 3367936 bytes received in 19.53Seconds 168.40Kbytes/sec

Very normal T1 behavior with the simulator (for a good-quality T1).

ftp with WAAS:
ftp: 3367936 bytes received in 1.34Seconds 2505.90Kbytes/sec (15x improvement ).

Sending data was even faster:
ftp: 3367936 bytes sent in 0.36Seconds 9381.44Kbytes/sec.

[Chart: ftp throughput over the simulated T1 link, with and without WAAS]

High Latency/High Bandwidth results

The high latency (300ms) link, even though it had double the theoretical throughput of the T1 link, suffers significantly:

ftp without WAAS
ftp: 3367936 bytes received in 125.73Seconds 26.79Kbytes/sec.

I was surprised at how much the high latency hurt the ftp transfers. I ran the test several times with similar results.
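
In hindsight, the arithmetic explains at least part of it: plain TCP can’t move more than one window’s worth of data per round trip, so (assuming my test hosts used a stock 64KB window, which is a guess) the ceiling collapses as RTT grows, and the 0.2% loss plus slow start drag the real number down further.

```python
# Back-of-the-envelope TCP throughput ceilings. Assumes a stock 64KB window on
# the test hosts (a guess); loss and slow start push real transfers well below this.
WINDOW = 64 * 1024   # bytes

def ceiling_kbytes_per_sec(rtt_seconds):
    return WINDOW / rtt_seconds / 1024   # at most one window per round trip

print(f"40ms RTT:  ~{ceiling_kbytes_per_sec(0.040):.0f} KB/s ceiling")  # ~1600 KB/s: the T1 itself is the bottleneck
print(f"300ms RTT: ~{ceiling_kbytes_per_sec(0.300):.0f} KB/s ceiling")  # ~213 KB/s before any loss at all
```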

ftp with WAAS
ftp: 3367936 bytes received in 2.16Seconds 1562.12Kbytes/sec. (58x improvement ).

[Chart: ftp throughput over the high-latency link, with and without WAAS]

I have more results with office-type apps but they will make for too big of a blog entry, not that this isn’t big. In any case, the thing works as advertised. I need to build a test Exchange server so I can see how much stuff like attachments are accelerated. Watch this space. Oh, and there’s another set of results at http://www.gotitsolutions.org/2007/05/18/cisco-waas-performance-benchmarks.html

Comments? Complaints? You know what to do.

D

On deduplication and Data Domain appliances

One subject I keep hearing about is deduplication. The idea being that you save a ton of space since a lot of your computers have identical data.
One way to do it is with an appliance-based solution such as Data Domain. Effectively, they put a little server and a cheap-but-not-cheerful, non-expandable 6TB RAID together, then charge a lot for it, claiming it can hold 90TB or whatever. Use many of them to scale.

The technology chops up incoming files into pieces. Then, the server calculates a practically unique numeric ID for each piece using a hash algorithm.

The ID is then associated with the block and both are stored.

If the ID of another block matches one already stored, the new block is NOT stored, but its ID is, as is its association with the rest of the blocks in the file (so that deleting a file won’t adversely affect blocks it has in common with other files).

This is what allows dedup technologies to store a lot of data.
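
A bare-bones sketch of that store-each-unique-block-once idea (fixed 4K pieces and SHA-256 picked purely for illustration; I’m not claiming this is Data Domain’s actual on-disk format):

```python
# Bare-bones dedup store sketch (fixed 4K pieces, SHA-256 IDs; illustration only,
# not Data Domain's actual format).
import hashlib, os

CHUNK = 4096
blocks = {}   # ID (hash digest) -> block data, stored exactly once

def store_file(data):
    # Chop the file up, compute an ID per piece, keep only pieces not seen before.
    recipe = []                               # the ordered list of IDs for this file
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        block_id = hashlib.sha256(piece).digest()
        blocks.setdefault(block_id, piece)    # a duplicate ID costs no extra space
        recipe.append(block_id)
    return recipe

pst_v1 = os.urandom(1 << 20)                  # stand-in for 1MB of a PST
pst_v2 = pst_v1[:-CHUNK] + os.urandom(CHUNK)  # same file next night, one block changed
store_file(pst_v1)
store_file(pst_v2)
print(len(blocks) * CHUNK, "bytes stored for", 2 * len(pst_v1), "bytes backed up")
```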

Now, here’s why how much you can store depends on your data:

If you’re backing up many different unique files (like images), there will be almost no similarity, so everything will be backed up.
If you’re backing up 1000 identical Windows servers (including the Windows directory) then there WILL be a lot of similarity, and great efficiencies.

Now the drawbacks (and why I never bought it):

The thing relies on a weak server and a small database. As you’re backing up more and more, there will be millions (maybe billions) of IDs in the database (remember, a single file may have multiple IDs).

Imagine you have 2 billion entries.

Imagine you’re trying to back up someone’s 1GB PST, or another large file that stays mostly the same over time (the ideal dedup scenario). The file gets chopped up into, say, 100 blocks.

Each block has its ID calculated (CPU-intensive).

Then, EACH ID has to be compared with the ENTIRE database to determine whether there’s a match or not.

This can take a while, depending on what search/sort/store algorithms they use.

I asked Data Domain about this and all they kept telling me was “try it, we can’t predict your performance”. I asked them whether they had even tested the box to see what the limits were, and they hadn’t. Hmmm.

I did find out that, at best, the thing works at 50MB/s (slower than an LTO3 tape drive), unless you use tons of them.

Now, imagine you’re trying to RECOVER your 1GB PST.

Say you try to recover from a “full” backup on the Data Domain, but that file has been living in it for a year, with new blocks being added to it all along.

When you request the file, the Data Domain box has to synthesize it (remember, even the “full” doesn’t include the whole file). It will read the IDs needed to recreate the file and put the blocks together so it can present the final file as it should have looked.

This is CPU- and disk-intensive. Takes a while.
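
Continuing the toy model from the store sketch earlier in this post, the restore side looks something like this (again, an illustration of the mechanism, not the vendor’s code): the “full” is just an ordered recipe of IDs, and every ID has to be chased before the file can be handed back.

```python
# Restore side of the toy dedup model above: a "full" is only a recipe of IDs,
# so every block must be looked up and stitched back together on the fly.
import hashlib

pieces = (b"old part ", b"of the PST ", b"plus newer bits")
blocks = {hashlib.sha256(p).digest(): p for p in pieces}   # the dedup store
recipe = [hashlib.sha256(p).digest() for p in pieces]      # the backup "image"

def restore_file(recipe, blocks):
    # On real hardware each lookup can mean a random disk read, which is exactly
    # the CPU- and disk-intensive synthesis step described above.
    return b"".join(blocks[block_id] for block_id in recipe)

print(restore_file(recipe, blocks))   # the file as it should have looked, eventually
```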

The whole point of doing backups to disk is to back up and restore faster and more reliably. If you’re slowing things down in order to compress your disk as much as possible, you’re doing yourself a disservice.

Don’t get me wrong, dedup tech has its place, but I just don’t like the appliance model, for performance and scalability reasons.
EMC just purchased Avamar, a dedup company that does the exact same thing but lets you install the software on whatever you want.

There are also Asigra and EVault, both great backup/dedup products that can be installed on ANY server and work with ANY disk, not just the el cheapo quasi-JBOD Data Domain sells.

So, you can leverage your investment in disk and load the software on a beefy box that will actually work properly.

Another tack would be to use virtual tape. It doesn’t do dedup yet (but it will: EMC bought Avamar, and ADIC, now Quantum, also acquired another dedup company and will put the stuff in their VTL, so you can get the best of both worlds), but it does do compression, just like real tape.

Plus, even the cheapest EMC virtual tape box works at over 300MB/s.

I sort of detest the “drop it at the customer site” model Data Domain (and a bunch of the smaller storage vendors) use. They expect you to put the box in and, if it works OK, figure it’s easier for you to keep it than to send it back.

Most people will keep the first thing they try (unless it fails horrifically), since they don’t want to go through the trouble of testing 5 different products (unless we’re talking about huge companies that have dedicated testing staff).

Let me know what you think…

D