Ate at Murphy’s Style Grill, in Red Bank, NJ

Will be demonstrating Cisco’s WAAS tomorrow in NYC, so today we spent some time going through a testing protocol so we can show people different things.

After we finished we had dinner at Murphy’s in NJ. Strange place. It’s not a classy steakhouse or anything – nor does it have aspirations to be one.

The menu is, to quote Kipling, as immutable as the hills. Apparently any substitutions or deviations are swiftly and sternly stamped out, as though they signify an impending revolution that threatens all that we hold holy. Dressing on the side? Heresy! Burn!

I got the 24oz Delmonico. The waiter urged me not to ask anything about it, lest they bring someone out to take me to the back. He also suggested generous amounts of A1.

At least it was inexpensive (about $17) and properly cooked. If you’re looking for flavor and marbling, look elsewhere. Much of it looked like solid marble, though. Had to surgically remove a good amount of gristle.

Better than the steak at Bowling Green, I have to admit.


Ate at Flames in Manhattan

I was helping a client in the Wall Street District today with some rather obscure CIFS performance issues (Opportunistic Locks anyone? Berserk BDCs causing issues? Multi-user Access DBs over WAN?)

Had to stay overnight (unplanned) so after putting in some solid hours I decided to get some steak, and NYC is the place to get decent steak.

Did some research and found out that Flames was walking distance from my hotel, so I went.

Got a T-Bone this time (usually go for strip or ribeye but the waiter insisted, even though they had far more expensive cuts on offer). Some creamed spinach and a small salad and I was set.

Flames is one of those fancy places where they cut your steak for you. At least they don’t feed you or, indeed, help you masticate.

Not that they would need to – the dry-aged steak had fantastic flavor and was reasonably tender (not the most tender but good). I wish it had been a tad less cooked but it was still great, and I devoured it in atavistic glory, almost beating the man-pelt on my chest in ecstasy. It’s been a while since I’ve had proper dry-aged beef.

The creamed spinach wasn’t too creamy or salty. The salad was just OK; I typically use salads for intestinal lubrication anyway, and it served the purpose.

I did overhear some patrons asking for well-done steaks; this is one of those places where they won’t try to talk you out of it, sadly. I think steakhouses should make you sign a waiver before committing such a culinary atrocity.

I also overheard a waiter trying to sell some $100 “Kobe” steak to some ladies, telling them how they massage the cows 4 times a day. I discreetly shook my head at them and they got the message.

Anyway – long story short, strongly recommended, and don’t dare order anything beyond medium-rare.

Now back to washing and drying my Superman underoos – I had no change of clothes and I’m writing this naked. It kinda is an appropriate image for this review though…


IBRIX at EMC World

I’ve known about IBRIX for a while, but it was refreshing to talk to a decent techie that knew the product. They have improved it a lot over the past year.

For the uninitiated, IBRIX can be deployed as:

  1. A network-based filesystem using the IBRIX client and protocol
  2. A filesystem accessible over NFS or CIFS
  3. A SAN-based parallel filesystem

The product’s claim to fame is its scalability and performance (realized by adding extra nodes “hot”). Their most famous client is probably Pixar, who replaced a ton of NetApp boxes with an IBRIX cluster and realized huge performance benefits and vastly reduced costs. I’ve always liked cool filesystem technologies, and this definitely falls under the realm of “cool”. Some highlights, based on notes I took on my Blackberry during the session and questions I asked:

  • No limits on filesystem size (they have deployed single namespace filesystems several PB in size).
  • 300MB/s read, 200MB/s write per node on a small box. Bigger boxes can do 1.2GB/s per node; of course, your storage needs to be able to keep up.
  • No limit on the number of nodes.
  • Automatic rebalancing of data over time. When you add new disk you rebalance to keep things humming.
  • Dedicated IBRIX backup node; works with 3rd-party backup software, and you can run many backup servers for backup speed.
  • Now has (global) snapshots; the lack of them was a failing of the product before.
  • No real limit on the number of files per FS.
  • Biggest file they have tested in production is an 8TB file; no software limit.
  • Nodes use FC to access storage, clients use Ethernet.
  • Client on Windows or Linux, otherwise general NFS and CIFS. Client is fastest.
  • Your production servers can themselves be the IBRIX nodes, but it’s very compute-intensive. They recommend using the client (IP-based, bonded) instead, or getting an 8-core box.
  • There is no single lock manager – this is the coolest thing. There is global metadata and global locking, all nodes participate equally.
  • How are node failures handled? All nodes are interchangeable and see the same storage; if you lose a node, its storage is allocated to the remaining servers. You can lose all but one server.
  • Back-end storage size per node? Unlimited.
  • Multipathing per node? PowerPath works. Can do bonded GigE, up to 8 ports per node.
  • How are files allocated? The file’s inode contains the info on which node it goes to. Round-robin allocation, or preferred servers per file type; also, if a server is over 50% full, it’s skipped.
  • All volumes accessible by all nodes.
  • Can stripe huge files across many nodes.
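The round-robin-with-skip allocation policy from the notes above can be sketched in a few lines of Python. This is purely illustrative; every name here is hypothetical and IBRIX’s actual implementation is its own:

```python
# Hypothetical sketch of the described allocation policy: pick servers
# round-robin, but skip any server that is already over 50% full.
from dataclasses import dataclass


@dataclass
class SegmentServer:
    name: str
    capacity_gb: float
    used_gb: float = 0.0

    @property
    def fill_ratio(self) -> float:
        return self.used_gb / self.capacity_gb


def allocate(servers, file_size_gb, start_index=0, skip_threshold=0.5):
    """Return the next server in round-robin order, skipping full ones."""
    n = len(servers)
    for offset in range(n):
        server = servers[(start_index + offset) % n]
        if server.fill_ratio <= skip_threshold:  # the "over 50% full" rule
            server.used_gb += file_size_gb
            return server
    raise RuntimeError("all servers are over the fill threshold")


servers = [
    SegmentServer("node1", capacity_gb=100, used_gb=60),  # over 50%, skipped
    SegmentServer("node2", capacity_gb=100, used_gb=10),
    SegmentServer("node3", capacity_gb=100, used_gb=20),
]
print(allocate(servers, file_size_gb=5).name)  # node1 is skipped, node2 wins
```

A “preferred servers per file type” policy would just reorder the candidate list before the loop; the skip rule stays the same.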

I’m stoked! I can think of so many uses for this product:

  1. Data mining
  2. Digital media
  3. Oil and gas
  4. Backups


Storage Virtualization – is there a point?

This has been bothering me for a while, and I think I’m not alone.

Hitachi has been making great progress with their virtualization gear, as has IBM, Falconstor before them, etc.

They claim you’ll be freed from the vendors’ shackles, achieve greater utilization of your arrays, simplify administration, cure cancer etc.

Well, here’s what I think:

  1. You will instead be shackled to the virtualization provider
  2. You won’t have a clue where your stuff is
  3. If you want to retire an array you could have problems (imagine creating a LUN composed of LUNs from 3 different arrays)
  4. You STILL have to use the management interfaces of the back-end arrays, since you still have to provision the storage. Instead of provisioning to hosts you provision to the virtualizer.


So, what have you gained, exactly?


EMC World: Replication Manager and Exchange 2007

Just attended a session. Seems like the new rev of RM supports 2007 fully. They also support RecoverPoint clones (or will, later this week).

For those not aware of it, EMC Replication Manager is essentially a front-end that manages local replicas of your salient Exchange data for backup and restore purposes.

Can be fiddly to set up but if you have EMC gear and Exchange, you really should look at it.