Recently, many vendors announced the availability of large SSDs. It’s not extremely exciting – it’s just a larger storage medium. Sure, it’s really advanced 3D NAND, it’s fast and ultra-reliable, and will allow some nicely dense configurations at a reduced $/GB. Another day in Enterprise Storage Land.
But, ultimately, that’s how drives roll – they get bigger. And in the case of SSDs, the roadmaps seem extremely aggressive regarding capacity, with 100TB per device coming.
Then I realized that several vendors don’t have large SSD capacities available.
But why? Why ignore such a seemingly easy and hugely cost-effective way to increase density?
In this post I will attempt to explain why certain architectural decisions may lead to inflexible design constructs with long-term scalability ramifications.
Continue reading “Architecture has long term scalability implications for All Flash Appliances”
The idea for this article came from watching various people attempt product testing. Though I had storage in mind when writing it, the ideas apply to most industries.
Three different kinds of testing
There are really three different kinds of testing.
The first kind is incomplete, improper testing that is useless in almost any situation. It is typically done by people with little training on the subject, and it almost always produces misleading results – which is arguably dangerous, especially if those results are used to make purchasing decisions.
The second is what’s affectionately and romantically called “Real World Testing”. It is typically done by people who try to simulate the kind of workload they believe they encounter in their environment, or who use part of their environment to do the testing. It is much more accurate than the first kind, if done right. Usually the workload is decided arbitrarily 🙂
The third and last kind is what I term “Proper Testing”. This is done by professionals who usually do this type of testing for a living and who understand how complex it is to test properly across a broad range of conditions. It’s really hard to do, but it pays amazing dividends if done thoroughly.
Let’s go over the three kinds in more detail, with some examples.
Continue reading “Proper Testing vs Real World Testing”
I really resisted using the “flash in the pan” phrase in the title… first, because the term is overused, and second, because I don’t believe solid state is of limited value. On the contrary.
However, I am noticing an interesting trend among some newcomers in the array business, desperate to find a flash niche to compete in:
Writing their storage OS around very specific NAND flash technologies. Almost as bad as writing an entire storage OS to support a single hypervisor technology, but that’s a story for another day.
Continue reading “Are some flash storage vendors optimizing too heavily for short-lived NAND flash?”
(Edited: my bad, it was 2TB/s, up from 1.3TB/s; the solution has been growing and getting upgraded. Also, the post talks about the E5400; the newer E5600 is much faster.)
What do you do when you need so much I/O performance that no one single storage system can deliver it, no matter how large?
Continue reading “NetApp delivers 2TB/s performance to giant supercomputer for big data”
Before all the variable-block aficionados go up in arms, I freely admit that variable-block deduplication may, overall, squeeze more dedupe out of your data.
I won’t go into a laborious explanation of variable vs fixed, but, in a nutshell, fixed-block deduplication means that data is split into equal-sized chunks, each chunk is given a signature, that signature is compared against a database, and chunks already in the database are not stored again.
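To make the fixed-block idea concrete, here is a minimal sketch in Python. The 4KB chunk size, the SHA-256 signature, and the in-memory dictionary standing in for the chunk database are all assumptions for illustration, not how any particular array implements it:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed chunk size in bytes (illustrative only; real systems vary)

def dedupe_fixed(data: bytes, store: dict) -> list:
    """Split data into equal-sized chunks, fingerprint each one, and store
    only chunks whose fingerprint is not already in the chunk database.
    Returns the list of fingerprints needed to rebuild the data."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        signature = hashlib.sha256(chunk).hexdigest()  # the chunk's "signature"
        if signature not in store:       # compare against the database
            store[signature] = chunk     # new chunk: keep it
        recipe.append(signature)         # common chunks are only referenced
    return recipe

def rebuild(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from the stored chunks."""
    return b"".join(store[sig] for sig in recipe)

# Usage: two identical blocks dedupe down to a single stored chunk.
store = {}
data = b"A" * CHUNK_SIZE * 2
recipe = dedupe_fixed(data, store)
assert len(store) == 1 and rebuild(recipe, store) == data
```

The sketch also hints at the fixed-vs-variable trade-off: shift the data by a single byte and every subsequent fixed chunk changes, so its signature no longer matches the database; variable-block schemes pick chunk boundaries based on content to cope with exactly that.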
Continue reading “More FUD busting: Deduplication – is variable-block better than fixed-block, and should you care?”