HPE X10000 Deep Dive – Differentiation For Unstructured Data

At HPE Discover Barcelona 2024, HPE released the Alletra Storage MP X10000, the latest in our new line of shared hardware platform storage offerings.

It’s an innovative new platform purpose-built for unstructured data, and one that was a long time in the making. This is HPE technology, not a partnership.

The initial target workloads for this solution are anything that requires fast S3 performance, including AI workloads, data lakes, cloud-native app development, and high-speed backup and restore.

It includes several innovations, such as RDMA for object storage, and is highly differentiated. In addition, it makes this kind of technology available at a much smaller starting capacity, rather than focusing only on the huge end of the scale.

As usual, my aim is not to regurgitate basic information but rather to explain the true technical differentiation and get people excited about the possibilities on offer here. 

In summary, the X10000’s benefits are:

  1. Disaggregation flexibility for separately expanding compute and/or capacity
  2. Ability to scale down and not need huge capacities to get good performance
  3. Balanced read/write performance and low latency for all workloads
  4. Flexible, fully container-based architecture that opens up tons of possibilities for running customer code inside the storage solution

Let’s get to it:

Continue reading “HPE X10000 Deep Dive – Differentiation For Unstructured Data”

Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking

Just a quick post to address something many people either get wrong or just live with due to convenience.

In summary: Please, let’s stop using average I/O sizes to characterize storage system performance. It’s wrong and doesn’t describe how the real world works. Using an average number is as bad as using the small-block, 100%-read numbers shown in vanity benchmarks. Neither is representative of real life.

Using a single I/O size for benchmarking became common practice both as a vanity metric and as a way to provide a level playing field for comparing multiple products.

But, ultimately, even though the goal of comparing different systems is desirable, using a single I/O size is fundamentally flawed.
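To make the flaw concrete, here is a minimal sketch in Python. The workload mix and I/O sizes below are illustrative assumptions, not measurements from any particular system: a hypothetical bimodal workload of mostly small random I/Os plus a stream of large I/Os.

```python
# Hedged sketch: why an "average block size" misrepresents a mixed
# workload. All numbers are hypothetical, for illustration only.

from statistics import fmean

# Assumed bimodal workload: 70% small 4 KiB I/Os, 30% large 256 KiB
# I/Os (sizes in KiB).
io_sizes_kib = [4] * 7000 + [256] * 3000

avg_kib = fmean(io_sizes_kib)  # 4 * 0.7 + 256 * 0.3 = 79.6 KiB
print(f"average I/O size: {avg_kib:.1f} KiB")

# Not a single I/O in the workload is anywhere near that average:
closest = min(io_sizes_kib, key=lambda s: abs(s - avg_kib))
print(f"closest real I/O size to the average: {closest} KiB")
```

Benchmarking with a single ~80 KiB I/O size would exercise neither the small-block IOPS path nor the large-block bandwidth path that this workload actually stresses, which is exactly why the average is misleading.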

Continue reading “Why it is Incorrect to use Average Block Size for Storage Performance Benchmarking”

InfoSight: How Nimble Customers Benefit from AI and Big Data Predictive Analytics

In past articles, I have covered how Nimble Storage lowers customer risk with technologies internal to the array.

Now, it’s time to write a more in-depth article about the immense customer benefits of InfoSight, Nimble’s Predictive Analytics platform.

Continue reading “InfoSight: How Nimble Customers Benefit from AI and Big Data Predictive Analytics”

Nimble Cloud Volumes Removing Risk From Cloud Storage

On February 27th, 2017, Nimble Storage announced Nimble Cloud Volumes (NCV), marking Nimble’s entry into the cloud space.

For those short on time: Nimble Cloud Volumes is block storage as a service, works with compute from AWS & Azure, and avoids the significant block storage drawbacks of those two providers. In essence, it is enterprise storage as a service, at a competitive price, while retaining all the cloud conveniences.

Continue reading “Nimble Cloud Volumes Removing Risk From Cloud Storage”