Storage Forces

On June 13, I was a guest on The Hot Aisle Podcast for an episode entitled “Fibre Channel is Dead, Right?” with Brian Carpenter and Brent Piatti. It’s by far one of the most in-depth technical conversations I’ve ever had on a podcast, and I thoroughly enjoyed it. Brian had read an earlier blog I wrote about The Grand Unification Storage Theory, and wanted to have a discussion about it.

While the title of the podcast hints at Fibre Channel, we spent a great deal of time talking about NVM Express and its implications for the storage industry. Specifically, we covered a lot of ground about the nature of storage and some of the competing forces that affect the decision-making process and prevent a one-size-fits-all solution.

During the podcast, starting at around minute 13, I talked about how there are major pressures in the storage world that pull people in opposite directions. It’s a bit difficult to describe a visual verbally, so I thought I would augment the podcast with some pretty pictures here.

The Storage Triangle

I’ll be the first to say that none of this is particularly new or earth-shattering, but if you are familiar with the storage industry (data centers, service providers, hyper-converged, etc.) you will notice a pretty common theme – everyone claims to have the solution you need.

However, it’s not quite that simple.

Storage Forces

Everyone who works in storage (or consumes storage, or analyzes storage, etc.) typically understands – even intuitively – that it is a very big, complex world in which they live. People outside of storage, however, see it as one monolithic block (and yes, this includes many people inside my own company).

That’s because when we look at storage from the outside, we only see a black-box system that has to meet our needs for whatever we’re running – whether that be our own personal NAS or Google’s exabyte-level storage – and as a result we look at it through a particular lens.

For example, whenever you talk to someone who says that “Big Data” is the greatest problem in storage that needs to be solved today, they’re really living in the part of that triangle on the lower right. Whenever you talk to someone who is into “Hyper-Converged,” they’re pulling at storage from the top. When you’re looking at Micron/Intel’s 3D XPoint and NVMe-based PCIe technologies, you’re on the lower left.

Each of these types of storage solutions is a big deal, and the questions that arise – as well as the problems that need to be solved – are often diametrically opposed to one another.

“Doing” Storage

When a person says that they “do storage,” it can mean a variety of things. Many people say that they “do storage” the way that I tell non-technical people that I “do computers.” It’s vague enough to give them an idea of what I do without them needing to understand data center architectures. Plus, it’s a good starting point to see how far their eyes roll back in their heads.


Vendors with storage products that “do storage,” on the other hand, take a similar shortcut. It matters a great deal who your audience is. If you talk to a server-focused person, “doing storage” could mean direct-attached storage, high-performance devices that reside inside (or very, very close to) the server, or management constructs such as hyperconverged systems that are designed to be implemented and managed by server admins (as opposed to storage admins).

Talk to those storage administrators, though, and “doing storage” often means the manageability of large, multi-million-dollar deployments of storage arrays that serve thousands of servers, with redundant capabilities up the wazoo.

Let’s not forget the storage network admins, for whom “doing storage” means getting data from a server to one of those storage devices reliably and quickly. Whether you have a dedicated storage network or a shared one makes a big difference.

Thing is, there is a reason why large storage companies (such as EMC and NetApp) have many different kinds of products. The trade-offs between performance, scalability, and manageability (not to mention price, which I’m omitting here for the sake of simplicity) are inevitable, and those companies have known for years that there is no “one-size” approach.

The same goes for storage networks. Some networks are faster than others. Some are easier to manage, some are more flexible. Some are purpose-built, others are off-the-shelf. The trick is to match the right storage network with the storage solution that answers the particular forces affecting your needs.


Two Out of Three

That matching isn’t trivial. Even assuming that you’ve solved the matching issue, there is still the problem of the forces themselves.

The best you can hope for is to lasso two out of the three, and even that is very, very difficult. Most storage solutions that want to do their job very well focus on getting as close to one corner of that triangle as they can.

Now, you may be wondering where other types of storage architectures fall – the general-purpose, “enterprise-level” storage solutions. In reality, my little triangle is not axiomatic – it’s simply a lens through which to think about storage forces. However, most enterprise storage solutions need to balance these three forces according to whatever their use cases dictate.

Ultimately, that was the purpose of my original post. When an organization has truly unique needs, it is simply impossible for any one solution to solve any and all problems, because you will always need more. More capacity, more performance, more manageability, etc.

Beware The Storage Rodeo

In the past couple of years the demographic for storage has been changing, and that’s not necessarily a bad thing. However, the new storage architects come from a very different background. This new blood comes from the world of development and operations (DevOps), server administration, and programming. They’ve benefited from Moore’s Law to the degree that almost any solution is “good enough,” and any solution that they’re familiar with will satisfy the bulk of their needs adequately.

Vendors, on the other hand, don’t have that luxury. I bristle whenever someone comes to me and says “you don’t need storage,” because their patented so-and-so-technology solves all the world’s problems. In reality, they’ve managed to identify one of the forces pulling on the triangle and are ignoring the other corners. How conveeeeeeenient.

This creates a nasty wake-up call for consumers of storage when the natural forces that aren’t covered by “so-and-so-technology” inevitably come into play. Large-scale storage becomes unwieldy, so new management structures need to be put in place. Every time you add management structures, your performance takes a hit.

Or, look at it from the other direction – high-performance, in-server storage makes workloads go much, much faster. Faster completion times mean more work per second, and a huge cost savings, so you want more of it. But scaling high-performance storage without losing that performance is very, very hard. Doing so without making it fragile is even harder.
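To make that intuition concrete, here’s a quick back-of-the-envelope sketch in Python. All of the latency figures and the job size are hypothetical, and it deliberately ignores queue depth, parallelism, and network overhead – it just shows why lower per-operation latency translates into more work per second and faster job completion.

# Back-of-the-envelope sketch: how per-operation storage latency translates
# into work per second and job completion time. All numbers are hypothetical,
# and this ignores queuing, parallelism, and network effects.

def ops_per_second(latency_us: float) -> float:
    """Operations per second for a single outstanding I/O at a given latency."""
    return 1_000_000 / latency_us

def job_time_seconds(total_ops: int, latency_us: float) -> float:
    """Time to complete a job of total_ops serial I/O operations."""
    return total_ops / ops_per_second(latency_us)

if __name__ == "__main__":
    total_ops = 10_000_000  # hypothetical job: 10 million I/Os
    for name, latency_us in [("networked array", 500.0),
                             ("local NVMe device", 100.0),
                             ("next-gen persistent memory", 10.0)]:
        print(f"{name:28s} {latency_us:7.1f} us/op  "
              f"{ops_per_second(latency_us):>10,.0f} ops/s  "
              f"job takes {job_time_seconds(total_ops, latency_us):8.1f} s")

Cut the per-operation latency by a factor of ten and the same job finishes in a tenth of the time – which is exactly why that lower-left corner of the triangle is so seductive, and why people keep wanting more of it.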

Now think about this: that’s just the storage. What about the storage network? You know, that piece of technology that hooks everything together?

What happens when you’ve matched up your storage and network perfectly – only to find that the forces are pulling you in directions where your storage solution and your network no longer align?

What do you do then?

This is the part that catches a lot of people out. They may find a solution that works right now, but changes over time affect not only their storage needs, but their ability to access that storage – a.k.a. the storage network. It’s not always about growth, either – sometimes you can get blocked by management obstacles or performance obstacles too.

I understand – probably as well as anybody can – the nature of storage protocol religious arguments. It’s very easy to get wrapped around the axle on the benefits of one protocol over another.

However, dismissing the discussion with “well, one is just as good as another” is equally misguided. If anyone tells you that the storage network (or its protocol) doesn’t matter, they either don’t understand what they’re talking about or they’re selling you something.

What The Future Holds

Every year we have improvements that address each of these different storage forces. This is true for storage devices just as it’s true for storage networks. As these technologies improve, they will cover more and more use cases.

However, there are also use cases we’ve never been able to handle. Ultra-high-performance persistent storage devices and networks (i.e., sub-microsecond latency) have so far been only a pipe dream. Conversely, globally addressable storage (e.g., “cloud” storage) has only in the past few years become feasible with enough reliability and performance to be usable. These trends will also continue.

The forces on storage are constant and ongoing, and so we’ll always have a need for the right tool for the job, which means that there will likely never be “one storage (protocol) to rule them all.”

P.S.

For more content like this…

I’ve recently created a Patreon account. If you want to see more posts like this – about Synology or anything else to do with storage – please consider sponsoring future content.

Or, if you simply want to show your appreciation, you can make a one-off donation. All moneys go towards creating content. 🙂
