
OpenStack and Storage, a Response

No Fibre Channel allowed?

During the recent OpenStack Summit in Atlanta, GA, I wrote a post on the Cisco blog site called “Thoughts on #OpenStack and Software-Defined Storage,” arguing that we need to avoid reinventing the wheel when it comes to Software-Defined X and storage. My very good friend Stephen Foskett wrote a (fair and excellent) response entitled “Why OpenStack Doesn’t Need Fibre Channel Support.”

Stephen’s argument, essentially, is that Fibre Channel, and the conventional IT it represents, do not really have a place in open systems such as the OpenStack ecosystem. Thing is, I think Stephen’s argument is a little too late. The cat has left the barn, the genie has been let out of the toothpaste tube, and the train has left the gate. Or something to that effect.

We’re Underway!

That ship has sailed!

The issue – from an OpenStack perspective – doesn’t appear to be whether or not to include Fibre Channel, as Stephen indicates. Fibre Channel has been included (in the form of open zoning support in Havana, and zone management support in Icehouse, with additional blueprint proposals for Juno).

Instead, the question is the level and extent to which Fibre Channel can be managed and controlled by some type of orchestration layer using OpenStack infrastructure.

In other words, we’ve already started work with Fibre Channel in OpenStack, but the implementation is sketchy (for now). For instance, I would not recommend implementing zoning features in OpenStack at the moment due to some pretty significant bugs inside the Cinder code. This is not to say that they can’t/won’t be fixed, but obviously the work with respect to Fibre Channel is well underway.
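For what it’s worth, once an operator has wired up a Fibre Channel backend driver (and, optionally, the zone manager) in Cinder’s configuration, consuming that storage looks no different from any other block request. Here is a minimal sketch using python-cinderclient; the credentials, the Keystone URL, and the “fc-backend” volume type name are placeholders made up for illustration, and it assumes the FC backend has already been set up by the operator.

```python
# Minimal sketch, not production code: request a volume from a Fibre
# Channel-backed Cinder backend via python-cinderclient. Credentials,
# the auth URL, and the 'fc-backend' volume type are placeholders; an
# FC driver (and optionally the FC zone manager) is assumed to already
# be configured in cinder.conf.
from cinderclient import client

cinder = client.Client(
    '2',                              # Cinder API version 2
    'demo',                           # username (placeholder)
    'secret',                         # password (placeholder)
    'demo',                           # project/tenant (placeholder)
    'http://controller:5000/v2.0',    # Keystone auth URL (placeholder)
)

# Create a 10 GB volume against a volume type mapped to the FC backend.
vol = cinder.volumes.create(size=10,
                            name='fc-test-volume',
                            volume_type='fc-backend')

# Zoning (if enabled) is triggered later, at attach time, by the zone manager.
print(vol.id, vol.status)
```

The point being: the plumbing differences (zoning, fabric login, multipathing) live below the API, which is exactly where the current Cinder work, bugs and all, is happening.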

Reinventing the Wheel

There is a key point to keep in mind as well: do we want a situation where we have dedicated networks to handle specific application types (e.g., deterministic storage applications)? We’ve spent the last six years moving from dedicated networks to converged networks, specifically in anticipation of greater bandwidth capacity.

Does this mean that with OpenStack, I have to revert to 2007 designs? I certainly hope not.

At the moment OpenStack and Cinder appear to be handling the “sexy” components of DC design. Fibre Channel is not sexy, of course, but when it comes to transporting data I personally would rather have reliable over sexy any day of the week.

Having said that, I think Stephen’s argument that other storage types (e.g., iSCSI) are “good enough” for the bulk of Data Center needs has merit. High throughput, lower latency, and greater software flexibility can, indeed, improve most Data Center applications over time.

The question for me, though, is this: I thought Software-Defined Networks (and Storage) were designed to improve Data Centers in general, not just the sexy applications, and not just the “good enough” ones either. It appears that OpenStack, and Cinder by extension, have already decided that it’s important to manage the applications that require Fibre Channel connectivity.

That means, in turn, that if you’re going to do this, you better do it correctly.

You. Yes, you.

At The End of the Day…

Not… quite.

To that point, I actually do not necessarily disagree with Mr. Foskett (on the whole). And I’m not just saying that because he agrees with me for the most part, too! :)

I am arguing, however, that since we have already begun to accept that applications are likely to need Fibre Channel connectivity in OpenStack environments, it’s extremely important that we do not reinvent the wheel, squeaky as it will become.

When I was in Atlanta, I saw and heard several conversations about how and what to do – questions that could have been answered by asking the right people. It just so happens that those people have been doing this for a long time, and belong to organizations like SNIA and FCIA.

None of this, by the way, should distract us from the key point of either Stephen’s or my original article: the relationship between storage and its corresponding network should not be ignored. Or, to put it more directly: ignore it at your own peril. There are reasons why storage networks have evolved the way they have, and to ignore those reasons because they’re not sexy enough is a significant risk that Data Centers should not – and cannot afford to – take.

  1. Craig Johnson
    May 29, 2014 at 10:54 am

    It is interesting – does this mean the end of SAN A/B designs? FC has always had a higher level of reliability, but at an additional cost. I love FC/FCoE, but it will be interesting to see whether that price point can survive against iSCSI, etc.

    • May 29, 2014 at 10:58 am

      I confess I’m not sure what you’re asking, actually. Can you be more specific in terms of OpenStack deployments?
