FCoE vs. iSCSI: The Cagefight! – Flexibility

This is the second in a series of posts designed to address some of the questions I’ve posed with respect to FCoE vs. iSCSI, in an attempt to take a detached view towards the pros and cons of each technology as it relates to measuring up in the data center. In this post, we will examine the question of whether iSCSI can provide the same type of traffic flexibility that FCoE can, for the same level of service.

In the last Cagefight! post, we looked at whether or not iSCSI could measure up performance-wise, all things being equal. What I found was that yes, all things being equal, iSCSI has the definite potential to meet the performance needs of the average DC environment. There are still some performance questions that will need to be answered over time, but all in all iSCSI appears to benefit greatly from tuning techniques and the very nature of 10GbE.

All things are not, of course, equal. Performance is only one of the elements (albeit crucial) to be considered, and it’s not the only element that is considered to be part of FCoE’s “hype.”

Definition of Terms

When I bring up the word "flexibility" I understand that I have to be very careful.

Obviously, flexibility can mean a number of things. For instance, it can mean flexibility of deployments, flexibility of functionality, flexibility of configurations, and even flexibility of scope (e.g., within a data center versus long-distance connectivity).

Flexibility, in this case, refers to two major aspects of data center architecture:

  1. Giving administrators increased control over data flows, and
  2. Reducing the limitations placed on customization

I also have to be careful so that I don’t put my foot in my mouth up to my upper thigh, because I gave Gartner’s Joe Skorupa a stern talking-to for confusing FCoE with DCB, and that kind of confusion makes it easy to create problems for people when you describe one technology while blasting the other. To that end, I’ll try not to duplicate Gartner’s errors when exposing “FCoE myths.”

Be vewy vewy careful...

To that end, I want to be clear here: I’ll be talking about the flexibility that Converged Enhanced Ethernet (CEE)/Data Center Bridging (DCB) brings to the table. FCoE in-and-of-itself is no more or less inherently flexible than iSCSI, which is why it’s critical to identify this caveat up front.

FCoE is hyped as flexible because 1) people (like Gartner) confuse it with DCB, and 2) the modifications made to Ethernet in order to run FCoE make the environment more flexible.

FCoE is no more or less inherently flexible than iSCSI

A Little Bit o’ Perspective

Okay, so we’ve already answered the question, haven’t we? After all, iSCSI and FCoE are merely methods of transferring block-based protocols across a wire. Problem solved. Time to go.

Well, not quite.

As Jennifer Shiff points out, iSCSI has an excellent case for capitalizing on the 10GbE pipe, and she illustrates how the Dell-commissioned Forrester survey shows an incredibly strong inclination by respondents to stick with iSCSI. After all, we’ve already seen just how well you can tweak the performance. What’s not clear, however, is how many of these users are either DAS or pre-existing iSCSI customers to begin with.

Handling the Big Questions

This is important because data centers are more likely to stick with the technologies they know and understand. For instance, DAS houses are unlikely to want to jump to the additional complexity of a Fibre Channel SAN, which requires a leap not only from a storage perspective but also from a network management perspective. They are much more likely to have someone on staff who is a LAN administrator and who could pull double-duty as an iSCSI SAN administrator (or try to).

In fact, I would be willing to wager that environments that currently use FC are outnumbered by the cheaper DAS and iSCSI environments, even if the total amount of storage in FC environments (not to mention capital investment) is greater. Disclosure – I don’t have those numbers handy at the moment, so I’m guessing, but conventional wisdom would indicate that for every FC SAN environment there are many more DAS and iSCSI SAN environments, for the simple reason that FC requires an additional, different skill set.

If we accept those assumptions, it makes sense that more of the survey respondents would be looking at iSCSI as an option or alternative to DAS when moving to 10GbE. Is it unreasonable to expect that the percentage of respondents selecting 10Gb iSCSI would skew accordingly?

Lost in Translation

What is missing in the 10Gb iSCSI argument, however, is that one of the major assumptions behind moving from DAS to 1Gb iSCSI cannot be directly translated to 10Gb; namely, that Ethernet is ‘free.’

Most IT organizations have 1GbE switches of various sizes and capabilities lying around. Even small organizations can put together a dedicated 1Gb iSCSI SAN at little-or-no cost because they already have the hardware (cables and/or switches) or can get it cheaply, and software initiators are free.

When you move to 10Gb, however, suddenly the rules change. It’s not only the switches that companies have to be concerned about, it’s also the transceivers and 10GbE adapters/NICs. Suddenly we’re not talking free any more, and the cost differential between 10Gb iSCSI and 10Gb FCoE (in terms of hardware) is not quite so disparate.

So here’s the question, then: what is your data center growth strategy, really?

If your DC is going to grow significantly in the next few years, wouldn’t it make sense to look at how much it would cost to provide yourself with greater flexibility?

See, with DAS, you get DAS. Buy yourself bigger hard drives and RAID arrays, and get used to manual data migration.

Piecemeal approach

With iSCSI, you get iSCSI (and NAS heads available via iSCSI). Depending on whether or not you decide to take various best-practices advice, you’ll need to create separate networks, which means additional cables and consumed switch ports. Adding FC devices and networks means adding routers into the mix as well. There are also limits on the theoretical and practical number of hosts on an iSCSI network compared to FC installations.

What about mixed (heterogeneous) environments? If you currently have a mixed environment, wouldn’t it make more sense to consider upgrading the entire infrastructure to 10Gb rather than just one element?

With FCoE, You Get Eggroll

By now everyone who has done any research into FCoE has seen this graphic, or one like it:

Priority Flow Control

The key takeaway from this is that there are 8 non-hierarchical “lanes” which allow traffic types to be separated from each other. Congestion issues that affect lane 5, as in this example, do not affect the other lanes. FCoE traffic can share the full bandwidth pipe with TCP/IP, iSCSI, iWARP for clustering, and others as well (e.g., VoIP). Some of these lanes can be lossy while others are lossless.
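To make the lane behavior concrete, here’s a minimal sketch in Python (purely conceptual – this is not a real switch API, and the priority-to-traffic mapping is just an assumption for illustration) of how PFC lets each of the eight 802.1p priorities behave independently: a lossless lane responds to congestion by pausing its sender, a lossy lane drops, and neither behavior spills over into the other lanes.

```python
from dataclasses import dataclass, field

@dataclass
class Lane:
    priority: int          # 802.1p priority value, 0-7
    lossless: bool         # True = PFC-enabled (e.g., FCoE), False = lossy (e.g., bulk TCP)
    queue: list = field(default_factory=list)
    paused: bool = False
    drops: int = 0

    def enqueue(self, frame, queue_limit=4):
        if len(self.queue) < queue_limit:
            self.queue.append(frame)
        elif self.lossless:
            self.paused = True     # assert a per-priority PAUSE instead of dropping
        else:
            self.drops += 1        # classic Ethernet behavior: drop on congestion

# Eight independent lanes; only priority 3 (a common FCoE mapping) is lossless here.
lanes = [Lane(p, lossless=(p == 3)) for p in range(8)]

# Congest lane 5 (lossy) and lane 3 (lossless) with more frames than they can hold.
for i in range(10):
    lanes[5].enqueue(f"tcp-{i}")
    lanes[3].enqueue(f"fcoe-{i}")

print(lanes[5].drops, lanes[5].paused)   # 6 drops, never pauses
print(lanes[3].drops, lanes[3].paused)   # 0 drops, pause asserted
print(lanes[0].drops, lanes[0].paused)   # untouched lane: 0 drops, no pause
```

The point isn’t the code, of course; it’s that the drop-vs-pause decision is made per lane, which is what lets lossless FCoE traffic and lossy LAN traffic share the same wire without stepping on each other.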

Now here comes the cool part.

Benefits of ETS

Because PFC provides a way to manage individual traffic classes, we have a way of isolating and controlling them. Coupling that with Enhanced Transmission Selection (ETS) allows us to take those lanes and associate them with VLAN tag priority levels, prioritize traffic, and/or limit bandwidth per level.

ETS has some pretty interesting goals:

  • Classify frames, assign them to groups (called Traffic Class Groups – TCGs), and map each group to an 802.1Q VLAN priority level
  • Configure available bandwidth among the TCGs
  • Schedule transmission of frames
  • Permit Traffic Class Groups to have more than one traffic class

I started coming up with all the different permutations of what happens when you combine the different TCGs with the prioritization rules that can be applied, and had intended to list them here with a brief explanation, but quickly realized that there would be nothing “brief” about it! Each ETS device is required to support at least three TCGs, each with configurable bandwidth and VLAN priority level mapping. That, in turn, can be prioritized.
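To give a flavor of what that configurability looks like, here’s a rough Python sketch (hypothetical group names and percentages, and a deliberately simplified scheduler – illustrative arithmetic, not the 802.1Qaz algorithm itself) of ETS-style bandwidth sharing: three Traffic Class Groups on a 10GbE link, each mapped to one or more priorities and given a guaranteed share, with any bandwidth a group doesn’t use offered to the groups that want more.

```python
LINK_GBPS = 10.0

# Hypothetical Traffic Class Groups: 802.1p priorities in the group and a guaranteed share.
tcgs = {
    "lan":     {"priorities": [0, 1, 2], "share": 0.40},
    "storage": {"priorities": [3],       "share": 0.40},  # e.g., FCoE or lossless iSCSI
    "cluster": {"priorities": [4, 5],    "share": 0.20},  # e.g., iWARP / IPC traffic
}

def allocate(demand_gbps):
    """Give each TCG min(demand, guarantee), then hand spare capacity to groups still hungry."""
    alloc = {name: min(demand_gbps[name], cfg["share"] * LINK_GBPS)
             for name, cfg in tcgs.items()}
    leftover = LINK_GBPS - sum(alloc.values())
    hungry = {name for name in tcgs if demand_gbps[name] > alloc[name]}
    while leftover > 1e-9 and hungry:
        bump = leftover / len(hungry)          # split spare bandwidth evenly
        for name in list(hungry):
            extra = min(bump, demand_gbps[name] - alloc[name])
            alloc[name] += extra
            leftover -= extra
            if demand_gbps[name] - alloc[name] < 1e-9:
                hungry.discard(name)
    return alloc

# LAN is quiet; storage and cluster both want more than their guarantees.
print(allocate({"lan": 1.0, "storage": 7.0, "cluster": 4.0}))
# -> lan keeps its 1.0 Gb/s; the 3.0 Gb/s it isn't using is split between
#    storage (4.0 -> 5.5) and cluster (2.0 -> 3.5) on top of their guarantees.
```

Change the shares, the group memberships, or the priority mappings and you get a different carving of the same pipe – which is exactly the kind of knob-turning I couldn’t summarize briefly above.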

Yeah, baby, we got your flexibility right heah!

So What’s the Skinny, Ginny?

What does this mean? Well, Gartner says that this adds to complexity which, in turn, drives up costs. Okay, I’m willing to grant that assertion at face value.

However, any time you provide increased control, along with increased granularity of that control, you provide the means by which admins are able to customize and finely tune their environments. So while the potential for complexity does indeed increase, so does the opportunity to squeeze out a higher performance/cost ratio.

It’s important to note that this has to do with DCB/CEE, not FCoE per se. As we are looking at the underlying technologies as they relate to our decision-making processes, though, it’s very clear that by choosing a 10Gb iSCSI foundation over FCoE we are also missing some very compelling arguments for flexibility.

We can crow all we want about how much performance we can get by tuning and tweaking iSCSI, but that’s nothing compared to what an FCoE-based system can provide, because the underlying technology that enables FCoE is inherently more flexible, scalable, and cost-efficient.

For a data center that is preparing for growth, and that needs dynamic flexibility over time without having to completely re-configure in the future, DCB/FCoE wins this cagefight match solidly and resoundingly.


You can subscribe to this blog to get notifications of future articles in the column on the right. You can also follow me on Twitter: @jmichelmetz