FCoE vs. iSCSI: The Cagefight! – Flexibility

This is the second in a series of posts designed to address some of the questions I’ve posed with respect to FCoE vs. iSCSI, in an attempt to take a detached view towards the pros and cons of each technology as it relates to measuring up in the data center. In this post, we will examine the question of whether iSCSI can provide the same type of traffic flexibility that FCoE can, for the same level of service.

In the last Cagefight! post, we looked at whether or not iSCSI had the capacity for measuring up performance-wise, all things being equal. What I found was that yes, all things being equal, it appears that iSCSI has the definite potential for meeting the performance needs of the average DC environment. There are still some performance questions that will need to be answered over time, but all in all iSCSI appears to benefit greatly from tuning techniques and the very nature of 10GbE.

All things are not, of course, equal. Performance is only one element (albeit a crucial one) to be considered, and it’s not the only element considered to be part of FCoE’s “hype.”

Definition of Terms

When I bring up the phrase flexibility, I understand that I have to be very careful.

Obviously, flexibility can mean a number of things. For instance, it can mean flexibility of deployments, flexibility of functionality, flexibility of configurations, and even flexibility of scope (e.g., within a data center versus long-distance connectivity).

Flexibility, in this case, refers to two major aspects of data center architecture:

  1. Giving administrators increased control over data flows, and
  2. A reduction in limitations on customization

I also have to be careful so that I don’t put my foot in my mouth up to my upper thigh: I gave Gartner’s Joe Skorupa a stern talking-to for confusing FCoE with DCB, and for how describing one while blasting the other can easily create problems for people. To that end, I’ll try not to duplicate Gartner’s errors when exposing “FCoE myths.”

Be vewy vewy careful...

To that end, I want to be clear here: I’ll be talking about the flexibility that Converged Enhanced Ethernet (CEE)/Data Center Bridging (DCB) brings to the table. FCoE in-and-of-itself is no more or less inherently flexible than iSCSI, which is why it’s critical to identify this caveat up front.

The reason why FCoE is hyped as flexible is because 1) people (like Gartner) confuse it with DCB, and 2) the modifications made to Ethernet in order to run FCoE make the environment more flexible.

FCoE is no more or less inherently flexible than iSCSI

A Little Bit o’ Perspective

Okay, so we’ve already answered the question, haven’t we? After all, iSCSI and FCoE are each merely a method of transferring block-based protocols across a wire. Problem solved. Time to go.

Well, not quite.

As Jennifer Shiff points out, iSCSI has an excellent case for capitalizing on the 10GbE pipe; she illustrates how the Dell-commissioned Forrester survey shows an incredibly strong inclination by respondents to stick with iSCSI. After all, we’ve already seen just how well you can tweak the performance. What’s not clear, however, is how many of these users are DAS shops or pre-existing iSCSI customers to begin with.

Handling the Big Questions

The reason this is important is that DCs are more likely to stick with the technologies they know and understand. For instance, DAS houses are unlikely to want to jump to the additional complexity of a Fibre Channel SAN, which would require a leap not only from a storage perspective but also from a network management perspective. They are much more likely to have someone on staff who is a LAN administrator and could pull double-duty as an iSCSI SAN administrator (or try to).

In fact, I would be willing to wager that the environments currently using FC are outnumbered by the cheaper DAS and iSCSI environments, even if the total amount of storage in FC environments (not to mention capital investment) may be greater. Disclosure: I don’t have those numbers handy at the moment, so I’m guessing, but conventional wisdom would indicate that for every FC SAN environment there are many more DAS and iSCSI SAN environments, for the simple reason that FC requires an additional, different skill set.

If we accept those assumptions, it makes sense that more of the survey respondents would be people likely to look at iSCSI as an option or alternative to DAS when moving to 10GbE. Is it unreasonable to expect that the percentage of respondents who would select 10Gb iSCSI tracks accordingly?

Lost in Translation

What is missing in the 10Gb iSCSI argument, however, is that one of the major assumptions in moving from DAS to 1Gb iSCSI cannot be directly translated to 10Gb; namely, that Ethernet is ‘free.’

Most IT organizations have 1GbE switches lying around of various sizes and capabilities. Even small organizations can put together a dedicated 1Gb iSCSI SAN network at little-or-no cost because they already have the hardware (cables and/or switches) or can get them cheaply, and software initiators are free.

When you move to 10Gb, however, suddenly the rules change. It’s not only the switches that companies have to be concerned about, it’s also the transceivers and 10GbE adapters/NICs. Suddenly we’re not talking free any more, and the cost differential between 10Gb iSCSI and 10Gb FCoE (in terms of hardware) is not quite so disparate.

So here’s the question, then: what is your data center growth strategy, really?

If your DC is going to grow significantly in the next few years, wouldn’t it make sense to look at how much it would cost to provide yourself with greater flexibility?

See, with DAS, you get DAS. Buy yourself bigger hard drives and RAID arrays, and get used to manual data migration.

Piecemeal approach

With iSCSI, you get iSCSI – and NAS heads available via iSCSI. Depending on whether or not you decide to take various best-practices advice, you’ll need to create separate networks, which means additional cables and used-up switch ports. Adding FC devices and networks means adding routers into the mix as well. There is also a limit on the theoretical and practical number of hosts on an iSCSI network compared to FC installations.

What about mixed (heterogeneous) environments? If you currently have a mixed environment, wouldn’t it make more sense to consider upgrading the entire infrastructure to 10Gb rather than just one element?

With FCoE, You Get Eggroll

By now everyone who has done any research into FCoE has seen this graphic, or one like it:

Priority Flow Control

The key takeaway from this is that there are 8 non-hierarchical “lanes” which allow traffic types to be separated from each other. Congestion issues that affect lane 5, as in this example, do not affect other lanes. FCoE traffic can share the full-bandwidth pipe with TCP/IP, iSCSI, iWARP for clustering, and others as well (e.g., VoIP). Some of these lanes can be lossy while others are lossless.
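For the sake of illustration, the lane behavior described above can be sketched in a few lines of Python. The lane-to-traffic assignments here are hypothetical examples of my own choosing, not defaults from the PFC specification or any vendor:

```python
# Minimal sketch of Priority Flow Control (PFC, IEEE 802.1Qbb) semantics.
# Lane-to-traffic assignments below are illustrative, not spec defaults.

lane_traffic = {
    0: "bulk TCP/IP",
    3: "FCoE",
    4: "iSCSI",
    5: "iWARP (clustering)",
    7: "VoIP",
}

lossless = {3}  # lanes with PFC enabled: congestion pauses rather than drops

def on_congestion(lane):
    """Congestion on one lane never spills into the others: a lossless lane
    is paused with a per-priority PAUSE frame; a lossy lane drops frames."""
    kind = "paused (lossless)" if lane in lossless else "drops frames (lossy)"
    others = sorted(set(lane_traffic) - {lane})
    return f"lane {lane} {kind}; lanes {others} unaffected"

print(on_congestion(5))  # congestion on the iWARP lane leaves FCoE untouched
print(on_congestion(3))  # the FCoE lane pauses rather than drops
```

The point the sketch makes is the one in the graphic: the decision of what to do under congestion is made per lane, so a storm in one traffic class cannot starve or corrupt another.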

Now here comes the cool part.

Benefits of ETS

Because PFC provides a way to manage individual traffic classes, we have a way of isolating and controlling them. Coupling this with Enhanced Transmission Selection (ETS) allows us to take those lanes and associate them with VLAN tag priority levels, prioritize traffic, and/or limit bandwidth per level.

ETS has some pretty interesting goals:

  • Classify frames, assign them to groups (called Traffic Class Groups – TCGs), and map each group to a .1Q VLAN priority level
  • Configure available bandwidth among the TCGs
  • Schedule transmission of frames
  • Permit Traffic Class Groups to have more than one traffic class

I started coming up with all the different permutations of what happens when you take the different TCGs and the prioritization rules that can be applied, and had intended to list them here with a brief explanation, but quickly realized that there would be nothing “brief” about it! Each ETS device is required to support at least three TCGs, each with configurable bandwidth and VLAN priority level mapping. That, in turn, can be prioritized.
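To make the moving parts concrete, here is a minimal, illustrative sketch of ETS-style bandwidth sharing among Traffic Class Groups. The group names, priority mappings, and percentage shares are my own examples, not values from the 802.1Qaz standard:

```python
# Illustrative sketch of Enhanced Transmission Selection (IEEE 802.1Qaz)
# bandwidth sharing among Traffic Class Groups. Group names, priority
# mappings, and shares are hypothetical examples.

tcgs = {
    "LAN":     {"priorities": [0, 1, 2], "share": 0.3},
    "SAN":     {"priorities": [3],       "share": 0.4},  # e.g., FCoE
    "cluster": {"priorities": [5, 6],    "share": 0.3},
}

def allocate(link_gbps, demand):
    """Each TCG is guaranteed its configured share of the link; bandwidth a
    group doesn't use is redistributed to groups that want more, so the link
    stays fully utilized (work-conserving)."""
    alloc = {g: min(demand.get(g, 0.0), cfg["share"] * link_gbps)
             for g, cfg in tcgs.items()}
    spare = link_gbps - sum(alloc.values())
    hungry = [g for g in tcgs if demand.get(g, 0.0) > alloc[g]]
    while spare > 1e-9 and hungry:
        piece = spare / len(hungry)
        for g in list(hungry):
            extra = min(piece, demand[g] - alloc[g])
            alloc[g] += extra
            spare -= extra
            if demand[g] - alloc[g] < 1e-9:
                hungry.remove(g)
    return alloc
```

On a 10Gb link, a LAN group demanding 5Gb while SAN and cluster demand only 2Gb and 1Gb would be allowed to exceed its 30% floor and take the slack; under full contention, each group falls back to its guaranteed share. That is the “flexibility” being claimed here: guarantees plus opportunistic sharing, configurable per group.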

Yeah, baby, we got your flexibility right heah!

So What’s the Skinny, Ginny?

What does this mean? Well, Gartner says that this adds to complexity which, in turn, drives up costs. Okay, I’m willing to grant that assertion at face value.

However, any time you provide increased control, along with increased granularity of that control, you provide the means by which admins are able to customize and finely tune their environments. So while the potential for complexity does indeed increase, so does the opportunity to squeeze out a higher performance/cost ratio.

It’s important to note that this has to do with DCB/CEE, not FCoE per se. As we look at the underlying technologies as they relate to our decision-making processes, though, it’s very clear that by choosing a 10Gb iSCSI foundation over FCoE we are also passing up some very compelling flexibility arguments.

We can crow all we want about how much performance we can get by tuning and tweaking iSCSI, but that’s nothing compared to what a FCoE-based system can provide, because the underlying technology that enables FCoE is inherently more flexible, scalable, and cost-efficient.

In terms of a data center that is preparing for growth and the need for dynamic flexibility over time, without the need to completely re-configure in the future, DCB/FCoE wins this cagefight match solidly and resoundingly.


You can subscribe to this blog to get notifications of future articles in the column on the right. You can also follow me on Twitter: @jmichelmetz


8 Comments

  • Jason Blosil March 31, 2010 at 07:18

    Very interesting article. But I was left with the impression that you believe FCoE is the only protocol that would benefit from DCB. All Ethernet traffic types should benefit from DCB, including NFS, CIFS, and iSCSI. FCoE is great because it allows you to integrate existing FC investments onto a shared Ethernet network. But the skill sets required to manage traditional FC are essentially the same as those required to manage FCoE. So, for a new deployment or for making a transition from DAS, FCoE requires knowledge and skill with FC semantics and tools.

    It’s the new data center environment that will influence the current definition of flexibility, scalability, and performance. Service oriented infrastructures (aka Clouds) will take advantage of native routing of IP protocols as well as the native use of virtual IP addressing for flexible management of virtual and physical resources. These features, along with support for both 1GbE and 10GbE make a very strong argument to move to IP based storage protocols. I would argue that iSCSI is more scalable, flexible, and higher performing than FCoE in that context.

    Many companies are looking into the performance characteristics of iSCSI and FCoE in congested Ethernet environments supporting DCB. Understanding that data will offer insight into how to more accurately position both protocols. Regardless of which is deployed, both protocols offer a means of simplifying the data center by converging onto a single network technology for all storage traffic needs, which is a win for the end user.

    • J Michel Metz March 31, 2010 at 07:44

      As I wrote the article I knew that there would be a risk that I was giving the impression that “FCoE is the only protocol that would benefit from DCB.” This is, as you accurately point out, untrue. Any SAN- or LAN-based technology can benefit from the enhancements made via DCB.

      But that’s my point. If you are going with iSCSI, the argument can be made that you are needlessly limiting yourself to iSCSI, whereas with DCB (and by inclusion FCoE) you can have both – and more. DCB/FCoE provides greater flexibility (as I had defined it) and can allow for additional growth down the line should DCs become heterogeneous.

      You are correct that DAS environments would require learning a new set of management tools as well as architectures. But if you are 100% DAS, are we really talking about a true “data center”? It seems to me that there are several intermediary steps on that journey that shouldn’t be skipped over.

      I do disagree that iSCSI provides “a means of simplifying the data center by converging onto a single network technology,” however, for the reasons listed above and in other posts. I know of no FC administrator who wants to shift to iSCSI to handle those storage needs, for instance.

  • Joe Onisick April 12, 2010 at 22:42

    Great article. Simply put, implementing a DCB-capable 10GE network gives you the option to choose your storage protocol or, as importantly, not choose – meaning use the right protocol for each individual application type.

  • Scott Owens May 4, 2010 at 17:48

    Mr. Metz,

    You wrote:
    “I know of no FC administrator who wants to shift to iSCSI to handle those storage needs, for instance.”

    This was true of Novell administrators when faced with TCP instead of IPX/SPX.
    You are describing an individual who feels their needs or knowledge exceed the corporation’s, rather than asking whether something they are not familiar with has a role where they work. I don’t know any FC administrators; I know storage administrators, of which FC is a part – so are disk layout, backup concepts, and performance goals, all of which are more important than how blocks are delivered to servers.
    My company has a half dozen 9500 series fiber switches and 9216s. There is one guy who can even partially manage them (yes, this is an issue).
    IP based storage … that can be run by most of the advanced services team, who know VLANs and SVIs too.
    Fabric A and Fabric B sound like some sort of parallel Banyan Vines and AppleTalk networks whose streams cannot cross. Can I run both fabrics over one FCoE/CNA adapter?
    If I were an FC vendor I would be pushing FCoE too – kind of how GM pushed Suburbans and Tahoes on folks who didn’t haul more than 6 bags of groceries.
    But if I were a server admin or storage guy (or their managers) I would certainly be taking a look at the price/cost comparison ratios of other technologies.

  • Scott Owens May 4, 2010 at 17:49

    That last line should have said “price/performance” comparison ratios

  • NOB4 August 12, 2010 at 22:14

    I find myself reading a lot about this, having spent time in both worlds: network, storage, and now network. What I can tell you is that iSCSI is being STRONGLY considered by some very, very large and very, very smart clients. These same clients operate separate FC SANs and, obviously, LANs. The fact of the matter is that Ethernet, IP, and now switch virtualization (i.e., the elimination of Layer 3 hops), along with VPLS, will eliminate the need for DCB for iSCSI. I take two 256-port 10Gb switches and virtualize them with 4 10Gb connections up to 70km apart. Now what? Not only can vMotion be extended at Layer 2, but so can iSCSI. There are only two ways to do that with FC today, and it cannot be done with FCoE due to the single-hop rules. FC requires DWDM or FCIP. What was the last protocol? Yes, FCIP. So everyone can say what they want about the big debate. I can tell you with great certainty that some customers are saying F..it. I have had enough with two separate networks, I have had enough of paying twice, I have heard enough about the big debate; the debate for us is over – iSCSI wins….

  • Tony Ansley October 5, 2012 at 10:38

    You quoted:

    “We can crow all we want about how much performance we can get by tuning and tweaking iSCSI, but that’s nothing compared to what a FCoE-based system can provide, because the underlying technology that enables FCoE is inherently more flexible, scalable, and cost-efficient.”

    BUT…if you can promote iSCSI to the same DCB level as FCoE – via the iSCSI TLV, you get exactly the same DCB “flexibility” and control that you are assigning only to FCoE. Why not have iSCSI in its own CoS and then assign that class to a TCG via ETS that provides you with the priority and bandwidth controls you are only assigning to FCoE in this article.

    By doing so, iSCSI is basically separated out of the lossy TCG that most TCP streams get relegated to within the DCB/FCoE world.

    • J Michel Metz October 7, 2012 at 01:48

      Just realized that my reply didn’t actually register. I hope this one takes.

      It’s important to note that iSCSI TLV is not a magic bullet for iSCSI architectural flexibility. What it does is provide a greater degree of configuration, as it permits devices to exchange CoS value, lossless parameters, and “type iSCSI” in the same fashion that FCoE does. From the perspective of being more or less flexible as a protocol, iSCSI TLV doesn’t add (or subtract) anything to the mix.

      This article was written when the question was whether having a L2 network solution (FCoE) vs. a L4 network solution (iSCSI) would provide inherent benefits or vice versa. If you use PFC on an iSCSI network, making it ‘lossless,’ you effectively give yourself simply another L2 network to design. I’m concerned that people who are looking at lossless iSCSI may be failing to realize they’re going into a ‘law of the hammer’ approach with their networks.

      When I have some time I intend to write more on this subject, the key of course is finding the time…

