FCoE, iSCSI, and InfiniBand – Oh My!

In Storage, Technology by J Michel Metz

Over at Etherealmind, Greg Ferro wrote a couple of pieces on FCoE that beg for a response. Mr. Ferro obviously understands a great deal about the storage world, but when it comes to FCoE there are some points on which I must disagree.

The first article, “FCoE Isn’t a Replacement for InfiniBand, It’s a Cheaper Copy That Customers Will Buy,” identifies InfiniBand as the benchmark for unified fabric protocols within the data center. Ferro’s conclusion is that FCoE is nothing special and offers nothing that InfiniBand can’t do better; he also makes the interesting assumption that QLogic and Mellanox have “repurposed [their InfiniBand silicon] for FCoE HBAs (sic).”

The very first problem is Ferro’s assumption that FCoE is intended to be a replacement for InfiniBand; it isn’t. FCoE is an extension of Fibre Channel, and it has been marketed as such from the beginning. I can’t speak for Mellanox, but I happen to know for a fact that Q’s silicon isn’t repurposed IB.

How do I know? Because I marketed FCoE for QLogic before the first CNAs were even released.

Mr. Ferro compares FCoE and iSCSI in his second article, “FCoE is JUST a transition technology,” arguing that the latter is definitely “good enough.” In short, he places FCoE below the quality benchmark set by InfiniBand on the one hand, yet not as cheap as iSCSI on the other. He doesn’t actually explain what FCoE is supposed to be a transition between, but it appears to be stuck in between iSCSI and InfiniBand.

Major issues with Mr. Ferro’s assumptions cascade into erroneous conclusions, and ultimately show a misunderstanding of FCoE’s role in storage networking.

iSCSI vs. FCoE

It’s true that the fight between iSCSI and Fibre Channel has reached religious-war proportions (think Mac vs. PC) among storage networking geeks. The truth, though, is that FCoE is not necessarily a direct competitor to iSCSI any more than Fibre Channel is.

Ferro is correct when he says that iSCSI is good enough and competent enough (when configured properly), and that “LeftHand and EqualLogic have demonstrated that customers don’t always want FC.” But by and large there is a difference between the customers (SMEs) who gravitate toward lower-cost iSCSI solutions and those (data centers) who gravitate toward FC. Cost happens to be just one of many reasons why data centers choose FC, but once you pass a certain threshold (as you will in a DC), the performance of iSCSI becomes a significant concern.

Aha! I hear you cry. What about 10GbE? Well, there are two reasons why it’s not fair to equate 1GbE with 10GbE. First, the original cost argument flies out the window. 10GbE is not cheap, and it isn’t ubiquitous; it requires significant upgrades in HBAs, cabling (Cat6a, anyone?), and switches. Whatever cost benefit you had from using your existing 1GbE switches and assets no longer applies.

The second problem is, again, performance. iSCSI is not an encapsulation protocol in the same way FCoE is: it maps SCSI onto TCP/IP running over Ethernet, which means significant header processing all the way up to Layer 4. As link speeds increase, so does this overhead. By the time 40GbE and 100GbE arrive in the marketplace, that overhead will consume such a large percentage of processing time/cycles that it significantly diminishes the cost/work ratio.
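To put rough numbers on that scaling, here is a back-of-the-envelope sketch (my own illustration, not anything from Ferro’s posts) of how many full-size frames a host has to push through its Ethernet/IP/TCP/iSCSI stack as the link speed climbs. It assumes standard 1500-byte frames and no TCP offload; the header sizes are the commonly published ones.

```python
# Back-of-the-envelope: per-frame processing load for iSCSI as link speed grows.
# Assumes standard 1500-byte Ethernet payloads (no jumbo frames) and ignores
# TCP/IP offload engines -- purely illustrative, not a benchmark.

ETH_OVERHEAD = 14 + 4 + 8 + 12      # Ethernet header + FCS + preamble + inter-frame gap (bytes)
IP_TCP_ISCSI_HDRS = 20 + 20 + 48    # IPv4 + TCP + iSCSI Basic Header Segment (bytes)
MTU = 1500                          # bytes of Ethernet payload per frame

def frames_per_second(link_gbps: float) -> float:
    """How many full-size frames arrive per second at a given line rate."""
    bits_per_frame = (MTU + ETH_OVERHEAD) * 8
    return link_gbps * 1e9 / bits_per_frame

for speed in (1, 10, 40, 100):
    print(f"{speed:>3} GbE: ~{frames_per_second(speed):,.0f} frames/sec of Layer 2-4 header processing")

print(f"Header bytes above Ethernet in every frame: {IP_TCP_ISCSI_HDRS}")
```

At 1GbE that works out to roughly 80,000 frames per second; at 10GbE it is over 800,000, and at 100GbE more than 8 million, every one of which drags the full TCP/IP/iSCSI header stack through the initiator unless offload hardware steps in.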

FCoE, however, is a fully encapsulating protocol: an entire FC frame is carried inside a single Ethernet frame, so there is no need to break up packets and re-assemble them at the destination. This means two things in particular. First, there’s very little overhead (making it a better performer at higher speeds), and second, it’s not routable (which confines it to the data center, whereas iSCSI and FCIP are not so confined).
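To make the “fully encapsulated” point concrete, here is a small illustrative sketch, using the commonly cited frame sizes, of why a complete FC frame rides intact inside one slightly oversized (“baby jumbo”) Ethernet frame rather than being segmented and reassembled.

```python
# Why FCoE carries a Fibre Channel frame whole, and why it needs a slightly
# larger Ethernet MTU to do it. Sizes are the commonly cited ones; treat this
# as an illustration rather than a spec excerpt.

FC_MAX_FRAME = 24 + 2112 + 4    # FC header + max FC payload + FC CRC (bytes)
FCOE_ENCAP = 14 + 4             # FCoE encapsulation header + trailer/EOF (bytes)
STANDARD_MTU = 1500             # ordinary Ethernet payload size
BABY_JUMBO_MTU = 2240           # MTU typically configured for FCoE links

needed = FC_MAX_FRAME + FCOE_ENCAP
print(f"Largest FC frame after FCoE encapsulation: {needed} bytes")
print(f"Fits in a standard 1500-byte MTU?      {needed <= STANDARD_MTU}")
print(f"Fits in a ~2240-byte baby jumbo MTU?   {needed <= BABY_JUMBO_MTU}")
```

Because the FC frame never has to be carved up, there is no TCP-style segmentation and reassembly, and no IP layer in the stack at all, which is exactly why the traffic stays inside the data center.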

InfiniBand vs. FCoE?

Ferro’s placement of FCoE as a competitor to InfiniBand is extraordinarily premature. While 10Gb Ethernet is fast, it’s nowhere near as fast as InfiniBand’s QDR speeds. Moreover, Ethernet (the “E” of FCoE) comes nowhere close to the ultra-low latency provided by IB.
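For concreteness, a quick calculation using InfiniBand’s published QDR rates (four lanes at 10 Gb/s signaling, less the 8b/10b line-encoding overhead) shows the bandwidth gap; this is my own arithmetic, not a figure from Ferro’s posts.

```python
# Rough bandwidth comparison behind the "nowhere near as fast" claim.
# Uses the standard published QDR InfiniBand rates; illustrative only.

def ib_effective_gbps(lanes: int, signal_gbps_per_lane: float) -> float:
    """Usable data rate of an InfiniBand link after 8b/10b line encoding."""
    return lanes * signal_gbps_per_lane * 8 / 10

qdr_4x = ib_effective_gbps(lanes=4, signal_gbps_per_lane=10.0)  # QDR, 4x link
print(f"QDR 4x InfiniBand: ~{qdr_4x:.0f} Gb/s of usable data")   # ~32 Gb/s
print("10 Gigabit Ethernet: ~10 Gb/s of usable data")            # roughly a third of QDR
```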

This, combined with the fact that IB is better suited to interprocessor communication than to storage communication (there are very few IB SANs in the data center overall), means that we’re talking about different tools for different uses.

Ferro laments Cisco’s departure from the IB space, but he seems to ignore the reason why Cisco moved away from IB in the first place. As a unified fabric platform, IB has the speed to handle multi-protocol traffic, but not the market infrastructure behind it: I have spoken with more than a thousand data center professionals over the last year and a half, and not one of them wanted to replace their Ethernet technology with IB.

Aside from the learning curve (the propeller spins the other way) and the paucity of IB SAN tools compared to FC or Ethernet, there is the cost. While Ferro is right that the cost per packet sent is very low, the capital outlay is extremely high. QDR HCAs (the cards that go into the servers) are not cheap and commoditized the way HBAs (the FC and iSCSI cards that go into servers) are. Switches start at a few tens of thousands of dollars, and director-class IB switches begin in the six-figure range.

For a company that doesn’t have an existing IB SAN infrastructure, the barrier to entry is the cost, no matter how many packets it can send.

Conclusion

FCoE is, first and foremost, an extension of Fibre Channel. It carries the FC protocol into the future and holds great promise as an enterprise data center technology.

iSCSI is and will continue to be a solid storage networking protocol for lower-end, lower-speed networks that do not need tremendous bandwidth. iSCSI’s greatest enemy will be the rush to virtualization, which places several dozen VMs on a single machine, each vying for a share of 1Gb of Ethernet bandwidth. Even 10GbE will have to deal with the overhead issue, and 10GbE is not cheap; pretty soon you start getting into a cost/benefit comparison with 8- or 16Gb FC. If you’re going to jump to 10GbE, why not do 10Gb FCoE for the same amount of money and keep your existing FC infrastructure and tools?
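To illustrate the bandwidth squeeze, here is a trivial sketch dividing one link evenly among the VMs on a host; the 30-VM count is a hypothetical stand-in for “several dozen,” and a best-case even split at that.

```python
# Quick arithmetic behind the virtualization point: divide the pipe by the
# number of VMs contending for it. The VM count is a hypothetical example.

def per_vm_bandwidth_mbps(link_gbps: float, vm_count: int) -> float:
    """Best-case even split of one link across all VMs on the host."""
    return link_gbps * 1000 / vm_count

for link in (1, 10):
    share = per_vm_bandwidth_mbps(link, vm_count=30)   # "several dozen" VMs
    print(f"{link:>2} GbE shared by 30 VMs: ~{share:.0f} Mb/s per VM")
```

Thirty VMs on a 1GbE link leaves each one with roughly 33 Mb/s in the best case; even at 10GbE the per-VM share is only a few hundred megabits, before any protocol overhead is counted.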

InfiniBand SANs are not commonplace (in the sense that iSCSI or FC SANs are). IB is best suited to server-to-server communication, where latency and bandwidth requirements drive the design. The notion that FCoE is designed to replace IB cannot be supported by looking at either QLogic’s or Mellanox’s IB/FCoE offerings, as both companies are moving ahead with dual strategies. Nor can Cisco’s abandonment of IB be extrapolated to the IB/FCoE marketplace as a whole.
