Okay, so I confess. I hadn’t heard of Nethra either; it’s a semiconductor company focused on delivering imaging and video solutions for broadcast, medical, and surveillance markets. So why did they announce a release of networking and storage products competing directly with FCoE?
For a company that describes itself as “primarily engaged in providing computer programming services on a contract or fee basis,” the idea that Nethra is involved in the hardware space may seem a bit of a conceptual stretch. Nevertheless, the company won the award for Professional Media and Entertainment System for their FastPipe storage system at the 2010 Storage Visions Conference (a partner program to the International CES).
According to Nethra, the “FastPipe transport adapter enables PCIe bus extension over 10GbE Layer 2 Fiber moving PCIe switching functions into the cloud.” Attaching their proprietary adapter to their proprietary Azores-100 SSD storage provides Nethra with an ability to mimic locally-attached configurations in a cloud environment, they claim.
Technologically speaking, Nethra is mum on the specifics. They claim that multi-protocol convergence (FC, iSCSI, and InfiniBand) can be pushed onto off-the-shelf 10GbE switch technology, but they don’t explain how. Apparently it has to do with NEC’s ExpEther FastFabric switching technology, which virtualizes PCIe I/O across servers. The result, they claim, is low host CPU overhead (they don’t say how much), low latency (they don’t say how low), and compatibility with any PCIe 2.0 x8 slot, which presents the drivers of remotely connected devices to the server OS. Interestingly, it’s not clear which OSes are supported.
From the limited information available online (I don’t have the patience to contact the company for basic tech specs about their products), it appears that the technology uses a virtual PCIe switch implemented via VLAN to speed up interconnections between devices. For the application space they play in – visual data, movies, 3D applications, rendering – this makes sense.
In essence, it seems that the technology merely extends the PCIe capability across the network directly to storage. Pretty cool. But where does FCoE come into play? Does Nethra’s FastPipe technology compete with FCoE on a 1-to-1 basis?
The answer is… it doesn’t.
First of all, FCoE is not a direct-attached protocol. The standards don’t allow for it (it’s a switch-based technology anyway), and it isn’t really applicable to the kind of user applications Nethra’s aiming for. FCoE is a data center protocol, designed for local, massive build-outs that evolve existing Fibre Channel SANs into the future. An FC data center would need to consider switching to a 10GbE infrastructure, which raises rip-and-replace questions.
Second, FCoE’s great strength in terms of bandwidth sharing comes from PFC – Priority Flow Control – a non-hierarchical method of creating QoS channels that keeps one protocol’s traffic from interfering with another’s. VLANs don’t dedicate bandwidth to specific protocols, so even if you carry (as Nethra claims) FC, iSCSI, and InfiniBand traffic, you would still need an Enhanced Ethernet (EE) switch that implements the relevant Data Center Bridging standards (802.1Qbb for PFC, 802.1Qaz for bandwidth allocation). Without those enhancements to Ethernet transmission, Nethra’s solution is still saddled with time-share problems.
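The difference is easy to see in miniature. Here’s a toy model of per-priority pause – my own illustration, not any vendor’s implementation; the priority assignments are assumed for the example:

```python
# Toy model of Priority Flow Control (802.1Qbb): each 802.1p priority
# class has its own pause state, so congestion in one traffic class
# does not stall the others.

class PfcPort:
    def __init__(self):
        # Eight priority classes, each independently pausable
        self.paused = [False] * 8
        self.sent = {p: [] for p in range(8)}

    def pause(self, priority):      # PFC pauses one class only,
        self.paused[priority] = True  # unlike 802.3x link-wide PAUSE

    def resume(self, priority):
        self.paused[priority] = False

    def transmit(self, priority, frame):
        if self.paused[priority]:
            return False            # held back, not dropped
        self.sent[priority].append(frame)
        return True

port = PfcPort()
FCOE_PRIORITY, LAN_PRIORITY = 3, 0  # illustrative class mapping

port.pause(LAN_PRIORITY)            # congestion on ordinary LAN traffic
print(port.transmit(LAN_PRIORITY, "tcp-frame"))  # False: this class waits
print(port.transmit(FCOE_PRIORITY, "fc-frame"))  # True: storage unaffected
```

A plain VLAN gives you none of this: it separates broadcast domains, but every protocol still contends for the same pause state and the same bandwidth.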
Third, FCoE is a complete encapsulation protocol. Let me explain what this means.
A Fibre Channel frame carries a data field of up to 2112 bytes. The standard transmission unit for Ethernet is 1500 bytes. Last time I checked, it takes some pretty fancy shoe-horning to slam 2112 into 1500.
To that end, FCoE uses “baby jumbo frames” (at 2.5k) to place the entire FC frame inside of an Ethernet frame to be sent – as is – across the wire. Many (most? all? I haven’t done enough market research to find out what percentage) 10GbE switches support jumbo frames, but you need some sort of mechanism to place the FC frame into the Ethernet frame. That mechanism is called the Fibre Channel Forwarder (FCF). Guess what: off-the-shelf 10GbE switches don’t have an FCF. If they did, they’d be FCoE switches.
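The arithmetic is worth a quick back-of-the-envelope check. The field sizes below reflect my reading of the FC and FCoE framing formats, so treat the exact byte counts as approximate:

```python
# Back-of-the-envelope: why a max-size FC frame needs a "baby jumbo"
# Ethernet frame. All sizes in bytes; field sizes approximate.

FC_HEADER = 24         # Fibre Channel frame header
FC_MAX_PAYLOAD = 2112  # maximum FC data field
FC_CRC = 4             # FC frame CRC

ETH_HEADER = 14        # dest MAC + src MAC + EtherType
VLAN_TAG = 4           # 802.1Q tag (also carries the PFC priority bits)
FCOE_HEADER = 14       # FCoE version/reserved fields + SOF
FCOE_TRAILER = 4       # EOF + reserved
ETH_FCS = 4            # Ethernet frame check sequence

fc_frame = FC_HEADER + FC_MAX_PAYLOAD + FC_CRC           # 2140
fcoe_frame = (ETH_HEADER + VLAN_TAG + FCOE_HEADER
              + fc_frame + FCOE_TRAILER + ETH_FCS)       # 2180

STANDARD_MTU = 1500
BABY_JUMBO = 2500
print(fcoe_frame <= STANDARD_MTU)  # False: won't fit a standard frame
print(fcoe_frame <= BABY_JUMBO)    # True: fits a 2.5k baby jumbo
```

The point isn’t the exact numbers; it’s that a full FC frame is comfortably over the standard MTU and comfortably under the baby jumbo size, which is why FCoE can carry it intact instead of fragmenting it.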
So, Nethra must somehow break apart the Fibre Channel frames to place them into Ethernet frames. How is this done? FCIP? Nethra isn’t telling (well, not without calling them up and talking to a system engineer, apparently). Chopping up FC frames and putting them back together again – in order – takes processing power. The faster the link, the more processing power you need. Nethra doesn’t publish its numbers, so it would be interesting to see what kind of comparisons can be made between FC traffic via FastPipe vs. FCoE.
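To give a feel for the cost, here’s a rough sketch of what fragmentation and reassembly involves. This is purely illustrative – the per-fragment header size and the scheme itself are my inventions, since Nethra hasn’t published how FastPipe actually does it:

```python
# Illustrative only: one way a max-size (2140-byte) FC frame could be
# chopped into standard-MTU Ethernet payloads and reassembled in order.

FRAG_HEADER = 8                 # assumed: stream id + sequence number
MTU_PAYLOAD = 1500 - FRAG_HEADER

def fragment(fc_frame: bytes):
    """Split an FC frame into (sequence_number, chunk) fragments."""
    return [(seq, fc_frame[i:i + MTU_PAYLOAD])
            for seq, i in enumerate(range(0, len(fc_frame), MTU_PAYLOAD))]

def reassemble(fragments):
    """Fragments may arrive out of order; sort by sequence number."""
    return b"".join(chunk for _, chunk in sorted(fragments))

fc_frame = bytes(2140)          # max-size FC frame incl. header and CRC
frags = fragment(fc_frame)
print(len(frags))               # 2: every full FC frame becomes two
                                # Ethernet frames on the wire
assert reassemble(list(reversed(frags))) == fc_frame
```

Even in this toy version, every full-size FC frame doubles into two Ethernet frames, each needing sequencing, buffering, and reordering at the far end – and that per-frame work scales with link speed.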
Finally (though there are a few more items I’ve thought of, but brevity precludes a full analysis), and perhaps most importantly, Ethernet is a lossy environment, whereas Fibre Channel is lossless. SCSI doesn’t handle dropped frames gracefully, so FC ensures lossless, in-order delivery (FCoE does the same through the PAUSE capability built into the Ethernet 802.3x standard, applied per-priority by PFC). How does Nethra guarantee in-order delivery? Their system is proprietary, so how much of that mechanism are they willing to share?
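The lossy-vs-lossless distinction can be made concrete with a toy comparison – again my own illustration, not any real switch implementation. Without back-pressure, a full receive buffer means dropped frames; with PAUSE-style back-pressure, the sender simply waits:

```python
# Toy contrast: a lossy link drops frames when the receive buffer is
# full; a lossless (PAUSE-style) link makes the sender wait instead.

BUFFER_SIZE = 4

def receive(frames, lossless):
    buffer, delivered, dropped = [], [], 0
    for f in frames:
        if len(buffer) == BUFFER_SIZE:
            if lossless:
                # PAUSE: sender holds off until the receiver drains
                delivered.append(buffer.pop(0))
            else:
                dropped += 1        # plain Ethernet: frame silently lost
                continue
        buffer.append(f)
    delivered.extend(buffer)        # drain whatever remains
    return delivered, dropped

frames = list(range(10))
print(receive(frames, lossless=False))  # some frames never arrive
print(receive(frames, lossless=True))   # all ten arrive, in order
```

SCSI’s intolerance for loss is exactly why the lossless behavior matters: a dropped frame means an expensive upper-layer timeout and retry, which is what FC (and FCoE over Enhanced Ethernet) is built to avoid.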
Like I said, we could go further into detail about some of the specifics (e.g., comparing InfiniBand latency numbers to 10GbE, applicability to virtualized environments), but that falls outside the scope of this article. For now it suffices to say that Nethra’s technology appears suitable for the specific video market segments it targets, but not for general data center implementation, which is where FCoE is designed to fit.