In an earlier post I mentioned that I had crunched some numbers on the kinds of Operating Expense (OpEx) savings that are possible with FCoE. Now, I understand the risk involved in trying to discuss costs and possible ROI, since every installation is custom. However, I think it shouldn’t be too much of a leap to take some of these considerations and apply them to any installation as a benchmark. Plus, I have pretty pictures.
So, there are lots of other places “out there” that discuss some of the benefits of FCoE, but let’s take a quick look at what a TOR (Top Of Rack) solution using the technology available with FCoE can do.
Let’s start off with a typical rack scenario where you have a SAN A and B configuration for redundancy, and a conservative estimate of 2 NICs and 2 HBAs per server. I say conservative because it is not uncommon to see many, many more NICs per server:
Our example is a relatively small one of only 10 servers, but they are complete with redundant NICs and HBAs. Again, in many deployments servers have a number of additional attached ports, but we’re trying to keep the numbers simple.
With FCoE, we combine the functionality of the FC HBAs and Ethernet NICs onto one Converged Network Adapter (CNA), and as a result we wind up not only reducing our cables but simplifying our connectivity and management schemes as well:
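To put rough numbers on that reduction, here’s a quick back-of-the-envelope sketch. The before-side port counts follow the example above (two dual-port NICs and two dual-port HBAs per server); the two CNA ports per server on the converged side are my assumption for an A/B redundant setup:

```python
# Back-of-the-envelope cable count, before and after convergence.
# Port counts per server: two dual-port NICs and two dual-port HBAs before,
# and (my assumption) one dual-port CNA -- one port to each fabric -- after.

servers = 10

ethernet_ports_per_server = 2 * 2   # two dual-port NICs = 4 Ethernet cables
fc_ports_per_server = 2 * 2         # two dual-port HBAs = 4 FC cables
cna_ports_per_server = 2            # one converged cable to each TOR switch (A/B)

before = servers * (ethernet_ports_per_server + fc_ports_per_server)
after = servers * cna_ports_per_server

print(f"Cables before convergence: {before}")  # 80
print(f"Cables after convergence:  {after}")   # 20
```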
Now here is where we get to the interesting bits.
Power and Cooling Costs
Most people don’t think of cables as a money-sink. They’re a necessary evil, and as a result the power needed to run them is seen as negligible in the grand scheme of things.
However, let’s take a look at some of the power differences between FCoE cabling and other 10GbE cabling. Namely, let’s look at what happens when you compare TwinAx® cabling with Cat 6A.
It turns out that Cat 6A is a power hog. Who knew, right?
Believe it or not, a Cat 6A run sucks up about 8 watts of power at each end (it’s the 10GBase-T electronics doing the drawing, not the copper itself). That’s 16 watts of power just to run one freakin’ cable! Your bathroom light probably runs at 25 watts, and here you have a cable that doesn’t even light up taking nearly that amount (if it does, you’ve got bigger problems on your hands).
On the other hand, TwinAx® cabling takes 0.1 watts to run. Total. Yes, that was a decimal point in front of that “1”. You’re looking at 160x less power to run a single cable.
So what does this mean in real terms? Let’s suppose that your power company is really nice and friendly and gives you a huge discount on your power. Let’s say $0.10/kWh. Now, last time I checked that was a really low price, but it’s been a while. Your own mileage and warm-and-fuzzy relationship with your power company may differ.
But when you add up the costs of running a single Cat 6A vs. TwinAx® cable over the course of a year, the difference is pretty striking:
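If you want to check the math behind that yourself, here’s a minimal sketch using the 16 W vs. 0.1 W figures and the $0.10/kWh rate from above:

```python
# Annual electricity cost of a single cable, using the figures above:
# 16 W total for Cat 6A (8 W at each end) vs. 0.1 W total for TwinAx.

HOURS_PER_YEAR = 24 * 365   # 8,760 hours
RATE_PER_KWH = 0.10         # the friendly $0.10/kWh rate assumed above

def annual_cost(watts):
    """Yearly cost of a constant load of `watts` at RATE_PER_KWH."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * RATE_PER_KWH

print(f"Cat 6A:  ${annual_cost(16):.2f} per cable per year")   # ~$14.02
print(f"TwinAx:  ${annual_cost(0.1):.2f} per cable per year")  # ~$0.09
```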
Even if you get a pretty big discount (less than $0.10/kWh), the ratio between the two approaches remains the same.
But look what happens when you break it down over a full rack. Or several racks. Over thousands of servers:
When you consider that for every watt of power you need to have a watt of cooling, the savings double. Congratulations, you’ve just reached the bonus round where the savings really add up. This is just for cables! I haven’t changed anything else in the schematics.
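Extending the same sketch to rack scale: the 40 cables per rack below is just an illustrative count (it matches the Ethernet cable count in the example above), and the 2x multiplier reflects the watt-of-cooling-per-watt-of-power rule of thumb:

```python
# Rough yearly per-rack cost of cabling power, doubled for cooling.
# The per-rack cable count is a knob; 40 is just an illustrative figure.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10
COOLING_MULTIPLIER = 2.0      # one watt of cooling for every watt of power

def annual_rack_cost(watts_per_cable, cables):
    kwh = watts_per_cable * cables * HOURS_PER_YEAR / 1000.0
    return kwh * RATE_PER_KWH * COOLING_MULTIPLIER

cables_per_rack = 40
cat6a = annual_rack_cost(16, cables_per_rack)
twinax = annual_rack_cost(0.1, cables_per_rack)

print(f"Cat 6A (power + cooling):  ${cat6a:,.2f} per rack per year")   # ~$1,121.28
print(f"TwinAx (power + cooling):  ${twinax:,.2f} per rack per year")  # ~$7.01
print(f"Savings:                   ${cat6a - twinax:,.2f} per rack per year")
```

Multiply that across several racks, or across thousands of servers, and the cable budget alone starts to look like real money.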
It’s important to note that the TwinAx® cable is limited in length, which is why it’s ideal for a TOR solution. But by eliminating optical cabling inside the rack, and by avoiding the added expense of individual SFP+ transceivers (the SFP+ modules are built into the cable), it winds up being a more efficient use of assets.
But Wait, There’s More!
The key reason to focus on cabling is that when you wire for multiple protocols, you’ll have the opportunity to run whatever type of traffic you wish: FCoE, iSCSI, NAS, CIFS, NFS, and on and on. This “wire once” aspect provides you with incredible flexibility and agility, which means your data center is the coolest kid in town (see what I did there? Eh? Eh?).
What this means is that you have actually simplified the data center while simultaneously expanding its capability, all while reducing operational costs.
What’s even better is that it’s possible to “bolt” the technology onto existing systems and evolve the infrastructure without any loss of functionality or utility in existing equipment.
What’s not to like?
You can subscribe to this blog to get notifications of future articles in the column on the right. You can also follow me on Twitter: @jmichelmetz
Comments
J,
Please forgive me if I’m wrong, but your math on the calculation of cables seems a bit off. You describe a 10-server situation with two NICs requiring 40 cables. I believe the total number should be 20 cables. The same error appears to me to have been repeated in the calculation of FC cables.
I agree with your post that there are tremendous savings opportunities, but want to be clear about the math. Would you kindly clarify?
Thank you,
John
John,
I believe J is describing a redundant NIC and redundant FC configuration, so each server has two LAN cables and two FC cables.
Jeremy
Jeremy is correct on the redundancy. Two dual-port adapters each for Ethernet and for Fibre Channel, so 4 ports x 10 servers = 40 cables of each type.
Thank you Jeremy and J,
I appreciate the clarification. I was trying to look at the picture for clarification, but that didn’t help and I couldn’t find anything in the text about the hypothetical NICs and HBAs being dual-ported.
Mahalo nui loa (Thank you),
John
No problem. Thanks for pointing it out – sometimes what makes sense in my head doesn’t always come across. 🙂
J – while I don’t disagree with the general concept, a few comments.
Same savings if you converge on 10GbE with iSCSI or NAS.
For customers that have a large infrastructure of CAT6 cabling, the rip-and-replace is not very attractive. The power and cost of 10GBase-T are dropping fast, especially now that Cisco has entered the market with the Catalyst line (and should have Nexus support in the future).
For customers building a new data center, I recommend that they determine whether ToR/EoR is really the best solution, or whether they should wait for core directors supporting FCoE and go all optical. Looking out over the next 5-10 years, 40 and 100Gb Ethernet will be part of the picture, and neither CAT6a nor SFP+ Twinax is part of the solution for 40/100GbE today.
Stu – absolutely correct, and thanks for giving me an opportunity to clarify.
The primary aim of the piece was to examine what happens when you have separate cabling for FC as well as Ethernet, and what the savings could look like if the same traffic load were carried in an FCoE scenario. I think that if you have an iSCSI or NAS scenario, any FC savings might just be negligible. 😉
EOR/TOR solutions are an architectural consideration that needs to be fleshed out for each DC; I completely agree. If that didn’t come through in the article then thank you for reiterating it – it’s very important.
Your comment about 40/100G is well-taken, of course. I think the value of such speeds going to the edge might not survive a cost/benefit test within that time frame, but then again I’m just guessing (off the top of my head); I haven’t crunched any numbers on it yet. In any case, you’re absolutely correct about what “future-proofing” actually means as we start getting into those speeds.
J Michel,
you figured the power cost savings based on the cabling and optics only.
I looked at 10 GbE adapters and found the following power usage:
dual-port SFP+ direct-attached TwinAx = 8.6 watts
dual-port Cat 6A = 19 watts
So the difference per port would be about 5 watts, and both ends of a cable would be about 10 watts. But maybe switch-side ports will have more optimized power usage.
It is true that different optics and cable types would have different costs/savings. Since I wrote this article, Cisco has put out an end-to-end TCO calculator that can take these costs of cabling and transceivers into account, and even provide a more holistic view of what it can mean to be on a consolidated system. https://express.salire.com/Modules/Analyses/Edit/config.aspx