Cost Savings with Cables for FCoE
In an earlier post I mentioned that I had crunched some numbers on the kinds of Operating Expense (OpEx) savings that are possible with FCoE. Now, I understand the risk involved when trying to discuss costs and possible ROI, since every installation is custom. However, I think that it shouldn’t be too much of a leap to take some of these considerations and apply them to any installation as a benchmark. Plus, I have pretty pictures.
So, there are lots of other places “out there” that discuss some of the benefits of FCoE, but let’s take a quick look at what a TOR (Top Of Rack) solution using the technology available with FCoE can do.
Let’s start off with a typical rack scenario where you have a SAN A and B configuration for redundancy, and a conservative estimate of 2 NICs and 2 HBAs per server. I say conservative because it is not uncommon to see many, many more NICs per server:
Our example is relatively small, with only 10 servers, but each one is complete with redundant NICs and HBAs. Again, in many deployments there are several servers with several additional attached ports, but we’re trying to keep the numbers simple.
With FCoE, we are combining the functionality of the FC HBAs and Ethernet NICs onto one Converged Network Adapter (CNA), and as a result we wind up not only reducing our cables but simplifying our connectivity and management schemes as well:
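Just to make the consolidation concrete, here’s a quick back-of-the-envelope sketch of the cable counts in the rack example above. The figures (10 servers, 2 NICs + 2 HBAs each before convergence, 2 CNAs each after, one per SAN fabric) come straight from the scenario; everything else is simple arithmetic.

```python
# Cable counts for the example rack, before and after convergence.
servers = 10

# Before: 2 Ethernet NICs + 2 FC HBAs per server (SAN A/B redundancy)
cables_before = servers * (2 + 2)

# After: 2 converged (CNA) links per server, one to each fabric
cables_after = servers * 2

print(cables_before, cables_after)  # 40 vs. 20 -- half the cables per rack
```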
Now here is where we get to the interesting bits.
Power and Cooling Costs
Most people don’t think of cables as a money-sink. They’re a necessary evil, and as a result the power needed to run them is seen as negligible in the grand scheme of things.
However, let’s take a look at what happens when you start looking at some of the power differences between FCoE cabling and other 10GbE cabling. Namely, let’s look at what happens when you compare TwinAx® cabling with Cat 6A.
It turns out that Cat 6A is a power hog. Who knew, right?
Believe it or not, the cable sucks up 8 watts of power on each end. That’s 16 watts of power just to run one freakin’ cable! Your bathroom light probably runs at 25 watts, and here you have a cable that doesn’t even light up taking up nearly that amount (if it does, you’ve got bigger problems on your hands).
On the other hand, TwinAx® cabling takes 0.1 watts to run. Total. Yes, that was a decimal point in front of that “1”. You’re looking at 160x less power to run a single cable.
So what does this mean in real terms? Let’s suppose that your power company is really nice and friendly and gives you a huge discount on your power. Let’s say $0.10/kWh. Now, last time I checked that was a really low price, but it’s been a while. Your own mileage and warm-and-fuzzy relationship with your power company may differ.
But when you add up the costs of running a single Cat 6A vs. TwinAx® cable over the course of a year, the difference is pretty striking:
Even if you get a pretty big discount (less than $0.10/kWh), the ratio between the two approaches remains the same.
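If you want to check the math yourself, here’s a small sketch of the per-cable annual cost, using the numbers from above: 16 watts for a Cat 6A link (8 watts at each end), 0.1 watts for TwinAx®, and the assumed bargain rate of $0.10/kWh.

```python
# Annual power cost to run a single cable, 24/7, for one year.
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def annual_cost(watts, dollars_per_kwh=0.10):
    """Convert a steady wattage draw into dollars per year."""
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * dollars_per_kwh

cat6a = annual_cost(16)    # Cat 6A: 16 W total (8 W per end)
twinax = annual_cost(0.1)  # TwinAx: 0.1 W total

print(f"Cat 6A: ${cat6a:.2f}/yr, TwinAx: ${twinax:.2f}/yr")
```

That works out to roughly $14 per year for a single Cat 6A run versus about nine cents for TwinAx®, which is the same 160x ratio no matter what rate your power company charges you.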
But look what happens when you break it down over a full rack. Or several racks. Over thousands of servers:
When you consider that for every watt of power you need to have a watt of cooling, the savings double. Congratulations, you’ve just reached the bonus round where the savings really add up. This is just for cables! I haven’t changed anything else in the schematics.
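To show how fast that bonus round compounds, here’s a sketch that scales the per-cable numbers up across racks and doubles them for cooling. The rack and cable counts here are illustrative assumptions (20 cables per rack, as in the converged example, across 100 racks), not figures from any particular deployment.

```python
# Scale the per-cable power cost across many racks, doubled for cooling.
HOURS_PER_YEAR = 24 * 365
RATE = 0.10  # $/kWh -- the assumed discount rate from above

def fleet_cost(watts_per_cable, cables, cooling_factor=2.0):
    """Yearly power + cooling cost for a given number of cables."""
    per_cable = watts_per_cable / 1000 * HOURS_PER_YEAR * RATE
    return per_cable * cables * cooling_factor

racks = 100                # assumed fleet size
cables = 20 * racks        # 20 cables per converged rack

print(f"Cat 6A: ${fleet_cost(16, cables):,.0f}/yr")   # ~$56,064/yr
print(f"TwinAx: ${fleet_cost(0.1, cables):,.0f}/yr")  # ~$350/yr
```

Over a hundred racks, the cabling choice alone is the difference between tens of thousands of dollars a year and pocket change.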
It’s important to note that the TwinAx® cable is limited in length, which is why it’s ideal for a TOR solution. But by eliminating the optical cable intra-rack, and without the added expense of individual SFP+ transceivers (SFP+ modules are built into the cable), it winds up being a more efficient use of assets.
But Wait, There’s More!
The key reason to focus on cabling is that when you wire for multiple protocols, you’ll have the opportunity to run whatever type of traffic you wish. FCoE, iSCSI, NAS, CIFS, NFS, and on and on. This “wire once” aspect provides you with an incredible flexibility and agility that means that your data center is the coolest kid in town (see what I did there? Eh? Eh?).
What this means is that you have actually simplified the data center while simultaneously expanding its capability, all while reducing operational costs in the process.
What’s even better is that it’s possible to “bolt on” the technology into existing systems, and evolve the existing infrastructure without any loss of functionality or utility of existing equipment.
What’s not to like?
You can subscribe to this blog to get notifications of future articles in the column on the right. You can also follow me on Twitter: @jmichelmetz