Brocade and Evaluator Group’s Really, Really Bad FCoE Study

In Storage, Technology by J Michel Metz

It’s been almost four years to the month since I took on Gartner’s horrible anti-FCoE diatribe, and since then I’ve joined Cisco, written a few more blogs, and even gone toe-to-toe with the technology’s largest critic – Brocade. For the most part, the tête-à-tête has been cordial, if a little snarky (just the way I like it!), and generally falls within the margin for error afforded to marketing excesses.

Something happened this week, however, that raised the ugly spectre of blatant dishonesty and exposed a disturbing trend of vendor-media collusion – one that I feel harms people, especially those who would otherwise be the most helped.

For that reason, I’m posting these thoughts here. These are my personal thoughts, and do not reflect those of my employer, my manager, my friends, family, or the direction of the technology I work with (disclosure for those that don’t know me: I’m a Product Manager for Storage working for Cisco – at least at the time of writing this blog!).

So, something snarky this way cometh.

In Theory…

Those who work in the IT industry (and other industries too, I suppose) understand that there is a natural, if potentially unhealthy, relationship between vendors, analysts/testing labs, and media/reporters. Vendors want to be able to provide some sort of tangible evidence that they have the best solution over their competition. There is an inherent distrust that a vendor will provide a “warts-and-all” analysis of its own products; in the vendor’s own telling, everything is head-and-shoulders above anything the competition could ever dream of producing.


To that end, they often hire independent labs to verify specific configurations that they are pretty sure will win. This is, of course, the conceit – that no testing facility is going to test for free, and the person who funds the tests is going to have a specific desired outcome that will cast them in a favorable light. Often these labs are affiliated with analyst firms, who provide running commentary about what these results are supposed to mean.

Once a study is published, it is presented for all the world to see. At that point the trade press is supposed to read and evaluate these studies for merit and summarize them for a more general audience. They are also supposed to be more educated about the topic than their readers (after all, why bother writing about it otherwise?) and provide a sense of context that may offset the inherent bias which naturally accompanies the study.

There is a conceit that technical tests are not 100% unbiased, but we rely on trade media to call out vendors and analysts when they go too far

Honest testing involves making the hard decision whether or not to publish a study if it doesn’t go your way. Honest reporting examines a study based upon its internal and external reliability, and doesn’t just regurgitate the vendor’s abstract or key findings.

At least, that’s the way it’s supposed to work, because that’s what is supposed to be in the best interest of all invested parties. Vendors are supposed to get customers they want to keep, analyst/testing labs are keen to promote themselves as credible and agnostic in their testing methodologies, and reporters want to be seen by their readers as more relevant than some other guy from some other magazine/website/paper, etc. These are the checks and balances that are supposed to kick in so that customers who wish to be informed aren’t harmed by disinformation. Even so, we acknowledge that these reports are generally biased in some fashion or another, and we all try to grant some suspension of disbelief along the way.

This week that system broke down horribly.

The “Study”

The answer is… four?

There is a great scene in the classic Rodney Dangerfield movie “Back to School,” where Rodney has his staff doing his homework on his behalf. One admin hands him a report for one of his classes; Rodney hefts it in his hands without reading it and says, “Hmmm, feels like a B. Make it an A.”

On February 4, 2014, an analyst firm called Evaluator Group published a Brocade-funded study with the SEO-friendly title, “Evaluator Group Benchmarking Shows Superior Performance for Fibre Channel vs. FCoE for Solid State Environments.” If ever there was a study that felt like it had been created à la Rodney Dangerfield’s dictate, this would be it.

As of this writing, the article is still there. If you ever want a prime example of how not to do a research project, this is Exhibit A.

Quite frankly, every page was so full of topological, architectural, technological and methodological errors that it’s difficult to know where to begin. Fortunately, many of the problems with the study have been taken apart with brutal efficiency by Tony Bourke (“I tell y’all, it’s sabotage!”) and Dave Alexander (‘Brocade’s Flawed FCoE “Study”‘). In case you’re interested in my technical take, look at the bottom of this post for the technical bonus round.

For now, however, just how bad is this study?

Let’s try to put this into an analogy. It’s as if you wanted to find out which gasoline (petrol, for my non-North American friends) was better, and compared a Ferrari to a Lamborghini to do the test. And poured sand into one of the tanks. And drove one of them on a highway and the other through a residential area. With a driver whose only experience behind the wheel was a 1985 Yugo.

Yes, it’s that bad.

“Funded by Brocade”

To the Evaluator Group’s credit, they do admit that Brocade commissioned the study and funded the ‘research.’ However, they swear up, down, backwards and forwards that this was an “independent” study with no outside influence whatsoever:

“That doesn’t mean the results were skewed – Evaluator Group senior partner Russ Fellows said his group conducted the tests at its labs without vendor interference – but Brocade may not have released the results if FC did not come out a clear winner.”

Riiiiiiiiight. Pull this leg and it plays Jingle Bells.

Here are some of the low-lights of the study that make it very, very difficult to believe Brocade had nothing to do with writing its content.

Gen 5: The study uses the phrase “Gen 5, 16Gbps FC switch.” Gen 5 is Brocade’s marketing term for their latest platforms; it is not an industry standard term (despite its recent adoption by QLogic and Emulex).

[Update: It appears that there isn’t consensus among those companies, either. Emulex says that “Gen 5” also includes 8G FC. (PDF)]

One would think that an “independent” study would be looking to at least pretend there was no bias from the marketing side.

MDS: The study is peppered with “Evaluator Group comments,” which take every opportunity to pronounce that Cisco MDS switches would “require additional power and cooling,” and have “higher latency… worse than the results found here,” even though no Cisco MDS switches were tested.

How’s this for a statement in a so-called “unbiased” report: “Actual latencies would likely be higher for an all-Cisco deployment using FCoE… connected to Cisco MDS switch, than the tested environment.”

Uh, this was a protocol test, right? Not a Cisco test? Right? Right?

Brocade Feature Sets: In what was perhaps a subtle, yet still bizarre, inclusion, EG decided that it would pull in some of the promotional material for Brocade’s switches that has nothing to do with the tests. For example, check out this description of the “common FC SAN equipment” used to describe the testing environment:

1x (Non HA) Brocade 6510 FC SAN switch for storage connectivity:

  • Gen 5 FC, supporting 16Gb FC with additional ports on demand
  • Management options using web-based tools, or advanced management with Brocade Fabric Vision – Setup and deployment wizards with Dynamic Fabric Provisioning tool
  • Includes Brocade ClearLink diagnostic ports (D_Ports) to identify optic and cable issues
  • High availability uses redundant, hot-pluggable components and non-disruptive upgrades

Man oh man, where to begin!? Can someone please tell me how this laundry list has anything to do with the “Common FC SAN equipment” testing environment? Did EG somehow feel that they would need to have “additional ports on demand”? Did the “Brocade ClearLink diagnostic ports” have anything to do with the test at all?

No, of course not.

One particularly offensive element deserves highlighting, however: the high availability bullet. Remember, this is a description of the testing environment; it is not supposed to be a marketing brochure. The environment is not highly available, but then EG turns around and implies that it is. At worst, it appears that EG is saying that Brocade’s switches are highly available even if you deliberately make them not.

As if that weren’t bad enough, EG doubles down later in the study (p. 24) on the high availability claim:

“The FC environment was able to automatically create a trunk group between FC switches, while the FCoE environment did not. In this respect, the tested Fibre Channel environment had significant ease of use and configuration advantages for HA and performance compared to the FCoE environment.” (emphasis added)

I’ll stop there. It’s very important to note that EG published a schematic of the topology they used, which reinforces the “findings” that one 16G link was all they needed to trounce FCoE. Because I do not have permission to simply copy and paste the graphic (which would be easier, and I wouldn’t be accused of trying to modify their scenario), I’ve put my own visual of the image they used in the technical bonus section below.

In this way EG is saying that the Brocade FC environment is highly available, automatically. They are also saying that they have more links than they admitted to. Here’s what this means:

  1. Either they used 2 links for 16GFC that were automatically aggregated between switches, making the links highly available (as on p. 24), and thus were less than truthful about their configuration, or
  2. They only used one link as they say, and they are less than truthful about their high availability configuration and conclusions.

I leave you to be the judge.

The truly amusing thing about this is that Brocade reports the results of this study as if they’re surprised at the findings.

Reporters, where art thou?

It is clear that EG did not understand what they were doing. They even admitted as much:

“The setup of the FCoE environment required approximately 8 hours of time. Several issues were encountered while configuring the UCS equipment. Additional assistance was requested from a VAR with certified UCS engineers.” (p. 10)

Mind you, the FCoE configuration they were using was stock out of the box. Good grief, the first time I did a hands-on configuration of a UCS it took me under an hour to configure the UCS, the Nexus 5k upstream, and establish multi-hop FCoE to an existing FC SAN.

Dude. If it takes me less than an hour to do all that, you are doing it wrong.

However, that didn’t stop them from reporting that FCoE – the protocol, that is – was singularly responsible for the difficulty in setup and configuration.

You do not have to be overly technically-minded to see that there are problems here. Thing is, there are media articles written by people who are technically-minded that offer absolutely no challenge whatsoever to the study’s veracity.

Chris Mellor immediately published an article on The Register singing the study’s praises, showing absolutely zero indication that he had even glanced at it. [Update: Chris wrote a followup article on Tuesday, Feb. 11, 2014]

Dave Raffo simultaneously published another article on TechTarget noting that the study goes against the conventional wisdom about cable reduction, but apparently that didn’t raise any red flags there, either.

James Sullivan from TomsITPro regurgitated the findings in a way that suggests he merely read the executive summary (which is more than most people managed, I guess).

SFGate simply reposted the Evaluator Group Press Release as-is.

And they’re not the only ones. TechTarget, SearchStorage, etc. all started sending out tweets about this study with not one “technical” reporter ever seeming to ask the question “um, what?” Even the most ardent detractors of the FCoE protocol concede that it really shines in the access layer where you don’t need to have both Ethernet and Fibre Channel cables. And yet, EG’s backwards (and counter-intuitive) conclusion that FCoE required more cabling didn’t result in even one article pointing out this glaring problem. Not one.

The only people who seemed to get it were Tom Hollingsworth from GestaltIT and Greg Schultz from StorageIO, neither of which surprises me.

Obviously, the system is broken.

Don’t get me wrong. I like some of these publications. Hell, I’ve even been tweaked by Chris Mellor on a number of occasions. According to him, I’ve “sniffed” when I didn’t get my way and “harrumphed” when I pointed out that there is a huge difference between a marketing phrase and a technical standard.

Hey, it’s all in good fun, and meh, what difference does it make? It makes for great sensational press and brings people’s attention to a seemingly obscure conflict between two vendors for a day or so. I certainly don’t take it personally. In fact, I didn’t care that Brocade wanted to re-brand 16GFC as Gen 5; what I cared about was that they (and Chris Mellor) called it a standard. To his credit, Chris got Brocade to admit that it was “only Marketing,” and I was happy.

In fact, Chris contacted me via Twitter to see if I would be interested in doing an interview on the merits of the study. I asked, since I’ve already “sniffed” and “harrumphed,” what I would be doing this time. 🙂

He didn’t respond to my gentle teasing, though. He said that he wanted me to answer questions as to why I thought the design was flawed, so I sent him a couple of insights, perhaps to see where his head was at, but as of this writing he has not yet responded to me directly.

Somehow I don’t think that he’s going to write an article about me “guffawing” at Brocade’s desperation.

Credibility Obliteration in 3… 2… 1…

It’s well-known that Chris likes Brocade. Believe it or not, that’s fine. There are a lot of good people at Brocade, and they have certainly done some great technological work from time to time. He has always preferred working with them, even though I admit I was a bit surprised when, in a previous article referring to me, he called up Brocade and interviewed them about what I had said.

But there is a difference between being a fan and being a sycophant.

[Update: while I was writing this post I happened to notice that Chris tweeted an update on his position:]

The issue at hand goes far beyond a single study that is methodologically and ethically unsound. The real concern I have is that if something this bad, this egregious, isn’t called out by the so-called “watchdogs of technology,” what hope do the non-technical and the uninitiated have to find out the truth?

I think it’s a bit premature to suggest collusion or conspiracy here. Obviously we know that Brocade pays for play (hence the EG fiasco). Anyone who has been around for the past dozen years or so understands the way that it works with them.

In fact, there is precedent. In 1999 Brocade commissioned a study from KeyLabs that was similarly flawed, and the resulting flap caused the company’s credibility to spiral downward so badly that it ultimately could not recover. The key difference there was that the funding was buried deep inside the report, but KeyLabs was never able to shake the reputation of being unreliable as a testing agency.

RIP KeyLabs

At the time, however, there was a willingness on the part of the technical press to take KeyLabs and Brocade to task for this. Businessweek ran a full article on the controversy and issues with the study, rightfully putting Brocade and KeyLabs in the position of defending their methodology. Unfortunately their testing process underwent ongoing scrutiny (see, e.g., how it colored the interpretation of a later study unrelated to Brocade) and KeyLabs’ reputation was irreparably harmed. If they had decent research in their portfolio, they certainly aren’t remembered for it now.

Where were these checks and balances this time? Someone, somewhere should have read the thing before writing an article on it. Those already pro-Brocade or pro-Cisco aren’t really the audience for such a study; the true audience is someone looking for “neutral” information – someone who may genuinely be curious as to the outcome.

It’s that audience that has been the most severely underserved – and not just by the research, but by those who claim to have their best interests at heart. Too romantic? Possibly. But it seems to me that if you set out to sell advertising on your site or in your magazine, and you lose readers because they can’t trust what you write, you’re doing yourself a disservice as well.

Speaking of credibility, I should point out that I was offered the chance to talk with Russ Fellows, the author of the study. He seemed cordial and willing to discuss, and to be fair, he has continued to make himself available for further conversation, should I so choose.

However, it was I who cancelled the meeting, not him. The more I thought about what I hoped to get out of the conversation, the less I understood what my goals were. The report is terrible, so would I ask him to delete it and disavow it? Um, no. Would he volunteer such a thing and pay back the money Brocade gave him? Highly unlikely. Would he print a retraction? Doubtful.

Then what? What could I reasonably ask from him? As I didn’t have an answer I saw very little reason to risk wasting the man’s time (or mine). But it definitely should be pointed out that he has made himself available.

Sad State of Affairs

There are lots and lots of technologies that I don’t know anything (or at least, enough) about. OpenDaylight, VMware’s NSX, OpenCompute, etc. I don’t have the time to get to the level of expertise on my own that I currently have about storage networking (which is an ongoing lesson for me, every day). I rely very heavily on third-party sources, analysis, and summaries to help me get off the ground and understand what these technologies are, and why they’re important.

It’s getting harder and harder to trust that the information I get is even close to being reliable. If something like this EG study can get through without even a single raised eyebrow from the technical media, what hope do I have to understand something even more unfamiliar?

There are people that I trust, and reports that have merit. There are those who care about quality, but they seem to be few and far between. I still have hope, though, that someone will remember why they got into the business in the first place and return to working in their constituents’ best interests.

DING! DING! TECHNICAL BONUS ROUND!

Man, it has been so hard not to rip this document based upon methodology alone, and I’ve had to hold back on using specific examples because I really wanted to focus on the bigger picture. But as Popeye once said, “I’ve taken all I can stands, and I can’t stands no more!”

In order to understand why things are so bad from a technical standpoint, it’s useful to know what they said they did versus what they actually did versus what they could have done.

Remember, this was supposed to be a test of protocols – Fibre Channel versus FCoE – not a topology test or architecture test. There is a damn good reason for that. Two, actually. First, an architecture test involves, effectively, HP versus Cisco blade enclosures, which is not what Brocade gets much value out of promoting (other than the fact that they have a switch for one and not for the other). Second, if you’re going to do an architecture test you’ll want to make sure that the architecture configuration is sound, and this certainly wasn’t.

So, let’s focus on just what was going on with the protocols.

Here’s the comparison they say they made:

What they claimed they tested

There are a couple of issues right off the bat here, and it doesn’t take an expert to see what they are. Remembering that this is supposed to be a protocol comparison, just how much FCoE is actually in this test? Go ahead and look. I’ll wait.

They have 2 links for FCoE between the UCS and its switch (if you’re not familiar with the UCS platform, the switch is part of the UCS system and is called a Fabric Interconnect). EG implies that they are aggregating the bandwidth between the FCoE links (for a combined 20G of throughput). In reality, however, this is not what they were testing. From an actual test perspective, it looked like this:

What they actually tested

The report fails to explain that when connecting Cisco devices and Brocade devices together, it is not possible to aggregate the links. In other words, you can’t simply throw two 8G FC links together and make one big 16G link. However, EG implies that they did precisely this:

“Note: for this test, a single 16Gbps FC connection from the BladeSystem to the Brocade FC switch was used, compared to 2×8 Gbps FC connections from the 6248 interconnect to the FC switch. This was done to provide equivalent nominal performance and was not representative of a production configuration.” (p. 11, italics in the original)

Where did the link go??

This is outright deceit. It is not possible to configure “2 x 8 Gbps FC connections” in this manner using the equipment provided, and they certainly did not “provide equivalent nominal performance.” In fact, the consequence of this decision so greatly affected the performance of the network that all the other results “found” by the study can easily be explained away by this bottleneck. This is the “sand in the tank” I mentioned above.

Moreover, when traffic flows up the FCoE link, it can either be statically pinned or configured into a port channel (i.e., aggregated).

If they’re statically pinned, and the blades are populated vertically, one FCoE link won’t get used (hat tip to Tony Bourke for confirming this for me). Because EG didn’t properly identify how the UCS was configured, there’s no way of knowing whether they were doing this correctly (since they admitted they didn’t really know what they were doing with the UCS, it’s unlikely that they did).
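To illustrate, here’s a toy sketch of the pinning problem (entirely my own simplification – real UCS pinning depends on chassis slot and adapter placement, so the blade-to-uplink assignment below is hypothetical):

```python
# Toy model of static pinning: each blade is fixed to one uplink, so
# the traffic balance depends entirely on which blades are active.
# (Hypothetical assignment; real UCS pinning follows slot/adapter maps.)
pinning = {f"blade-{n}": ("uplink-1" if n % 2 else "uplink-2")
           for n in range(1, 9)}

active = ["blade-1", "blade-3", "blade-5", "blade-7"]  # "vertical" population

load = {"uplink-1": 0, "uplink-2": 0}
for blade in active:
    load[pinning[blade]] += 10  # each active blade offers ~10G of traffic

print(load)  # {'uplink-1': 40, 'uplink-2': 0} -- one FCoE link never used
```

With a port channel the traffic would be hashed across both links; with pinning, one of the two “aggregated” links can sit completely idle. Either way, everything still funnels into the single FC hop downstream.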

That is, we have one 10G FCoE link being squeezed into a single 8GFC link before the road widens enough to have 16GFC access to the storage. But the problem is even worse.

As I’ve written before, the actual data rates are not the same as the numbers on the cable. Whenever you put information on a wire it needs to be encoded, and the encoding schemes differ: 8GFC uses 8b/10b encoding, while 10G Ethernet and 16GFC use the more efficient 64b/66b. From a throughput perspective, that means you have a massive bottleneck smack dab in the middle:

Putting the squeeze on

This is, of course, assuming that EG was unable to aggregate the link. If they actually did aggregate the link, then the problem is even worse! Try looking at it like traffic flows. It doesn’t matter if you’re driving on a highway that lets you go 100 mph – once you hit a construction zone that drops you to 35 you aren’t going to be able to go as fast as you thought you could!

Even tighter squeeze

The phrase “sucking the ocean through a straw” comes to mind. Because of this discrepancy, the effective data rate was throttled with up to 3:1 oversubscription in the middle of the network, which is nowhere close to good network design.
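To put numbers on the squeeze, here’s a quick back-of-the-envelope sketch (my arithmetic, not EG’s; the baud rates and encodings are the published figures for each link type):

```python
# Effective data rates: the number on the cable isn't the usable
# throughput, because line encoding consumes a share of every bit sent.
links = {
    "8G FC":    8.5     * (8 / 10),   # 8.5 GBaud, 8b/10b      -> ~6.8 Gbps
    "10G FCoE": 10.3125 * (64 / 66),  # 10.3125 GBaud, 64b/66b -> ~10.0 Gbps
    "16G FC":   14.025  * (64 / 66),  # 14.025 GBaud, 64b/66b  -> ~13.6 Gbps
}
for name, gbps in links.items():
    print(f"{name:>8}: ~{gbps:.1f} Gbps effective")

# Two aggregated 10G FCoE links funneled into a single 8G FC hop:
oversub = (2 * links["10G FCoE"]) / links["8G FC"]
print(f"Oversubscription at the 8GFC bottleneck: ~{oversub:.1f}:1")  # ~2.9:1
```

Roughly 3:1, right in the middle of the path the storage traffic had to take.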

All of this raises the question: if this was supposed to be an FCoE vs. FC test using SSD, why didn’t they actually do it?

After all, the configuration could very easily have been this:

Ah, what could have been…

Sure, they would have had to use an HP rack server (instead of a blade server), but it’s a much more elegant design and far more consistent with what they’re claiming to show. This isn’t to say that there wouldn’t be differences and possible issues between the Brocade VDX and the 8510, but at least it’d be a much fairer protocol comparison!

It’s easy to see why Brocade would not have wanted to do this. Despite the fact that they want (very badly, it appears) to prove 16GFC is superior to 10G FCoE (there’s a shock, eh?), they do not necessarily want to throw their own products under the bus.

Instead, what they wanted to do was throw Cisco under the bus, and they did so at every possible opportunity. They decided to go for broke, unwisely trying to make a Cisco versus HP comparison, and went to great lengths to configure a system that would deliberately skew the results – not just to show superiority, but to effectively bury FCoE technology so deep in the ground that it would never survive.

Evaluator Group doubled down on this configuration for every conclusion. They arbitrarily added in more equipment, and then criticized “the FCoE environment” for increased power and cooling (p. 12), complexity (p. 11), response times (p. 13), CPU utilization (p. 16), response variance (p. 15) and, once again, high availability (p. 17).

If this is EG’s non-partisan, non-influenced opinion, it is evident they have one hell of an issue with Cisco.

Perhaps the most laughable criticism of all is when EG explained that you needed more cabling with FCoE than you did with FC. Not only was this configuration not typical (or balanced), but they conveniently forgot that the FCoE links also transport all the Ethernet LAN traffic, while the Brocade FC environment couldn’t transport any.
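For a sense of what an honest cabling comparison looks like, here’s a back-of-the-envelope count (the per-server link counts are my own assumptions for a typical redundant access layer, not numbers from the study):

```python
# Access-layer cable count for N servers: separate LAN + FC SAN versus
# converged FCoE, where one pair of wires carries both traffic types.
servers = 8
lan_per_server, fc_per_server, fcoe_per_server = 2, 2, 2  # assumed redundant pairs

separate  = servers * (lan_per_server + fc_per_server)  # Ethernet + Fibre Channel
converged = servers * fcoe_per_server                   # FCoE carries both
print(f"Separate LAN + FC SAN: {separate} cables")      # 32
print(f"Converged FCoE:        {converged} cables")     # 16
```

Cable reduction at the server edge is the whole point of convergence – and it’s exactly the comparison EG’s lopsided topology managed to invert.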

Keep in mind that these findings were presented as a protocol comparison – one in which the protocol had almost zero actual applicability in the test itself. Perhaps one of the most obvious examples of just how lopsided the results are occurs when EG talks about CPU utilization and latency.

CPU Here, Latency There…

EG decided, for whatever reason, to do a test of “real world” applications using a closed, proprietary testing suite, the details of which were not exposed in the report. Moreover, they tested with 32KB block sizes, which is a rather odd choice (most “real world” application tests I’ve seen are run at 2, 4, or 8KB blocks).
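For context on why block size matters, here’s some quick arithmetic (mine, not the report’s), using the ~6.8 Gbps effective bottleneck discussed above – the larger the block, the fewer IOPS it takes to become bandwidth-bound, so a throttled link dominates the results that much sooner:

```python
# IOPS needed to saturate a 6.8 Gbps effective link at various block
# sizes (rough figures; protocol framing overhead is ignored).
link_bytes_per_sec = 6.8e9 / 8  # 6.8 Gbps -> bytes per second

for block_kb in (2, 4, 8, 32):
    iops = link_bytes_per_sec / (block_kb * 1024)
    print(f"{block_kb:>2} KB blocks: ~{iops:,.0f} IOPS fill the link")
# 32 KB blocks hit the wall at ~26,000 IOPS -- trivial for an SSD array.
```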

None of this matters.

Let’s assume, for the sake of argument, that their tests are 100% reliable and valid, and that 32KB blocks are the best estimator of workload capability. The truth of the matter is that EG didn’t even know what the UCS was doing.

On page 16, EG wrote:

“The primary factor for higher CPU utilization within the FCoE test was due to using a software initiator, rather than a dedicated HBA. In general, software initiators require more server CPU cycles than do hardware initiators, often negating any cost advantages.”

There’s only one problem. You have to specifically configure the UCS to use a software initiator; it comes with a hardware one. While you can configure it that way, it is not considered best practice and certainly is not recommended. In an ongoing exchange on Twitter, Tony Bourke demanded EG defend its use of a software initiator for FCoE on the UCS, to which the response was to point to a Cisco document identifying the hardware as capable of creating “virtual” HBAs, which EG took to mean software initiation.

Ultimately, if EG doesn’t know what they had configured, how can anyone else? [Update: Evaluator Group published an Addendum and answered the question that they did, in fact, use a hardware initiator. They admit that “referring to them as software initiators caused some confusion.”]

Hardware or software initiation aside, the results make sense once you start to understand that when you saturate a 6.8Gbps link (the 8G FC link EG set up), you’re going to get back-pressure as the CPU (or CNA) has to queue I/O. That queuing is also going to affect your latency, as the requests are not sent out immediately.
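As a toy illustration of that back-pressure (an M/M/1-style queuing sketch of my own, not anything EG modeled), response time climbs steeply as a link approaches saturation – which is exactly the operating region where EG reported its response-time “findings”:

```python
# Toy M/M/1 queue: mean response time T = S / (1 - rho), expressed as
# a multiple of the unloaded service time S. The exact model doesn't
# matter -- any queue's delay explodes as utilization rho approaches 1.
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    slowdown = 1.0 / (1.0 - rho)
    print(f"{rho:4.0%} utilization -> {slowdown:5.1f}x unloaded response time")
```

Push a 10G workload through a 6.8 Gbps hop and you are living at the right-hand end of that curve; the protocol riding the wire has nothing to do with it.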

Ah, latency, we hardly knew ya…

I/O latency is as much a function of hardware capabilities as of any inherent protocol limitations – more so, actually. In fact, the margin for error between any two CNAs (the hardware adapters that reside in the servers) is far greater than any variation due to protocol. Even though EG had no idea which initiator they were using on the UCS, there would still be possible variances between the two setups based upon the CNAs, especially if EG had configured one to be software-based.

Truth is, we will likely never know.

We also don’t know what other traffic was being sent. EG does not describe what kind of LAN traffic was being sent simultaneously, or whether they controlled for that type of traffic. On the UCS, all traffic, both storage and LAN, was being sent over the same links. Do we know for sure that they had suppressed Ethernet traffic and were only testing storage? No, no we don’t.

This, of course, did not stop EG from making wild-eyed conclusions about the protocol:

“As a result of these findings, Evaluator Group believes that Fibre Channel connectivity is the best option for enterprises using high performance, solid-state storage systems.”

What about the rest of their ‘findings’ (p. 18)? Let’s take a look:

“FC provided 50% faster response times as workloads surpassed 80% of SAN utilization”

By this point it should be obvious that EG, either deliberately or out of ignorance, misconfigured the system to specifically throttle the “FCoE setup.”

“FC provided higher performance with 50% fewer connections than FCoE.”

No, just because EG created links that went nowhere does not mean that FCoE requires twice as many connections.

“FC provided higher reliability, due to automatic trunking when using 2 or more connections.”

Aha! So EG didn’t use just one 16GFC link! One more reason not to trust the configuration as described.

“FC used between 20%-30% less CPU than FCoE, (likely due to protocol offload from server CPU)”

FCoE, as a protocol, has no direct connection with CPU usage unless EG misconfigured the UCS in the first place. All other CPU overage falls in line with what you would expect from a heavily oversubscribed network architecture (i.e., forcing all the data to go through a single 6.8 Gbps link).

“FC used 50% less power and cooling than FCoE.”

Oh, come on. EG sets up a completely unrealistic and arbitrary topology where one link is FCoE and suddenly it requires 2x the power and cooling? How does the protocol use more power? If EG had set up the topology I drew out before, pitting HP rack servers using FCoE against the FC configuration they actually used, perhaps they could have staked a better claim here. Maybe if they had added in how much power an Ethernet LAN would have added to the FC side, it would have made more sense. As it was, this is pathetic.

Truth is, FCoE was never tested with solid-state storage systems by Evaluator Group. In fact, the more the report goes on about how bad the protocol is, the more questions they raise as to how much they actually understood about what they were doing.

The Only Thing We Have to Fear…

As Dave Alexander pointed out, 16GFC does have very real performance benefits over 10G FCoE. This in itself is not a shock, and quite frankly would be of little interest to anyone.

The real news is that, despite Brocade’s protestations to the contrary, if FCoE were really as bad as they want EG to make it out to be, they wouldn’t have to try so hard to discredit it. After all, if the truth is on your side, why go to such lengths to make stuff up?

Brocade can’t seem to get its story straight. Their first recourse is to say that “nobody is using FCoE.” Okay, let’s assume that’s true. Then why bother? No one is using AoE (ATA over Ethernet), but I don’t see technically flawed hit pieces on AoE trying to prove that point.

So, someone must be using it. Someone like the 25,000-or-so customers Cisco has using the technology (HP, btw, is number 2 in FCoE market share; Brocade is fifth). “Oh,” Brocade says, “that’s all in the server access layer.” So… they’re not using it, but they are using it? But what difference does that actually make? Is something only considered “adopted” if it completely eradicates all other forms of technology in the Data Center? Get real.

“FC has the most common deployments,” they finally say, but I have to confess I don’t know what that means. Yes, okay, a technology that is nearly 20 years old should have more common deployments than one that is 5 years old. That is just common sense. If this argument held any merit whatsoever in technology there is no way Moore’s Law would exist.

All in all, this study reeks of desperation on Brocade’s part: they were willing to throw caution (and logic) to the wind, pay for – and publish – some of the worst testing I’ve ever seen in my life, and get other media outlets to promote it. The risk certainly outweighs the reward, unless they were banking on no one actually reading the report and simply taking the headline as gospel.

The sad part of that, though, is that may be exactly what they were hoping for. What’s even sadder is that they may ultimately be right.

[Update: Evaluator Group has since posted an “explanation.” I’ll leave it to you to determine if the issues have been addressed.]

Comments

  1. Pingback: Is FCoE faster than Fibre Channel? Who knows? Just run your own tests | Techbait Tech News

  2. All the vendors do that. Cisco, with its technically empty Single Network Vendor benefits reports, tries to push a “truth” that is not absolute. Implementing a multivendor network can bring some advantages, like better leverage negotiating prices, and you can have the best equipment in class for each job (no, Cisco is not the best in all networking areas; for example, Cisco’s WAN optimization is way inferior to Riverbed’s). “Independent studies” seem to “forget” some details depending on who is funding.
    This post is nicely written and does a good job of reminding us all that this is a common practice, and of showing all the holes in the “study,” but please remember that this is not a competitors-only practice.

    1. Author

      Hi Rafael,

      There is no question that vendors wish to promote their solutions/products, often at the expense of competitors. Cisco ran an embarrassing campaign a few years ago, in fact, where Juniper was the named target. Cisco was rightfully thumped for that, IMO.

      This is not about Cisco, nor is it actually about Brocade – as anyone from any company could conceivably sign off on a competitive example.

      The issue here is that usually these kinds of ‘studies’ are grounded in some semblance of reality. In this case, though, the research is so dishonest, so disingenuous, so wrong on every level, that there should have been serious recrimination on the part of the tech media. If Cisco ran something so flawed and dishonest, they/we should be rightfully flogged for it. The excuse that “everyone does something similar” is no excuse, and I hold my company to no lighter standard.

      Thanks again for the comment, though. 🙂

  3. Pingback: #FirmwareGate and #FCoEgate two months later | rsts11 – Robert Novak on system administration

  4. Pingback: Top 5 Reasons The Evaluator Group Screwed Up | The Data Center Overlords

  5. Pingback: Learn what Russ Fellows Doesn’t Know | The Data Center Overlords
