In my first post, I was talking about how the SNIA Storage Developer’s Conference really met and exceeded expectations in a lot of ways. There were many shining moments, but unfortunately there were quite a few disappointing ones as well.
As this is a developer’s conference, it’s expected that the material is going to be technical in nature. In fact, that’s one of the things that I love about coming to conferences like this, because I get a chance to see and learn about things that I wouldn’t have the time to focus on in my day-to-day work. It’s also great to see people who are passionate about what they do.
The problem comes when presenters have spent so much time in their day-to-day work that they tend to forget that the audience may not be experts in that particular field, even if they are bright people in their own right.
I’ve gone off on rants before about poor presentation skills, but what gets frustrating is when I have been looking forward to a particular subject and the presenter shows a distinct apathy as to whether or not I understand what he is saying.
One look at the agenda for SDC shows numerous tracks. In fact, the coverage of storage topics is so broad that it could easily fill a college degree in storage. SMB, Ceph, SCSI, iSCSI, Object-Based Storage, Software-Defined Storage, Fibre Channel, Fibre Channel over Ethernet (FCoE), RDMA over Ethernet (RoCE), BSD File Systems, Security… there is so much information that it is impossible for anyone to walk into every one of these topics with a reasonable depth of knowledge.
Unfortunately, many presenters seemed to work from the perspective of “this is a developer’s conference so I don’t have to have it make sense,” which was truly disappointing. Personally, I may have a better-than-the-average-bear understanding of Fibre Channel and FCoE, but am a bit sparse on the Ceph and Object-Based stuff. I may understand SCSI and FC standards enough to talk at the byte level, but may not be as clear on the difference between SMB and TCP at the session layer.
In other words, just because I may have technical chops in some areas doesn’t mean I’ll have them in all of them, and it is imperative that the speakers appreciate that fact. The attitude that I seemed to get from some of the speakers – especially founders of companies who were now CTOs of those companies – was that “if you don’t get this you must not be technical and shouldn’t be at the conference in the first place.”
Exactly how are you supposed to sustain such a position? In some cases – like Ceph – there was only one-and-a-half sessions where the subject was covered at all. So, for instance, it was not as if this was a “Ceph” conference where you could pick up some information somewhere in the conference. Effectively it was the responsibility of the speaker to make sure that the audience was following along… if he cared at all whether or not they did, that is.
I went back and forth with several employees of Inktank on Twitter regarding the session for these reasons. My biggest complaint of all was that the speaker – founder and CTO Sage Weil – went through his slides so fast that it was impossible to follow what he was saying unless you already knew what he was going to say.
My objection, though, wasn’t regarding the technical depth. It was regarding the fact that there was too much disconnect between the individual points of depth in the presentation, and that it was blitzed through as if Sage was running from Dementors of Azkaban.
It was absolutely impossible to grok what he was saying, because no one was allowed to see the bottom half of any screen for more than 1.27 seconds as he raced to get to his next point. Given that (as of this writing) the presentation has not been posted to the SNIA conference web page, not only can I not go back and learn at my own leisure, but the chances of me being able to tie what he said back in to what he’s written has now approached nil.
Part of the specific problem with the presentation, for me personally, was that it dealt with Geo-replication solutions, something with which I am vaguely familiar. In fact, I’ve written quite a bit about storage and distance myself. So, while I might not be an expert in Ceph, I at least had a starting point that I was looking forward to using as a basis for understanding what he was talking about.
However, it became very clear to me that Sage didn’t understand how storage replication across distances worked at the enterprise level. In fact, it was almost as if he was assuming a completely virgin problem, much like in high school Physics class where they tell you to “assume a frictionless environment.”
To me, this is a ‘brute force’ means of trying to convey information. That is, you shovel data upon the audience and try to overwhelm them, in an effort to bewilder them into thinking there’s more coherence and substance than there actually is: the more information you pile on and the more details you expose, the easier it is to avoid placing the technology into a ‘big picture.’
As it turns out, this was something that wasn’t limited to any one particular presentation or company. Felix Xavier, Founder and CTO of CloudByte, suffered much of the same fate in a way. His presentation, “Hosting Performance-Sensitive Applications in the Cloud” is something that anyone who is seriously considering outsourcing their storage should be thinking about.
After adequately identifying some of the limitations with storage systems in general, Felix started talking about the need for “Software Defined Storage” to overcome them. Okaaaaaay…
I’ll get to the whole “Software Defined Storage” question in a little bit, but the problem here is that much of what’s happening is the same thing that has always happened: Someone finds a different way of saying the same damn thing and all of a sudden it’s supposed to make you sexier, brighten your teeth, make you lose weight, and pick up a hot babe.
If you’re going to identify a possible solution for a problem that you outline, it’s imperative that the definition be crystal clear as to what you’re trying to do. For me, the major complaint was with this slide, which was presented as the mechanics of how to solve the problems:
Somehow, this slide is supposed to be the culmination point, the coup de grâce for explaining how software-defined storage “enables guaranteed storage multi-tenancy.”
I have a problem with this, however. First, I don’t exactly know where to look for the “how” in this solution. CloudByte may have a fantastic solution, but I sure wouldn’t know what the hell it is or how it works from this slide (remember, this is the “Therefore…” slide, the “…and this is how you solve the problem” slide).
I understand that a virtual machine abstracts out CPU cycles, RAM access, and various ports on a server. I understand that a VLAN abstracts out the number of ports on a switch and helps shape physical traffic into logical Quality of Service (QoS). I get that, because you are abstracting out a logical construct from physical assets.
So, please tell me: how can you abstract out a metric like IOPS or throughput or latency? Are you trying to divide up the latency itself, so that you force more latency onto some applications rather than ensuring that you get the lowest latency possible? I don’t get it.
Moreover, if you’re just throttling capabilities (so as to enact various levels of service, for instance), how are you really defining the storage network for a multi-tenant environment? Aren’t you just taking a physical wire and deciding how to implement it?
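To make the “throttling” interpretation concrete: the only mechanism I can picture behind a “guaranteed IOPS” claim is a per-tenant rate limiter, something like a token bucket. This is a hypothetical sketch of that idea (the class name and limits are my own invention, not anything CloudByte presented) – and note that it only caps each tenant’s I/O rate; it never *lowers* anyone’s latency, it just adds waiting:

```python
import time

class IopsThrottle:
    """Minimal token-bucket limiter capping one tenant's I/O rate.

    A hypothetical sketch of the 'throttling' reading of guaranteed
    storage QoS -- not any vendor's actual mechanism.
    """
    def __init__(self, iops_limit):
        self.rate = iops_limit           # tokens (I/Os) added per second
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def admit(self):
        """Return True if one I/O may proceed right now."""
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Two tenants sharing one array, each with a hard IOPS ceiling.
tenant_a = IopsThrottle(iops_limit=500)
tenant_b = IopsThrottle(iops_limit=2000)
print(tenant_a.admit())  # first I/O is admitted while the bucket is full
```

If this is roughly what “abstracting” IOPS means, then my question stands: you’re dividing up a physical wire by deciding who has to wait, not conjuring a new logical resource out of thin air the way a VM or VLAN does.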
Revealing the Ugliness Beneath
One of two things is happening here. At the least charitable, a cynic could say that presenters are trying to keep audiences in the dark as to what is going on, falling back on a condescending “awwww, was it too technical for the poor baby?”
Believe it or not, I do not take it that way (nor did I take Bryan’s tweet above in that manner). But the sad truth is that there is an almost egregious lack of care on the part of some of the presenters in the sessions to ensure that the audience is following along. This, in turn, makes me wonder if they realize that with all the competition for our eyeballs and earlobes, they will lose out in the long run as people fail to recall even the most basic premise of their presentations.
The other interpretation is, I think, far more damaging in the long run, as well as far more likely. I believe that many of these developers (and founders) have spent so little time outside of their own technology, focusing with their head down swimming in their own direction, that they have completely failed to see how far away from shore they really are.
What’s even worse is that many of these developers seem to think that “storage” is some monolithic entity where the only differentiator is the part they’re working on. It’s amazing to me how many people at this conference – and even outside of it – have blinders on about just how large and intricate the storage ecosystem is.
In my final part, I’m going to discuss a little more about what happened when the subject of Software Defined Storage came up, and how I got roped into being on a panel at the conference. [Update: I published Part III on my Cisco Blog, as I thought it had more applicability to my current role.]