On August 6, 2014, Steve “The Woz” Wozniak gave the keynote at the Flash Memory Summit in Santa Clara, CA. Like most of the people in the room, I was extremely excited to be able to see one of my childhood heroes (more on this later) up close and personal (more on this later too).
But first, a short clip of his talk, where he is asked to give advice to the engineers in the audience:
[I tried to clean up the audio a bit, but given the nature of the large room we were in don't expect documentary-style quality :-P] Read more…
Most people who know me know that I am a rabid individualist. Nearly every morning for the past 7 months or so I’ve posted a quote of the day through my twitter feed; sometimes they are retweeted, most of the time they are not. Obviously the short ones are more likely to be retweeted. Nevertheless, these quotes (not all of which I agree with, by the way) are shared because they make me think, and perhaps they will make others think as well – if only for the span of time it takes to glance through a tweet in someone’s feed. Read more…
Okay, I know there are a lot of different explanations about how to describe these common components of networks (see, for instance, here, here, here, here, and here), but every once in a while I get a question about whether or not 40G will make things go “faster” in networks (often relating to FCoE and storage in general). Why write another one? I wanted to see how fast I could make this easy to understand.
Last night as I was trying to wind down to go to sleep, I had a brainstorm about an interesting visual (to me, at least) that might help explain some of the different concepts in moving data around. Your mileage may vary. Read more…
There is something I’ve been struggling with lately: dealing with the pressure that comes naturally to a workaholic in high-stress environments. You may have been struggling with it too; in fact, a friend of yours may have sent you a link to this blog for one very important reason that you may have missed somewhere along the way:
You do good work. Read more…
I’ve been struggling with names (or rather, the act of naming) recently. As I get more and more involved in the world of Data Centers and Programmability, trying to become familiar with a world that remains considerably alien to me, I begin to struggle with some of the rather buzzword-laden (but ultimately vague) nomenclature:
- Software-Defined Networks
- Software-Defined Storage
- Software-Defined Data Center
Yet, when you break down what each of these things means, you start to realize that they become rather limited in what can actually be deployed. “Everything in software,” “Hardware means nothing,” “You need both hardware and software,” etc. Doesn’t this sideshow actually distract us from what we’re really trying to do?
I thought we were simply trying to make deployments easier and more flexible. We keep hearing about “lock-in,” but what it really means is that we don’t want to be locked into our own decisions. In other words, we want to change our minds, adapt as necessary, and not have to build a whole new Data Center just because our technological crystal ball during the financial crises of 2008 and 2009 forced us to make choices that now make us feel trapped.
What’s wrong with just a “User-Defined Data Center?” I mean, I’ve always been a huge proponent of using the right tools for the job, but why swap out one limitation (hardware-based) for another (software-based)? If I’m a Data Center user, don’t I just want it to work the way I want it to? After all, that’s what all this hype is supposed to be about, anyway. It’s my equipment, my applications; I want to define it.
It seems to me that all this stuff about “software-defined” still withholds the actual decision-making power from the user and keeps it in the hands of those who create the software, rather than offering any real liberation from “lock-in,” which is what much of the marketing hype wants you to believe.
Just a quick take about nomenclature, nothing more. We now return you…
During the recent OpenStack Summit in Atlanta, GA, I wrote a blog on the Cisco blog site called “Thoughts on #OpenStack and Software-Defined Storage,” arguing for the need to avoid reinventing the wheel when it comes to Software-Defined X and storage. My very good friend Stephen Foskett wrote a (fair and excellent) response, entitled Why OpenStack Doesn’t Need Fibre Channel Support.
Stephen’s argument, essentially, is that Fibre Channel, and the conventional IT that it represents, do not really have a place in Open systems such as the OpenStack ecosystem. Thing is, I think Stephen’s argument is a little too late. The cat has left the barn, the genie has been let out of the toothpaste tube, and the train has left the gate. Or something to that effect. Read more…
For much of this project there has been a lot of “one step forward, two steps back,” especially when it comes to things like brakes and suspension for Porkchop. My exasperation was mitigated a bit, however, when I made some excellent progress within two weeks’ time with the help of my wife and a friend. In the short video below (no audio), you’ll see as we carefully reassembled the axles to the wheels, and then the axle assembly to Badger’s frame.
All in all, progress is a very satisfying motivation. As usual, there are before/after shots too.