Big Switch Networks CTO Rob Sherwood on SDN in 2015: The Time is Now

Big Switch Networks CTO Rob Sherwood touts the maturity and ease of use of Big Switch's SDN solutions, the power of open switching platforms, Docker, overlays, and more


[Rob] You can grab something like that and either deploy our OpenFlow, write your own OpenFlow, or write something else. We have people using it to create custom protocols. OpenFlow as a concept is extremely useful: let's expose the switch's raw forwarding memory via an RPC. OpenFlow as a specific protocol, whether it's the ONF's OpenFlow or something else that does the same job, I think as open source evolves, as things like Open Network Linux take off, that specific detail will become less important.

[Art] There's all this language confusion around SDN. What exactly does SDN mean? In some cases it's only OpenFlow, in some cases it's only OpenFlow in physical hardware, and other people define it as something really broad. From my perspective, that all started at the first Open Networking Summit: as soon as SDN looked like it might become a hyped term, every vendor out there came back and said, "What we already have today is SDN. That new stuff coming out of those starry-eyed visionaries over there in the academic space, that's not really SDN." Partly as a result of that, I saw a little doubling down on "No, OpenFlow is SDN."

At the same time, from the first ONS it was always very, very clear: SDN is not OpenFlow, and OpenFlow is not SDN. What we're really trying to do is change the industry and open the door to all different types of new thinking, figure out what works best, and OpenFlow will be part of that. From my perspective, that seems to be one of the sources of the terminology confusion. I hope that's getting a little better today. I was just wondering if you saw similar currents over these past few years?

[Rob] Absolutely. It's worth going back to the article where the term SDN was first coined. I was actually interviewed for that article.

[Art] Yeah, yeah, the MIT Tech Review. I remember that one.

[Rob] Basically the thinking on the other side was, "Well, it's a little bit like software defined radios, because you can maybe change some of the networking things, so I'll call it software defined networking." I always tell people the term started out imprecise and didn't really get much better from there.

[Art] That's interesting that you mention that, because I had been thinking the opposite. Software defined radio access network: I thought they pulled that term from SDN. Fascinating thing to learn.

[Rob] The thing that's a little bit confusing is that networking has always been about hardware and software in combination. If you look at how a router or a switch is made, there's a lot of dedicated hardware in there, in terms of an ASIC or different types of forwarding memory and whatnot, but there's also a lot of software. If you went to a traditional vendor and said, "Okay, we're now going to make the network software defined," they'd say, "Well, we have a huge software team. We actually write a lot of software. What does that mean?" The definition that I like, and the one the ONF supports, is that there's actually a controller involved: a remote controller making RPC calls down to the individual switches and receiving calls back. Whether those RPC calls are OpenFlow or just OpenFlow-like, I think, is almost an irrelevant implementation detail. The idea is providing some sort of abstracted view of the network via a hierarchy.
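To make that definition concrete, here is a minimal, hypothetical sketch of the controller idea Rob describes: one remote process holds the desired forwarding state and pushes it down to each switch through an RPC of some kind. The SwitchRpc class and its add_flow_entry() call are illustrative stand-ins, not any real controller's or switch's API.

# Hypothetical sketch: a remote controller programming switches via an RPC.
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match_dst_ip: str   # which packets this entry applies to
    out_port: int       # where matching packets get sent

@dataclass
class SwitchRpc:
    """Stand-in for whatever RPC channel (OpenFlow or otherwise) reaches a switch."""
    address: str
    table: list = field(default_factory=list)

    def add_flow_entry(self, entry: FlowEntry) -> None:
        # On a real switch, this call would program the forwarding memory.
        self.table.append(entry)
        print(f"{self.address}: installed {entry}")

class Controller:
    """The remote controller: it decides; the switches just forward."""
    def __init__(self, switches):
        self.switches = switches

    def route(self, dst_ip: str, out_port: int) -> None:
        entry = FlowEntry(dst_ip, out_port)
        for sw in self.switches:
            sw.add_flow_entry(entry)   # the RPC call "down" to each switch

fabric = [SwitchRpc("10.0.0.1"), SwitchRpc("10.0.0.2")]
Controller(fabric).route(dst_ip="192.0.2.10", out_port=3)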

[Art] The interesting thing about that definition is that it makes me wonder: what is the line between a management, automation, or orchestration platform and a controller?

[Rob] There's a critical difference, and I apologize if I get too technical. Many of those tools will actually set configuration parameters in the individual switches, but those only indirectly affect forwarding decisions, whereas in my mind a controller affects the forwarding decisions directly, by actually populating the forwarding memory itself. The analogy that I always make is to a multi-linecard switch, the big iron boxes that everybody has. They almost always have a couple of supervisor modules, some linecards, and some fabric backplanes. If you look at the protocol between those supervisor modules and the linecards, there actually is a protocol inside that box for how, when a new route comes in, the linecards get programmed. I call that protocol ClosedFlow, because it's just like OpenFlow and the supervisor cards are just like controllers. The idea that you could do this in an at least somewhat open way is the thing that's fundamentally different. It's not architecturally different; something that was closed is now open, and that's the difference.
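As a hypothetical illustration of that distinction, the sketch below contrasts a management or orchestration tool setting a configuration knob, which only indirectly influences forwarding, with a controller writing a forwarding entry itself. The Switch class and both method names are invented for illustration, not drawn from any real product.

# Hypothetical sketch: indirect configuration versus direct forwarding control.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.config = {}       # knobs such as protocol costs -- indirect influence
        self.forwarding = {}   # prefix -> egress port -- the forwarding memory

    # What a management/automation/orchestration platform typically does:
    def set_config(self, key: str, value) -> None:
        self.config[key] = value
        # ...the switch's own control plane may later recompute routes from this.

    # What a controller does:
    def install_entry(self, prefix: str, port: int) -> None:
        self.forwarding[prefix] = port   # forwarding behavior changes right here

sw = Switch("leaf-1")
sw.set_config("ospf_cost_eth1", 20)       # management: indirect, eventual effect
sw.install_entry("198.51.100.0/24", 4)    # controller: direct, immediate effect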

[Art] I think that, especially for people who are just getting into OpenFlow or getting their first hands-on experience with it, is a really powerful analogy. In networking we are perhaps the most risk-averse people in IT, because everything goes down when the network goes down. No amount of high availability built into your important app matters when the network is down. If you've been in networking, though, you're familiar with these big chassis switches; we've used them a lot, for a lot of years. If you look at how they work, as you said, the forwarding decisions and essentially the forwarding database are managed by the supervisor, and especially in modern chassis doing distributed forwarding, essentially the decision engine is cached on each individual linecard. That linecard goes ahead and makes the forwarding decisions, assuming it has an entry.
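A hypothetical sketch of that lookup path: each linecard keeps a local copy of the forwarding table and forwards on its own when it has an entry, punting to the supervisor on a miss, which is the same shape as an OpenFlow switch sending a table miss up to its controller. The classes below are illustrative, and real hardware does longest-prefix matching rather than the exact-key lookup shown here.

# Hypothetical sketch: distributed forwarding with a supervisor behind it.
class Supervisor:
    """Holds the authoritative forwarding database for the whole chassis."""
    def __init__(self):
        self.routes = {"198.51.100.0/24": 4}

    def resolve(self, prefix):
        return self.routes.get(prefix)

class Linecard:
    """Forwards locally when it has a cached entry; otherwise asks the supervisor."""
    def __init__(self, supervisor):
        self.supervisor = supervisor
        self.cache = {}

    def forward(self, prefix):
        if prefix in self.cache:                 # hit: decide entirely on the card
            return self.cache[prefix]
        port = self.supervisor.resolve(prefix)   # miss: punt up, like an OpenFlow
        if port is not None:                     # switch sending a packet-in
            self.cache[prefix] = port            # cache the decision locally
        return port

card = Linecard(Supervisor())
print(card.forward("198.51.100.0/24"))   # miss, punted, then cached
print(card.forward("198.51.100.0/24"))   # hit, handled on the linecard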

When you take OpenFlow, it's the same thing minus the chassis, just like you said. I wanted to highlight that point. For anyone who's feeling uncomfortable with the idea of OpenFlow, it really is a familiar dynamic. There are a lot of new things about it, but there are also a lot of comfort points and ways it's similar to traditional architecture for those getting into it. Do you have any other guidance for people who are just getting into OpenFlow, to help them wrap their heads around its significance and how it'll impact their operations?

[Rob] Another big one that people talk to me about is support. People get a little bit concerned about support in this disaggregated model, where you might buy the hardware from one company and the software from another. I tell people, "Talk to your server people. Ask them how they buy their servers." The server people have the same situation, and it's not a problem. That is to say, if you want a single vendor, you can pay for that. If you want to buy both through the channel, you can do that. If you want to buy from a reseller, you can do that. If you want to buy direct from the hardware manufacturer and have them support the software, that works as well. Everyone is running around saying everything is different, the sky is falling. I actually spend a lot of my time saying, actually, this is pretty much the same thing we've been doing already; it's only in networking that it's a bit different.

You mentioned briefly that everybody is worried, that network engineers are very conservative because when the network goes down everything goes down. That is strictly a function of the extremely tight, artificial coupling in how the software is set up. If we actually built networks the way we build distributed web applications, we wouldn't have that problem. That's a lot of what Big Switch is trying to do: build these things a little bit more like distributed web applications, so that if you lose a piece, it's not that big a deal.

[Art] That's kind of core to the spirit of the architecture. It seemed like for a long time people would try to engineer individual components to never, ever fail, no matter what. Somewhere along the way we got to web scale and realized individual components are always going to fail. We have to build an architecture that can tolerate a lot of failure and still keep running without a hitch.

[Rob] This is exactly the same transition people went through in moving from mainframes to PCs, and really from supercomputers to data centers. If you look at the architecture of a modern data center, people are starting to build them effectively exactly like supercomputers. People talk about things like pods, and a pod would be one supercomputer node. The idea of moving from hardware that can never fail to hardware that we know is going to fail with some probability, and writing better software to cope with that, is a tough pill to swallow, but it's one the industry has swallowed a couple of times. I have a lot of faith this is going to be reality.

[Art] In 2015, SDN is going to become accessible to a much wider audience, because solutions like Big Switch's are becoming more mature and easier to adopt. You've also got other things finally coming within the enterprise's grasp this year, NSX for example, and other NVO solutions. I think this is going to be a year where a lot of rubber hits the road. I'm curious, what do you expect to see as the fallout from some of that this year?

[Rob] I definitely think if you look at things like the traction that NSX is getting, you'll actually see a fairly big tussle in the underlay space. It's a use case that Big Switch is looking at fairly closely. We provide essentially a managed physical fabric that you can put an overlay on top of, so we can actually be the underlay to NSX's overlay. That's an interesting space. At the same time, if you look at some of the recent work they've done, OVN they're calling it, they're seeing a commoditization push that's already happening in the overlay space, where you've got folks like Midokura and folks like PLUMgrid that are open sourcing and really trying to go after NSX in the only way they can, which is: let's build it more open.

[Art] OpenStack today largely uses a non-SDN framework. Neutron is available, but a lot of production implementations aren't using it. If you go look at Neutron, I think there's this thought of, "Hey, I can go get OpenStack and deploy open source SDN with that." That's not exactly a reality today, particularly with overlays. If you look at what's available from an open source overlay perspective, it's pretty much Open vSwitch, which doesn't have much of a framework around it yet; they're working on that. Then you have some things that are more comprehensive, like NSX and PLUMgrid, but there hasn't been a complete, open source, NSX-like framework for NVOs. I'm familiar with at least three different products coming to the OpenStack community as open source this year that cover NVO and physical OpenFlow as well. That'll be an interesting inflection point, once more open source NSX competitors of similar breadth become available, and I think that happens this year.
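For readers who haven't touched the building block Art mentions, here is a minimal sketch of standing up one leg of a VXLAN overlay with Open vSwitch, driven from Python to match the other sketches in this piece. It assumes a host with Open vSwitch installed and ovs-vsctl on the path; the bridge name, port name, tunnel key, and remote IP are all illustrative.

# Minimal sketch: one end of a VXLAN tunnel on an Open vSwitch bridge.
import subprocess

def sh(cmd: str) -> None:
    # Run one ovs-vsctl command, raising if it fails.
    subprocess.run(cmd.split(), check=True)

# Integration bridge that local VMs or containers attach to.
sh("ovs-vsctl --may-exist add-br br-int")

# VXLAN port pointing at a peer hypervisor: traffic between the two bridges is
# encapsulated over whatever IP underlay already connects the hosts.
sh("ovs-vsctl --may-exist add-port br-int vxlan0 "
   "-- set interface vxlan0 type=vxlan "
   "options:remote_ip=192.0.2.2 options:key=5001")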

[Rob] I'm really happy with where Big Switch is from a product and positioning standpoint because of that. Big Switch actually, historically, pre-pivot, had an NVO product. The joke is that there's an argument about how big that pie is; I'm actually convinced it's not as big as people think it is. What is clear is that there are going to be a lot more people trying to get a bigger slice of it. That's why I'm happy that Big Switch went to the physical fabric side.

[Art] If you look at VMware, what is their biggest strength? Their biggest strength is that everybody in the enterprise and their dog has vCenter sitting there, waiting to potentially be upgraded. If you're VMware, you're thinking, "I want to sell private cloud technology to all these people who already have this stuff." It's not necessarily in their business interest to say, "Hey everybody, to buy my next piece of software, all of you need to go out and buy an entirely new and different type of physical network to go underneath it." There are a lot of technologists who believe in NVO technology very passionately, and I'm not slighting the technology at all, but I do think there's something to be said for the fact that for certain vendors there is a business interest at play, where the case for an overlay SDN versus an SDN underlay may be driven by more than the technology.

[Rob] Absolutely. Now, at the same time, if you look at our typical customer, they've got five to ten new projects in the pipeline that all require new hardware, like new switches. We're not doing rip and replace anymore: take your legacy stuff and leave it where it is. Our stuff gets slotted into the row next to it, because they've got a Hadoop project, an OpenStack project, maybe they're doing something with VDI. It's not so much rip and replace, and honestly, that's just not the reality of modern data centers. In modern data centers the lifetime of gear is probably five years, and there are so many people expanding so quickly, throwing out all their gear that's three-plus or five-plus years old, depending on the company. I actually don't think that deploying new hardware is a significant problem. As long as there's interoperability, you can toggle between the old stack and the new stack, everything speaks IP, and there are no problems there.
