This may sound like a strange question to be asking. What has OpenFlow got to do with Fibre Channel? Let me explain.
The promise of OpenFlow has been standards-based, interoperable, and open networking. Plug and play. Kind of like Ethernet in that you can take NICs or CNAs from one vendor, access switches from another vendor, core switches from another vendor and plug them together, and expect things to come up and be operational. The promise of OpenFlow is similar. Take an OpenFlow controller from a vendor, pick an application from a suite of applications based on your problem, and pick any OpenFlow switch — and things should work consistently. In theory, that is…
While there is a lot of interoperability work going on around OpenFlow, interoperability, as we all know, is not the same as deployability. Today if you take an OpenFlow-enabled switch from one vendor, an OpenFlow controller from another vendor and run an application on top of that, the experience you get will vary significantly from one ecosystem of controller and switch to another. And the lack of standardized northbound APIs only ensures that applications today get tied to the controller and, by extension, to the controller-switch ecosystem. Think end-to-end lock-in.
This is, unfortunately, beginning to look like the days of Fibre Channel, where you lived by a matrix of vendors that were tested and qualified from the HBA to the fabric to the target. If you went outside that matrix, you were on your own. Unlike Ethernet, which is characterized by plug-and-play deployments between vendors, Fibre Channel was not really open and interoperable from that perspective. And it seems that we may be headed in that direction with OpenFlow.
There are two key reasons I think we are seeing this in the OpenFlow world. The first is that most switches that support OpenFlow have implemented the OpenFlow pipeline in different ways, even when using the same underlying merchant silicon. This is particularly the case with OpenFlow 1.0 implementations, but we may be headed down the same path with OpenFlow 1.3 as well. For example, some vendors have implemented OpenFlow using ACLs (Access Control Lists), while others use, say, the Layer 2 forwarding table in conjunction with ACLs, and yet others use Layer 2 and Layer 3 tables and ACLs. The way flow tables are managed, the way flows are distributed across tables, and so on all vary from switch vendor to switch vendor, even when using the same merchant silicon. Consequently, the application behavior and end-user experience can vary across OpenFlow switch implementations.
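To make the divergence concrete, here is a minimal sketch of the problem. All vendor behaviors and table names below are invented for illustration; no real switch SDK is being modeled.

```python
# Hypothetical sketch (all table names and vendor policies invented):
# two switch vendors place the SAME OpenFlow rule into different hardware
# tables, even on identical merchant silicon.

def vendor_a_place(rule):
    """Vendor A: every OpenFlow rule goes into the ACL/TCAM table,
    which is flexible but small."""
    return [("acl_tcam", rule)]

def vendor_b_place(rule):
    """Vendor B: pure L2 matches go to the large L2 forwarding table;
    anything touching L3 fields falls back to the TCAM."""
    match_fields = set(rule) - {"action"}
    if match_fields <= {"dl_dst", "dl_vlan"}:
        return [("l2_table", rule)]
    return [("acl_tcam", rule)]

# An exact L2 match, the kind a simple forwarding app would install.
l2_rule = {"dl_dst": "00:11:22:33:44:55", "action": "output:3"}

# Same rule, same chip, different table: table capacity, rule limits,
# and failure modes now depend on which vendor's switch the controller
# happens to be talking to.
print("vendor A:", vendor_a_place(l2_rule)[0][0])  # vendor A: acl_tcam
print("vendor B:", vendor_b_place(l2_rule)[0][0])  # vendor B: l2_table
```

An application that installs thousands of exact L2 flows would run out of table space almost immediately on the vendor A sketch, while fitting comfortably on vendor B, which is exactly the kind of inconsistency described above.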
Many vendors have also implemented their own “extensions” to the protocol. Now, while all this provides “differentiation” between vendors, it also ensures that an application running on a controller will behave differently depending on the type of switch you are using. This means that anyone looking to deploy OpenFlow has to carefully pick an ecosystem of OpenFlow switches and controllers.
The other reason I think we are seeing this ecosystem lock-in is the lack of standardized northbound APIs on the controller. This is a real challenge from an application perspective, because it ties the application to the controller and, by extension, to the controller-switch ecosystem. The recent fallout between the Floodlight and OpenDaylight folks doesn’t help either. The industry really is in the midst of controller wars, so to speak, and it is unclear which controller is the right platform on which to build applications. You have various open-source controllers, and then you have a plethora of commercial controllers. Each has its own APIs and its own versions of applications that, in many cases, try to address similar problems. Again, competition is good and so is differentiation, but in this case the differentiation is definitely on the path to end-to-end lock-in, from application to controller to switch.
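The northbound-API problem can be sketched in a few lines. Both controller classes below are invented stand-ins: each exposes the same capability, blocking traffic from a host, through an incompatible API, so the application logic has to be written once per controller.

```python
# Hypothetical sketch of the northbound-API problem. Both controllers
# are invented; neither models a real product's API.

class ControllerX:
    """Invented flow-centric controller: apps speak in raw flow entries."""
    def __init__(self):
        self.flows = []

    def push_flow(self, match, actions):
        self.flows.append({"match": match, "actions": actions})


class ControllerY:
    """Invented policy-centric controller: apps speak in named policies."""
    def __init__(self):
        self.policies = []

    def add_policy(self, name, src_ip, permit):
        self.policies.append({"name": name, "src_ip": src_ip, "permit": permit})


def block_host(controller, ip):
    """The application: one intent, two controller-specific code paths.
    Supporting a third controller means writing a third branch."""
    if isinstance(controller, ControllerX):
        controller.push_flow(match={"nw_src": ip}, actions=[])  # no actions = drop
    elif isinstance(controller, ControllerY):
        controller.add_policy(name="block-" + ip, src_ip=ip, permit=False)


x, y = ControllerX(), ControllerY()
block_host(x, "10.0.0.5")
block_host(y, "10.0.0.5")
print(len(x.flows), len(y.policies))  # 1 1
```

A common northbound API (and, more importantly, a common data model behind it) would collapse those branches into one, which is precisely what application portability requires.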
The good thing is that, given where OpenFlow is in its lifecycle, this is not unexpected; many new technologies go through this phase on their path to maturity and truly interoperable deployments. However, it will be important for the OpenFlow community to recognize the signs of lock-in beginning to creep into the OpenFlow world and address them quickly.
There are a couple of things that I think can help address this. The first is that merchant silicon vendors could provide a reference implementation of, for example, the OpenFlow 1.3 specification over their silicon, either as part of their SDK or as an application module that sits on top of it. That would take much of the guesswork out of mapping the OpenFlow specification and pipeline to the underlying tables and capabilities of the silicon, and it would lead to more uniformity in OpenFlow implementations, both at a basic protocol level and across OpenFlow switches from different vendors using a common merchant silicon family. Ideally, the OpenFlow reference stack from the merchant silicon vendor itself would go through interoperability and compliance testing against a set of controllers. Some of this is beginning to happen, and that is a positive development. The more merchant silicon vendors that provide this compliant reference stack, the more consistent a user’s experience can be when using switches based on merchant silicon.
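The value of such a reference stack can be sketched simply: if every switch vendor on a given chip calls the same placement logic shipped with the silicon SDK, the same flow-mod lands in the same hardware table everywhere. The table names and placement policy below are invented for illustration.

```python
# Hypothetical sketch of a merchant-silicon reference mapping layer.
# One shared function (imagined as part of the silicon vendor's SDK)
# decides which hardware table an OpenFlow 1.3 flow-mod lands in, so
# every switch built on this invented chip behaves identically.

L2_TABLE, L3_TABLE, ACL_TCAM = "l2", "l3", "acl"

def reference_place(match_fields):
    """Reference placement policy (invented). Takes the set of OpenFlow
    match-field names in a flow-mod and returns the target table."""
    fields = set(match_fields)
    if fields <= {"dl_dst", "dl_vlan"}:
        return L2_TABLE   # exact L2 matches: large, cheap exact-match table
    if fields <= {"nw_dst"}:
        return L3_TABLE   # pure IP-prefix matches: LPM table
    return ACL_TCAM       # multi-field matches: flexible but small TCAM

# Any vendor shipping this chip gets the same answers:
print(reference_place({"dl_dst"}))            # l2
print(reference_place({"nw_dst"}))            # l3
print(reference_place({"dl_dst", "nw_dst"}))  # acl
```

With the placement policy fixed by the silicon vendor rather than reinvented by each switch vendor, a controller can at least rely on consistent table behavior across all switches in that silicon family.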
The other development that can help fulfill the promise of open networking is convergence on a set of northbound APIs and, more importantly, on the underlying data model for those APIs. This will help application portability. More significantly, given the uncertainty about which OpenFlow controllers will still be around a year or two from now, it will help address end-user concerns about picking winners in the controller wars and, by association, the applications for their deployment. The OpenDaylight effort brings this promise, given the broad industry backing behind it. If the OpenDaylight project moves rapidly, it has the potential to become the de facto standard when it comes to northbound APIs, which could provide a positive impetus to the industry. However, it remains to be seen how quickly this initiative delivers on its promise.
Until then, many folks in the industry are experimenting with the technology and getting to understand it, but when it comes to going live with deployments, many are taking a wait-and-watch approach. And who can blame them? Who would want to pick an end-to-end solution and get locked into it, only to see the industry move in a different direction?