OpenFlow, the southbound API between the controller and the switch, is getting most of the attention in the current SDN hype-fest, but the northbound API, between the controller and the data center automation (orchestration) system, will have the biggest impact on users.
SDN has the potential to be extremely powerful because it provides a platform for developing new, higher-level abstractions. The right abstraction can free operators from layers of implementation detail that are not scaling well as networks increasingly need to support hyper-scale data centers. If the controller becomes a single point of management that automates the process of telling individual switches, ASICs and TCAMs what to do, then a (southbound) wire protocol is required for that purpose.

For more than a decade, commercial control plane implementations have provided a programmatic means for a control plane process or device to set up the behavior of the data plane. These were proprietary, and as such did not improve multi-vendor interoperability or drive down costs by catalyzing competition. However, they demonstrated that the problem can be solved in multiple different ways and can adequately support all of the diverse standards-based protocols for layer 2, layer 3, MPLS, and everything else the IETF, ITU, IEEE, et al. could dream up. This is a solved problem.

OpenFlow is immature and evolving, and can't yet support many (perhaps most) of the protocol technologies widely used in networks today. It is not clearly the best southbound protocol between the control plane entity and the data plane today, but it seems very likely to get there some day, given the investment being made at the ONF and elsewhere. As long as it performs the task of setting up the network elements to implement the will of the controller, nobody on the operator side should care how it does so. Its role is to live in a plumbing layer and be one of those services we don't know exists (as long as it is working correctly).
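The single-point-of-management pattern described above can be pictured with a toy model: a controller that computes a path and pushes per-switch forwarding entries south. This is an illustrative sketch only; `FlowRule`, `ToySwitch` and `Controller` are invented names, not OpenFlow API calls, and the real wire protocol details are exactly what the paragraph argues operators should never need to see.

```python
# Toy model of the southbound role: a controller translating its global
# view into per-switch forwarding entries. All names are invented for
# illustration; this is not the OpenFlow protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    match_dst: str      # e.g. a destination address or prefix
    out_port: int       # port the switch should forward matching traffic to
    priority: int = 0

class ToySwitch:
    """Stands in for the data plane: it only holds the rules it is given."""
    def __init__(self, name):
        self.name = name
        self.table = []

    def install(self, rule):
        # In a real deployment, a southbound wire protocol (OpenFlow or
        # otherwise) would carry this rule down to the ASIC/TCAM.
        self.table.append(rule)

class Controller:
    """Single point of management: computes paths, pushes rules south."""
    def __init__(self, switches):
        self.switches = {sw.name: sw for sw in switches}

    def program_path(self, dst, hops):
        # hops: list of (switch_name, out_port) along the computed path
        for name, port in hops:
            self.switches[name].install(FlowRule(match_dst=dst, out_port=port))

# The operator programs the controller, never the switches directly:
s1, s2 = ToySwitch("s1"), ToySwitch("s2")
ctl = Controller([s1, s2])
ctl.program_path("10.0.0.5", [("s1", 3), ("s2", 1)])
```

The point of the sketch is the division of labor: the controller owns the decision, and the southbound channel is pure plumbing that could be swapped out without the operator noticing.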
In contrast, the northbound API, the interface between the controller and the users (or, more likely, their automation systems), is what will determine how painful or wonderful the experience of operating the network is going to be. So far there is little or no talk of standardizing an abstraction and the APIs to expose it upstream, but some emerging proprietary implementations help frame the conversation.

NEC has shipped perhaps the first commercial attempt to abstract the network using OpenFlow: a graphical user interface that lets the network engineer drag and drop virtual routers and virtual bridges to build a network. It is a step in the right direction, moves us well above the IOS level of abstraction at the CLI, and deals with a topology abstraction that will need to become visible if automation fails and debugging is required. However, for the application developer or cloud manager, even concepts like routed versus bridged need to be abstracted away from the basic provisioning interface.

Closer to where this needs to end up is the approach recently announced by HP and being implemented by some stealthy startups I can't pre-announce due to NDA. Here the abstraction level is about workloads (applications) and policies (requirements for a network that supports them adequately). Anyone with knowledge of the application's performance and security needs should be able to describe them in protocol-free, high-level terms and click the OK button, leaving the controller intelligence to translate that into a network implementation. Now we're talking.
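One way to picture the workload-and-policy abstraction described above is as a declarative statement of requirements, with the controller deciding the mechanism. The sketch below is hypothetical; every field and function name is invented for illustration, and none of it reflects HP's or any vendor's actual API.

```python
# Hypothetical sketch: an application owner states needs in high-level,
# protocol-free terms; the controller decides how to realize them.
# All names here are invented for illustration.

app_policy = {
    "workload": "order-processing",
    "tiers": ["web", "app", "db"],
    "requirements": {
        "max_latency_ms": 5,          # performance need
        "min_bandwidth_mbps": 500,
        "isolation": "per-tenant",    # security need
        "reachable_from": ["web"],    # only the web tier is externally exposed
    },
}

def translate(policy):
    """Stub for the controller intelligence: maps intent to an (opaque)
    network implementation. Whether the result is routed, bridged or
    tunneled is deliberately invisible to the policy author."""
    segments = {tier: f"segment-{i}" for i, tier in enumerate(policy["tiers"])}
    allowed = [(src, dst)
               for src in policy["requirements"]["reachable_from"]
               for dst in policy["tiers"] if dst != src]
    return {"segments": segments, "allowed_flows": allowed}

plan = translate(app_policy)
```

Note what is absent: no VLANs, no subnets, no routing protocols. That is the level at which the "click OK" experience has to operate.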
If and when such solutions ship, we will see the next evolutionary requirement. In a cloud environment the network is but one of three legs of the stool, complementing the compute and storage components, so the single point of management sought is not a network manager but a cloud manager. Conceptually this sits on top of a compute controller, a storage controller and a network controller that all expose abstract concepts via standard APIs. Nobody who has a cloud orchestration manager uses a proprietary GUI on a compute manager to spin up a new VM; nobody should have to use a network virtualization console to provision for a new application either.

There is some good progress on standardizing the APIs between cloud orchestration systems and the compute and storage subsystems. VMware's management tools are a de facto standard among their captive users, and the open-source hypervisors have documented automation APIs as well. Yet there is virtually no public discussion of a standard network abstraction and a corresponding standard API that orchestration builders can use. Making progress here will certainly be politically difficult, yet that is exactly the kind of thing the ONF should address to deliver real value and make the network an equal partner in the cloud automation revolution. OpenStack's Quantum API does not come close to the high level of network capability abstraction required. I frequently hear operators say "we're going OpenStack, so we need a Quantum API implemented by the SDN controller," but this won't get them to supporting workloads with the big cost savings of reduced complexity.
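The three-legged stool can be sketched as an orchestrator that drives compute, storage and network controllers through uniform, abstract interfaces. This is a hypothetical shape with invented class and method names, not OpenStack's or any product's actual API; its only purpose is to show where the missing standard northbound network API would sit.

```python
# Hypothetical sketch of a cloud orchestrator composing compute, storage
# and network controllers behind one provisioning call. Interface and
# method names are invented for illustration.
class ComputeController:
    def create_vm(self, name):
        return f"vm:{name}"

class StorageController:
    def create_volume(self, name, gb):
        return f"vol:{name}:{gb}GB"

class NetworkController:
    def realize(self, policy):
        # This is where the missing standard northbound API would sit:
        # the orchestrator hands over intent, not VLANs or flow rules.
        return f"net:{policy['workload']}"

class Orchestrator:
    """The cloud manager: one call provisions all three legs of the stool."""
    def __init__(self, compute, storage, network):
        self.compute, self.storage, self.network = compute, storage, network

    def deploy(self, app):
        return {
            "vm": self.compute.create_vm(app["name"]),
            "disk": self.storage.create_volume(app["name"], app["disk_gb"]),
            "net": self.network.realize({"workload": app["name"]}),
        }

orc = Orchestrator(ComputeController(), StorageController(), NetworkController())
result = orc.deploy({"name": "billing", "disk_gb": 40})
```

The compute and storage legs of this sketch have real, documented counterparts today; the argument of this article is that the network leg does not.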
The orchestration world today couldn't be further from standardization. On one hand, there are dozens of disparate custom systems built by the biggest data center operators for competitive advantage in their application verticals, each a closely guarded secret. On the other, there are many open-source and proprietary efforts being built for enterprise cloud builders without the means to build their own from scratch. If a new network virtualization layer is to emerge and enable a powerful new level of abstraction in cloud orchestration solutions, the language for that machine-to-machine conversation needs to be well understood and friendly to those who must expose a GUI above it.

We in the network community need to figure out how to take the conversation from the southbound wire protocol to the northbound automation protocol. The network industry needs the same kind of buzz and innovation horsepower applied to this problem that is currently targeted at OpenFlow. Courageous early adopters and powerful vendors have started the drip of revenue in this area, but the firehose promised by the SDN hype won't turn on until the massive savings in operating expenses are achievable using the next generation of cloud automation tools.
CONTRIBUTED ARTICLE DISCLAIMER
Statements and opinions expressed in articles, reviews and other materials herein are those of the authors, not the editors and publishers.