
Summary Bullets:
- Standards are great for ensuring interoperability when the requirements are well understood.
- The requirements for supporting applications that leverage SDN are not yet well understood, and standardizing now would inhibit innovation.
SDN northbound APIs don’t need standardization – at least not at the functional level where command and control semantics live. Like others, I think SDN is far too early in its development to warrant standardization at a functional level. SDN would benefit from standardized architectural approaches such as SOAP or REST, which describe different programmatic styles for interconnecting services, because those architectures are already familiar to application developers. To generate and maintain momentum for SDN innovation, there must be as few barriers to application development as possible.
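To make the REST point concrete, here is a minimal sketch of what a REST-style northbound call might look like in Python. Everything about it – the host, the /policies path, the payload fields – is invented for illustration; what matters is the shape of the interaction, not any real controller’s API.

```python
import json
import urllib.request

# Hypothetical northbound call: ask a controller to prioritize traffic
# between two endpoints. The host, path, and JSON fields are invented
# for illustration; no real controller exposes exactly this API.
policy = {
    "src": "10.0.1.10",
    "dst": "10.0.2.20",
    "priority": "high",
}

req = urllib.request.Request(
    url="http://controller.example.com:8080/policies",  # placeholder endpoint
    data=json.dumps(policy).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send it; the point is that any
# developer who has ever consumed a REST API already knows this pattern.
print(req.method, req.full_url)
print(req.data.decode("utf-8"))
```

That familiarity is the whole argument: an application developer can write this without first learning a network-specific protocol.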
When industry standards are developed and adopted, the impact of early decisions is felt long after the standards are deployed, either enabling or inhibiting further protocol development. For example, the lack of loop control in the first Ethernet specification (and the failure to address it in later specifications) forced the creation of protocols such as spanning tree and restricted Ethernet networks to single-tree designs. If the various groups working on what became Ethernet had a crystal ball and could foresee the impact, they might have addressed the problem earlier. The fact is, even the best-intentioned, brightest people working on standards cannot foresee every possible outcome.
Standardizing the southbound interfaces from an OpenFlow controller to a switch is simpler than standardizing the northbound interfaces because what’s required of the hardware is definable: if the traffic matches some pattern, send it along a pre-defined path. Creating southbound protocols is not trivial by any means, but the problem space is, for the most part, well understood. Even so, the version-to-version changes in the OpenFlow protocol are affecting vendor support, since protocol changes often mean hardware changes to support the new features.
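To see why that problem space is tractable, here is a toy match-action table in Python. The field names are simplified stand-ins for the match fields OpenFlow actually defines, and the wildcard and priority semantics are a rough approximation of how a flow table behaves – a sketch of the concept, not the protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    """Simplified match pattern; None means wildcard (match anything)."""
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    tcp_dst: Optional[int] = None

@dataclass
class FlowEntry:
    match: Match
    out_port: int       # the action: forward out this port
    priority: int = 0

FIELDS = ("src_ip", "dst_ip", "tcp_dst")

def lookup(table: list[FlowEntry], pkt: dict) -> Optional[int]:
    """Return the out_port of the highest-priority matching entry, or None."""
    hits = [
        e for e in table
        if all(getattr(e.match, f) in (None, pkt.get(f)) for f in FIELDS)
    ]
    if not hits:
        return None  # table miss: in the classic OpenFlow model, punt to the controller
    return max(hits, key=lambda e: e.priority).out_port

table = [
    FlowEntry(Match(dst_ip="10.0.2.20"), out_port=3, priority=10),
    FlowEntry(Match(tcp_dst=80), out_port=1, priority=5),
]
pkt = {"src_ip": "10.0.1.10", "dst_ip": "10.0.2.20", "tcp_dst": 80}
print(lookup(pkt=pkt, table=table))  # prints 3: the higher-priority entry wins
```

The behavior is bounded: match, act, or miss. Pinning down exactly those semantics is hard work, but it is the kind of work standards bodies know how to do.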
The northbound interface is far more open because the problem space is much larger. If the industry assumes there will be one controller per network (an HA pair or a cluster counts as one controller), that assumption either limits what the controller can do or overcommits what it must do. If three applications in a data center all want to program the network, does the controller sort out their competing demands, or is that left to some other system? Which is preferable: a smart controller that takes multiple inputs and coalesces them into a response in the network, or a controller that is largely responsible for taking commands from a northbound device and then producing the appropriate network configuration? That’s just one example. Granted, some actions coming from a northbound device will be uniform across all devices, but I bet many more will be application-specific.
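To show how quickly that question turns into policy, here is a naive sketch of the “smart controller” option: a resolver that coalesces conflicting requests using a static application precedence. The application names, the precedence order, and the tie-breaking rule are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PolicyRequest:
    app: str                    # which northbound application asked
    flow: tuple[str, str]       # (src, dst) pair the request applies to
    bandwidth_mbps: int

# Invented precedence order: which application wins a conflict.
# In practice this is exactly the open question -- who decides?
APP_PRECEDENCE = {"security-monitor": 0, "load-balancer": 1, "backup-scheduler": 2}

def coalesce(requests: list[PolicyRequest]) -> dict[tuple[str, str], PolicyRequest]:
    """Keep one request per flow: the one from the highest-precedence app."""
    winners: dict[tuple[str, str], PolicyRequest] = {}
    for req in requests:
        current = winners.get(req.flow)
        if current is None or APP_PRECEDENCE[req.app] < APP_PRECEDENCE[current.app]:
            winners[req.flow] = req
    return winners

demands = [
    PolicyRequest("load-balancer", ("10.0.1.10", "10.0.2.20"), 500),
    PolicyRequest("backup-scheduler", ("10.0.1.10", "10.0.2.20"), 900),
]
for flow, req in coalesce(demands).items():
    print(flow, "->", req.app, req.bandwidth_mbps, "Mbps")
```

Even this toy resolver bakes in a decision – a fixed precedence order – that a northbound standard written today would have to either mandate or leave undefined, and that is exactly the kind of requirement the industry hasn’t settled yet.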
Let’s not clamor for the restrictions of a standardized northbound interface until the industry – vendors and customers – figures out what the requirements are.