Since Nicira came out of “stealth” earlier this week, a lot has been said about SDNs, from hand-wringing to nice balanced analysis; the common topic that keeps getting kicked around, though, is “what does it mean for traditional networking?”
Here’s my feeble attempt at it: I think that both SDNs and traditional networking (let’s call it “MPLS” for the sake of this post) have their weak and strong sides, and could potentially be combined in a meaningful way to reap the benefits of both sides of the fence.
Michael Bushong succinctly put it in his excellent post:
“Nicira’s solution demonstrates that as an industry we should be adding value with the network, by becoming more dynamic and integrated with the overarching service that pays for the pipes…”
So, what might this mean? How about the ability for something like Open vSwitch, instead of simply establishing GRE tunnels to other OVS instances over the underlying IP transport and “hoping for the best”, to actually signal its connectivity requirements to the transport network (yeah, hello, RSVP): desired bandwidth, class of service, protection level, minimum SLG, and so on?
Imagine a world where your OVS can ask its IP/MPLS transport uplink for things like “I need a protected link with a maximum RTT of 5 ms that can carry 2 Gbps of important traffic, please”, and where the transport network would take care of it: establishing a connection, reserving the necessary bandwidth, monitoring the connection’s performance, and re-routing it if it starts trending towards breaking the 5 ms RTT SLG.
Imagine being able to ask your transport network for a temporary allocation (yeah, hello, “on-demand service”; sounds quite cloudy, eh?) of either high-priority or large amounts of bandwidth (or both), where it could not only reserve and provide it to you, but also send you a utilisation report that can be used for chargeback later. What about asking the transport to provide a circuit with a “minimum cost” (as in monetary cost)?
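To make the idea a bit more tangible, here is a purely hypothetical sketch of what such a request could look like from the vSwitch side. None of these names exist in OVS or in any RSVP-TE implementation; they are illustrative placeholders for the kind of intent a vSwitch could hand down to a signalling agent on its transport uplink.

```python
from dataclasses import dataclass

@dataclass
class PathRequest:
    """Hypothetical connectivity request a vSwitch could hand to its transport uplink."""
    bandwidth_mbps: int      # desired reserved bandwidth
    max_rtt_ms: float        # SLG: maximum round-trip time the path must honour
    protected: bool          # ask for a protection path (e.g., fast reroute)
    cos: str                 # class of service for the reserved path
    on_demand: bool = False  # temporary allocation, metered for later chargeback

def to_signalling_stub(req: PathRequest) -> dict:
    """Flatten the request into a dict a (hypothetical) RSVP-TE-style agent could consume."""
    return {
        "bandwidth-bps": req.bandwidth_mbps * 1_000_000,       # bits per second
        "setup-priority": 0 if req.cos == "high" else 4,       # illustrative mapping
        "fast-reroute": req.protected,
        "rtt-bound-ms": req.max_rtt_ms,
        "metered": req.on_demand,
    }

# The 2 Gbps / 5 ms example from the post:
req = PathRequest(bandwidth_mbps=2000, max_rtt_ms=5.0, protected=True, cos="high")
print(to_signalling_stub(req))
```

The point isn’t the exact fields, it’s the direction of the dialogue: the overlay states what it needs, and the transport is free to satisfy it however it likes (LSPs, reservations, re-routing).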
An attentive reader could say, “but we don’t care about this inside a single mega-DC”, and would be partially right. However, the number of Service Providers with mega-DCs in the world is quite small compared to the number of smaller SPs targeting the huge Enterprise market. And for them, something like this could be a real benefit.
And even in a mega-DC, not all traffic flows are created equal (e.g., management traffic vs. CIFS/NFS), and benefits could be reaped from a constructive dialogue, as opposed to “we don’t care what you, lowly transport, are doing down there, and we’ll just tunnel all over your sore last-century ass”.
P.S. To clarify, in the scenario above, more than one tunnel might potentially be in use between the same pair of vSwitches, to accommodate the different traffic requirements of the VMs served by those vSwitches.
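A minimal sketch of that P.S., with made-up tunnel names: each traffic class between a vSwitch pair maps to a tunnel whose transport reservation matches that class, with a best-effort fallback.

```python
# Hypothetical per-class tunnel table for one pair of vSwitches;
# the tunnel names are invented for illustration only.
tunnels = {
    "management": "gre-tun0",  # low bandwidth, high priority, protected
    "storage":    "gre-tun1",  # CIFS/NFS: high bandwidth, latency-sensitive
    "default":    "gre-tun2",  # best-effort, no reservation
}

def pick_tunnel(traffic_class: str) -> str:
    """Select the tunnel whose transport reservation matches the flow's class."""
    return tunnels.get(traffic_class, tunnels["default"])

print(pick_tunnel("storage"))   # storage flows ride the latency-sensitive tunnel
print(pick_tunnel("telemetry")) # unknown classes fall back to best-effort
```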
P.P.S. Add pictures of unicorns and rainbow poo to your taste. images.google.com is your friend, just don’t forget to wear your DNT+ tinfoil hat. 🙂