Picking the right abstractions for your Network Virtualisation solution

Intro

In my travels around the internets, I have grown increasingly frustrated that most descriptions of SDN and network virtualisation solutions dive right into the specifics of how stuff works. While I'm all for the details, I feel there is an opportunity here to step back a bit and talk about the abstractions, which are what the end user will actually see and deal with. In this post (and yes, by association), I will talk about the abstractions used by perhaps the most mature network virtualisation solution on the market today.

And yes, this means that I won’t be talking about how that stuff works. I promise. :)

Update 4 Aug: Post lightly edited for clarity based on great input from T. Sridhar – Thank You!

Taking a plunge: a pool or a deep ocean?

The introduction of virtualised networking can be seen as a good opportunity to develop and adopt new, better basic network building blocks. While this is true, it is also fair to say that a double transition (introducing virtual networking and switching to new constructs at the same time) may in many cases prove to be too much all at once (risk, learning curve, operational complexity, etc.), which could result in slower adoption and a longer time to benefit.

A less risky approach is to take the familiar abstractions and enhance them. For example, a virtual machine, while looking very much like a physical server to the OS and applications running inside it, can be "paused" while running and "un-paused" later without losing any work; moved live to a different host or even a different location; cloned; given more CPU and memory resources and additional "interface cards" without any actual physical changes; and so on.

Network virtualisation can follow a similar approach: recreate and enhance the familiar constructs, making them easier and quicker to create, change, and operate.

NOTE

The lunch is never entirely free, of course. Network virtualisation, just like server (and storage) virtualisation, takes something away (direct visibility of hardware, some performance) before giving something back in exchange. However, the net value created by a successful virtualisation undertaking is always positive; otherwise it wouldn't be called successful, right?

Case in question: Nicira Network Virtualisation Platform (NVP) constructs

The NVP team has chosen the path of enhancing the familiar. The resulting solution is built around the following constructs:

  • Logical Switches and Logical Switch Ports;
  • Logical Routers and Logical Router Ports;
  • Virtual Interfaces (VIFs); and
  • Attachments.

The sections below will draw parallels with the physical world for each of the above, and highlight any notable differences and enhancements.

Logical Switches and Logical Switch Ports

An individual Logical Switch can be thought of as a single Ethernet segment that can reach into any device where an OVSDB endpoint, such as an instance of Open vSwitch (OVS), is present: hypervisor hosts, physical hosts, Top-of-Rack (ToR) switches, virtual appliances running in private or public clouds, and even mobile endpoint devices. The only prerequisite is IP connectivity between them, and not necessarily in a full-mesh fashion.
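
To make this a little more concrete, here is a minimal sketch of what creating a Logical Switch might look like through a REST API in the style of NVP's. This is illustrative only: the controller address, credentials, URL path, and response field are my assumptions, not the documented API.

    import requests

    # Hypothetical controller address and credentials, for illustration only.
    NVP_CONTROLLER = "https://nvp-controller.example.com"
    session = requests.Session()
    session.auth = ("admin", "secret")
    session.verify = False  # lab-only convenience; verify certificates in production

    # Create a Logical Switch; the /ws.v1/lswitch path and the "uuid" response
    # field are assumptions modelled on NVP's API style.
    resp = session.post(
        f"{NVP_CONTROLLER}/ws.v1/lswitch",
        json={"display_name": "web-tier-segment"},
    )
    resp.raise_for_status()
    lswitch_id = resp.json()["uuid"]
    print("Created Logical Switch:", lswitch_id)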

Logical Switch Ports are similar in function to their physical counterparts, but lack their PHY aspects (such as duplex and speed settings). The following functions are implemented at the Logical Switch Ports:

  • Access controls (Security Profiles);
  • Traffic conditioning (QoS);
  • Link status monitoring;
  • Port statistics; and
  • Port mirroring.
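
As a rough sketch of how these port-level functions might be wired up, continuing the hypothetical REST style from above (reusing session, NVP_CONTROLLER, and lswitch_id; all paths, field names, and placeholder UUIDs are assumptions):

    # Create a Logical Switch Port, associating a pre-existing Security Profile
    # (access controls) and a QoS queue (traffic conditioning) with it.
    port_resp = session.post(
        f"{NVP_CONTROLLER}/ws.v1/lswitch/{lswitch_id}/lport",
        json={
            "display_name": "web-vm-01-eth0",
            "security_profiles": ["<security-profile-uuid>"],
            "queue_uuid": "<qos-queue-uuid>",
        },
    )
    port_resp.raise_for_status()
    lport_id = port_resp.json()["uuid"]

    # Link status and port statistics can then be read back per port.
    stats = session.get(
        f"{NVP_CONTROLLER}/ws.v1/lswitch/{lswitch_id}/lport/{lport_id}/statistics"
    ).json()
    print(stats)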

NOTE

Physical network interfaces of devices where a Logical Switch is present (such as a NIC on a server or physical ports on a ToR) never directly form part of a Logical Switch. They can be connected to Logical Switch Ports via Attachments, which are discussed further below.

Functionally, a Logical Switch does exactly what one would expect: it provides any-to-any L2 connectivity between its Logical Ports, subject to any configured access control and traffic conditioning policies.

Logical Switches and Logical Switch Ports are constructs created in software. This gives Logical Switch Ports and their associated policies and port statistics the ability to "follow" the endpoints connected to them, with no operator action required. For example, if a Virtual Machine (VM) connected to a Logical Switch Port is moved to a different physical host, all access control and traffic management policies will follow the VM to its new place, and all traffic counters associated with that Logical Switch Port will be preserved and continue to be updated.
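
One way to picture why this works is that the policies and counters hang off the Logical Switch Port object itself, rather than off any physical host. A toy data model (my own illustration, not NVP code) makes the point:

    from dataclasses import dataclass

    @dataclass
    class LogicalSwitchPort:
        uuid: str
        security_profile: str   # access controls live on the port
        qos_queue: str          # traffic conditioning lives on the port
        rx_packets: int = 0     # counters persist across migrations
        tx_packets: int = 0
        current_host: str = ""  # the only thing a migration changes

    port = LogicalSwitchPort("lp-1", "web-sp", "gold-queue", current_host="hv-01")
    port.current_host = "hv-02"  # VM migrated: policies and statistics untouched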

VLAN support in Logical Switches

Because multiple Logical Switches, Logical Ports, and Virtual Interfaces in VMs and on OVS hosts can be instantiated in software, there is no need to support trunking and the associated complexity of dealing with sub-topologies. Logical Switches do not support sub-segmentation into further logical connectivity topologies such as VLANs: an individual Logical Switch corresponds to exactly one L2 segment, similar to a single VLAN.

This approach helps to untangle individual logical application topologies from the networking infrastructure that supports them. Each Logical Switch along with its Logical Ports can be independently brought up, changed, and torn down without creating unnecessary dependencies for other applications.

NOTE

There is a class of workloads with a genuine need for VLAN support on their network connections. Most notably, these are workloads that were designed to run on bare-metal servers, such as hypervisors and some service appliances; for example, software firewalls and traffic managers that use VLAN tags to separate network access into contexts or policy domains.

They can still be virtualised and benefit from network virtualisation, but their scalability is likely to be limited by the number of virtual interfaces that will need to be created on the VMs hosting these workloads.
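
In other words, each former VLAN becomes its own VIF, attached to its own Logical Switch. A hypothetical planning sketch (all names are mine, for illustration only):

    # Map each VLAN-separated context of a legacy workload to a dedicated VIF,
    # each attached to its own Logical Switch.
    vlan_contexts = {100: "dmz", 200: "app", 300: "db"}

    vifs = [
        {"vif": f"eth{i}", "logical_switch": f"ls-{name}"}
        for i, (tag, name) in enumerate(sorted(vlan_contexts.items()))
    ]
    print(vifs)
    # One VIF (and one Logical Switch) per former VLAN; scalability is bounded
    # by how many virtual interfaces the hosting VM supports, as noted above.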

Logical Routers and Logical Router Ports

Logical Routers are best thought of as "old-school" physical router devices converted into a virtual form factor. Their primary purpose is to perform Layer 3 forwarding between the IP segments associated with the Logical Switches attached to them. Just as with Logical Switches, it is possible to connect to a Logical Router Port from any location with an OVSDB endpoint that is reachable via IP.

Logical Router Ports provide the means of connecting Logical Routers into the logical network topologies, and implement the following functions:

  • Network Address Translation (NAT); and
  • Port statistics.
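
For instance, configuring NAT on a Logical Router might look like this in the same hypothetical REST style as the earlier sketches (reusing session and NVP_CONTROLLER; the path, rule type, and field names are assumptions, and lrouter_id is assumed to hold an existing Logical Router's UUID):

    # Add a source-NAT rule to a Logical Router (illustrative sketch).
    nat_resp = session.post(
        f"{NVP_CONTROLLER}/ws.v1/lrouter/{lrouter_id}/nat",
        json={
            "type": "SourceNatRule",
            "match": {"source_ip_addresses": "10.0.0.0/24"},
            "to_source_ip_address_min": "192.0.2.10",
            "to_source_ip_address_max": "192.0.2.10",
        },
    )
    nat_resp.raise_for_status()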

As with Logical Switches, there is no need for Logical Routers to support segmentation or multiple routing tables. An individual Logical Router serves a single routed domain.

Logical Routers are completely separate entities from Logical Switches, which means there are no constructs similar to the “Switched Virtual Interfaces”, where an L2 segment has an “integrated” L3 gateway function. The only way for a network packet to reach a Logical Router Port and its associated L3 address and forwarding functions is via a Logical Switch Port, connected to that Logical Router Port using an Attachment.

This connectivity model provides a good balance between flexibility and complexity, and is straightforward to picture and reason about. All functions reasonably expected from such a model are supported. For example, multiple Logical Routers can be connected to the same Logical Switch, and vice versa. Or, a Logical Switch, along with its connected VMs, can be "re-parented" to a different Logical Router by simply removing its Attachment to the old Logical Router and creating an Attachment to the new one.
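
Expressed in the same hypothetical REST style, "re-parenting" is a change to a single object: the Attachment on the Logical Switch Port that faces the router. (The "PatchAttachment" type name and all placeholder UUIDs are assumptions.)

    def reparent_to_router(session, base, lswitch_id, lport_id, new_lrouter_port_id):
        """Point an existing Logical Switch Port at a different Logical Router
        Port by rewriting its Attachment (illustrative sketch)."""
        r = session.put(
            f"{base}/ws.v1/lswitch/{lswitch_id}/lport/{lport_id}/attachment",
            json={"type": "PatchAttachment", "peer_port_uuid": new_lrouter_port_id},
        )
        r.raise_for_status()

    # Re-parent the segment: same switch, same port, new router.
    reparent_to_router(session, NVP_CONTROLLER, lswitch_id,
                       "<router-facing-lport-uuid>", "<new-lrouter-port-uuid>")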

Logical Routers also take advantage of virtualisation:

  • They can be configured for High Availability without the need to manually deploy, place, connect, and configure multiple individual Logical Routers;
  • They can be easily replicated along with their configuration (think DR scenarios);
  • Their interfaces, or Logical Router Ports, can be easily created and destroyed, as needed;
  • Their performance and routing table capacity can be scaled up or down by changing host resources allocated to Logical Router instances, in most cases without causing an outage;
  • They can also easily benefit from increased processing power of newer x86 servers and CPUs.

NOTE

Individual workload VMs cannot be directly connected to Logical Router Ports; they can only be connected to Logical Switch Ports. While it is possible to connect a physical server directly to a physical router port, in practice this is rarely done, for reasons such as the higher cost of router interfaces and the potential recabling work involved should other endpoints later need access to the same router interface.

Virtual Interfaces (VIFs)

Virtual Interfaces, or VIFs, perform the same function as physical interfaces on physical servers – they connect virtual machines to Logical Switch Ports using Attachments.

As mentioned in the Logical Switch Port section above, VIFs do not support VLAN tagging because of the one-to-many arrangement it would create, which is not necessary in most cases where network virtualisation is used.

Attachments

Attachments can be thought of as “logical cables”. An Attachment is used to connect:

  • A VIF to a Logical Switch Port;
  • An external L2 segment to a Logical Switch Port, via an L2 Gateway Service;
  • An external L3 gateway to a Logical Router Port, via an L3 Gateway Service;
  • A Logical Switch Port to a Logical Switch Port in a different NVP domain (Multi-Domain Interconnect); and
  • A Logical Router Port to a Logical Switch Port.
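
Summarised as a small data model (descriptive names of my own, not NVP's actual type names), the five kinds of "logical cable" look like this:

    from enum import Enum

    class AttachmentKind(Enum):
        VIF = "VIF -> Logical Switch Port"
        L2_GATEWAY = "external L2 segment -> Logical Switch Port"
        L3_GATEWAY = "external L3 gateway -> Logical Router Port"
        MULTI_DOMAIN = "Logical Switch Port -> Logical Switch Port (remote NVP domain)"
        PATCH = "Logical Router Port -> Logical Switch Port"

    for kind in AttachmentKind:
        print(f"{kind.name}: {kind.value}")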

The role of an Attachment is to track the objects it connects as they go through their lifecycle, and keep them correctly connected. For example, a virtual machine could be migrated to a different host that may be running a different hypervisor type, and its Attachment will take care of making sure it still works as expected.

Conclusion

The constructs described above are the cornerstone abstractions of Nicira/VMware's shipping network virtualisation solution.

It is still very early days, and I don't doubt there will be a lot of fine-tuning and tweaking to how stuff works in the future; but for now, that's it. Hope you find this useful.
