NSX for vSphere maintains a single set of Distributed Firewall rules per NSX Manager. By default, all active rules are applied to all vNICs of all Virtual Machines running on all clusters within the NSX Manager’s domain.
This isn’t always desirable; two cases come to mind: (a) large rule sets, not all of which apply to every single vNIC of every VM; and (b) overlapping IP addresses.
To clarify the second case: while in DFW you can use vCenter objects as rules’ Source and/or Destination, “under the covers” DFW always translates those objects into address sets, populated with IP addresses of those objects. So in the end the allow/deny decisions are made against IP addresses. This means that if a given IP address is used by more than one VM (think a multi-tenant environment, for example), there’s a clear risk of unintended firewall action.
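The mechanics of that translation can be illustrated with a short sketch. This is plain Python, not NSX code; the names `address_sets`, `rule`, and `evaluate` are invented for illustration, though the addrset name and IPs mirror the lab used later in this post:

```python
# Illustration only: a DFW rule written against a vCenter object
# ("Web-Tier-01") is internally expanded into an address set of IPs.
address_sets = {"ip-virtualwire-2": {"172.16.10.11", "172.16.10.12"}}

# Rule: drop traffic where both source and destination are Web-Tier-01.
rule = {"src": "ip-virtualwire-2", "dst": "ip-virtualwire-2", "action": "drop"}

def evaluate(src_ip, dst_ip):
    """Return the action for a packet; matching is done on IPs only."""
    if src_ip in address_sets[rule["src"]] and dst_ip in address_sets[rule["dst"]]:
        return rule["action"]
    return "allow"

# The original web-sv VMs are dropped, as intended:
print(evaluate("172.16.10.11", "172.16.10.12"))  # drop

# But a VM on a *different* logical switch that happens to reuse one of
# these IPs matches the very same rule -- the unintended action above:
print(evaluate("172.16.10.12", "172.16.10.11"))  # drop
```

The point of the sketch: once the vCenter object has been flattened to IPs, the firewall has no way to tell two VMs with the same address apart.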
The “Applied To” field in DFW rules can be used to avoid this problem. That’s pretty much it. If you’re feeling adventurous, below the fold is a small walk-through demo of what I’m talking about above.
We’ll use the “ABC Medical” lab environment (which anyone who has taken VMware’s NSX or vCNS Hands-On Labs or training will be familiar with).
1) We start at the point where we have three logical switches – Web-Tier-01, App-Tier-01, and DB-Tier-01 – and VMs called “web-sv-01a” and “web-sv-02a” attached to the Logical Switch (LS) “Web-Tier-01”. The “ABC Medical” lab consists of three clusters – Management and Edge, Compute Cluster A, and Compute Cluster B. The two VMs web-sv-01a and web-sv-02a reside in Compute Cluster A. All clusters are members of the same Transport Zone.
2) Let’s clone the two VMs and put them into Compute Cluster B. We’ll call them “web-clone1” and “web-clone2”:
3) Let’s attach them to the “DB-Tier-01” LS and power them on.
The point of this step is to create VMs with the same IP addresses as the existing VMs web-sv-01a and web-sv-02a, but connected to a different Logical Switch. Think of it as a different tenant in a multi-tenant deployment, or an application deployed from a template that includes IP address assignment, with individual copies of the application sitting behind a NAT.
4) Let’s confirm that we can ping between both original VMs and cloned VMs, and confirm that they are in fact different:
Notice the different MAC addresses associated with 172.16.10.12.
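If you want to reproduce the check from inside the guests, it looks roughly like this (a sketch assuming the lab’s addressing; adjust the IPs to your environment, and note that `ip neigh` is the iproute2 equivalent of the older `arp -n`):

```shell
# On web-sv-01a: ping the neighbour, then look at the ARP cache to see
# which MAC answered for 172.16.10.12 (lab addressing assumed).
ping -c 3 172.16.10.12
ip neigh show 172.16.10.12   # or: arp -n 172.16.10.12

# Repeat the same two commands on web-clone1. The IP is identical, but
# the MAC recorded in the ARP cache differs, confirming that we are
# talking to two distinct VMs on two different logical switches.
```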
5) Let’s create a firewall rule with Src = “Web-Tier-01”, Dst = “Web-Tier-01”, and Action = Drop:
6) …and ping again:
The pings are now dropped not only between the original web-sv VMs, but between the clones as well. So, why is this the case? Let’s take a look under the covers on the ESXi hosts.
First, let’s check out the attached vNICs on the host where web-sv-01a is running:
The output of the command shows filters for each vNIC of each VM running on the host where the command is executed.
To see the firewall rules applicable to a particular VM – for example, web-sv-01a – we’ll need to find the Filter Name corresponding to the vNIC of the VM we’re interested in.
To do that, use the command “summarize-dvfilter”, whose output includes the VM name and World ID, which you can correlate with the information above.
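Run on the ESXi host over SSH, the correlation looks roughly like this. The world ID and filter name below are invented for illustration, and the exact output layout varies between NSX versions:

```shell
# List all dvfilters on this host; each running VM appears as a "world",
# with its vNICs and their attached filters listed underneath.
summarize-dvfilter

# Illustrative fragment of the output (IDs/names made up):
#   world 1000123 vmm0:web-sv-01a vcUuid:'...'
#    port 50331655 web-sv-01a.eth0
#     vNic slot 2
#      name: nic-1000123-eth0-vmware-sfw.2
#
# "nic-1000123-eth0-vmware-sfw.2" is the Filter Name we need in order
# to query the firewall rules for that vNIC.
```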
OK, now we know which filter corresponds to which VM/vNIC, so let’s have a look at the filter contents:
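A sketch of the query, using the `vsipioctl` tool on the ESXi host; the filter name is illustrative (take yours from summarize-dvfilter), and the output shown is approximate:

```shell
# Show the DFW rules programmed on a particular vNIC filter.
vsipioctl getrules -f nic-1000123-eth0-vmware-sfw.2

# Expect to see the Drop rule expressed against an address set rather
# than against the vCenter objects we configured, e.g. (approximate):
#   rule 1006 at 1 inout protocol any from addrset ip-virtualwire-2
#     to addrset ip-virtualwire-2 drop
```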
Hmm. We know that db-sv-01a isn’t connected to Web-Tier-01, but its filter has the drop rule at the top nonetheless.
Let’s have a look at what the addrset “ip-virtualwire-2” expands to (if “-i” isn’t used, the command will show all addrsets for the given vNIC):
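A sketch of that step, again with an invented filter name and approximate output; the IPs match the lab’s Web-Tier-01 VMs:

```shell
# Expand the address sets referenced by the rules on this vNIC filter.
vsipioctl getaddrsets -f nic-1000123-eth0-vmware-sfw.2

# Approximate output: the set holds the IPs of the VMs attached to the
# Web-Tier-01 logical switch, and nothing identifies which VM each IP
# belongs to -- hence the clone problem.
#   addrset ip-virtualwire-2 {
#     ip 172.16.10.11,
#     ip 172.16.10.12,
#   }
```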
As expected, the addrset is the same for both vNICs, and contains the list of IPs of the VMs attached to the LS “Web-Tier-01”.
The situation is the same on the other host in “Compute Cluster A”, where VM web-sv-02a is running; but it is also the same in “Compute Cluster B”, where there are no VMs on the “Web-Tier-01” LS:
This is our clone VM, which has the same IP address; and while it isn’t attached to Web-Tier-01, it is subject to the firewall rule dropping the traffic.
So, here’s our problem.
The way to fix it is to set the “Applied To” field on our firewall rule:
Note the “Web-Tier-01” in the “Applied To” column.
Hint: the “Applied To” column is not shown by default. To make it appear, click the icon that looks like a calendar in the top-right corner of the screenshot above, and tick the corresponding checkbox.
So what’s the result? As expected, we still can’t ping between the servers web-sv-01a and web-sv-02a, but we now can between web-clone1 and web-clone2:
and on the ESXi hosts:
The rule is gone from the “Compute Cluster B” hosts:
and from the vNICs of the VMs that aren’t web-sv* VMs:
but not from the vNICs of the web-sv* VMs:
Thanks for reading! 🙂