With a Hardware VTEP being implemented in, well, hardware, how things work depends on the capabilities of the underlying chipset. This means that when we design solutions using these products, we need to keep those capabilities in mind and configure things accordingly.
In this short post I’ll cover a situation we encountered at one of our customers where things “should” have worked but didn’t, and explain why.
This post is the next in a series on using HW VTEPs with NSX-v. You can find the earlier posts here: 1, 2, and 3.
Today we’ll look at a couple of choices you’ll need to make when deploying Brocade’s HW VTEP, and then check if our configuration is correct before linking it with NSX-v next time.
In a couple of previous blog posts, we’ve looked at the use cases for HW VTEPs. Now, let’s start digging a bit deeper.
In this instalment, we’ll have a look at what you need to think about when planning your Hardware VTEP deployment. While I’ll be using Brocade VCS as the HW VTEP for this post, some of this info should be applicable to other vendors’ solutions.
…which includes the session on NSX troubleshooting methodology that I presented.
To find my session called “NET5488 – Troubleshooting Methodology for VMware NSX”, search for “NET5488”. Both US and Europe versions are available.
In this session, I walk through how NSX-v’s VXLAN-based logical switching works, what commands you can use to see what’s happening under the covers, and how to make sense of what you’re seeing. This should provide a good base for your troubleshooting practice.
(Hat tip to @ericsiebert and @scott_lowe for the info that sessions are now online, free for all)
It is fairly common for VMs to talk to SMB / NFS shares hosted by physical storage arrays, or send their backups to physical appliances.
In this blog post, we’ll have a look at connectivity options for this use case.
VMware NSX for vSphere has been shipping beta support for hardware VTEPs since version 6.2.0, with General Availability (GA) coming in the next few months. With this in mind, I thought it would be useful to provide an overview of HW VTEP use cases and considerations.
Nested ESXi is a staple of resource-strapped labs. There is, however, a little something that’s worth keeping in mind when using NSX-v / VXLAN.
#NET5488: Troubleshooting Methodology for VMware NSX (for vSphere, v6.2):
I designed the pack to be read, so hopefully you’ll find it useful even without the recording. Both the US and Europe VMworld sessions were recorded, and you should be able to watch them on vmworld.com.
Not sure which one (US or EU) went better, to be frank. 🙂
I know that NSX-v 6.2 has only just shipped, but I’ve already seen a few instances where somebody installing NSX in a multi-VC environment deploys the first NSX Manager appliance from OVF, and then proceeds to clone the fully deployed appliance for the Secondary instances.
The problem with this approach is that the UUID for NSX Manager is generated at the time of OVF deployment, which means all copies of NSX Manager created in this fashion may end up with the same UUID.
This will come back to bite you when it’s time to join the Secondary NSX Managers to the Primary: your Secondary NSX Managers will fail to import the Controller cluster. This is because, at the time of import, NSX Manager checks the UUID of the NSX Manager that deployed the Controllers, sees its own UUID, but then fails to find any record of these Controllers in its own database (duh).
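The failure mode can be illustrated with a toy sketch. Everything here (the `NsxManagerSim` class, its fields and methods) is made up for illustration and is not a real NSX API; the point is simply that a UUID minted once at deployment time is copied verbatim by a clone, so the clone later sees “its own” UUID on Controllers it has no record of:

```python
import copy
import uuid


class NsxManagerSim:
    """Toy stand-in for an NSX Manager appliance (illustrative, not real NSX)."""

    def __init__(self):
        # The UUID is generated once, at OVF deployment time.
        self.uuid = str(uuid.uuid4())
        # Records of the Controllers this Manager itself deployed.
        self.controller_db = {}

    def deploy_controller(self, name):
        self.controller_db[name] = {"deployed_by": self.uuid}

    def import_controllers(self, primary):
        # At import time, the Manager checks who deployed each Controller.
        for name, record in primary.controller_db.items():
            if record["deployed_by"] == self.uuid and name not in self.controller_db:
                # "That's my UUID... but I have no record of this Controller."
                raise RuntimeError(f"import of {name} failed: duplicate UUID")
        return True


primary = NsxManagerSim()
cloned_secondary = copy.deepcopy(primary)   # clone copies the UUID verbatim
fresh_secondary = NsxManagerSim()           # fresh OVF deploy: new, distinct UUID
primary.deploy_controller("controller-1")

print(fresh_secondary.import_controllers(primary))   # True: distinct UUID, import OK
try:
    cloned_secondary.import_controllers(primary)
except RuntimeError as err:
    print(err)   # import of controller-1 failed: duplicate UUID
```

Deploying each Secondary fresh from OVF (the supported workflow) gives each appliance its own UUID, and the import check passes.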
You can check whether your Secondary NSX Manager has successfully imported your Controllers by connecting to that NSX Manager’s VM via SSH, and running the following CLI command:
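On NSX-v 6.x, that is typically `show controller list all`, which lists the Controller nodes the Manager knows about; a successfully joined Secondary should show the imported Controllers in the output. Treat the exact syntax as an assumption and verify it against your version’s CLI reference:

```
nsx-manager> show controller list all
```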
P.S. I hope you did read the NSX Installation Guide, and found this post completely redundant. 😉 What I’ve just talked about is covered in the 4th paragraph right here.
Update: this post has been updated to clarify that the cloning process may result in a duplicate UUID, not necessarily will. One case where it will is when the NSX Manager VM is cloned in a vCloud Director instance that does not have CloneBiosUuidOnVmCopy set to false. A big “thank you” and full credit for this clarification go to @rbudavari.
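For reference, a vCD configuration property like this is normally set with the cell-management-tool on a vCloud Director cell. The install path and exact invocation below are assumptions; check the cell-management-tool documentation for your vCD version before running anything:

```shell
# Assumed path and syntax -- verify against your vCloud Director docs.
# Stop vCD from copying the BIOS UUID on VM clone operations:
/opt/vmware/vcloud-director/bin/cell-management-tool manage-config \
    -n CloneBiosUuidOnVmCopy -v false
```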
This article has been re-published as a KB: http://kb.vmware.com/kb/2122060