…which includes the session on NSX Troubleshooting methodology that I presented.
To find my session called “NET5488 – Troubleshooting Methodology for VMware NSX”, search for “NET5488”. Both US and Europe versions are available.
In this session, I walk through how NSX-v’s VXLAN-based logical switching works, what commands you can use to see what’s happening under the covers, and how to make sense of what you’re seeing. This should provide a good base for your troubleshooting practice.
(Hat tip to @ericsiebert and @scott_lowe for the info that sessions are now online, free for all)
It is fairly common for VMs to talk to SMB / NFS shares hosted by physical storage arrays, or send their backups to physical appliances.
In this blog post, we’ll have a look at connectivity options for this use case.
VMware NSX for vSphere has been shipping beta support for hardware VTEPs since version 6.2.0, with General Availability (GA) coming in the next few months. With this in mind, I thought it would be useful to provide an overview of hardware VTEP use cases and considerations.
Nested ESXi is a staple of resource-strapped labs. There is, however, a little something that’s worth keeping in mind when using NSX-v / VXLAN.
For a little while, I was bummed by poor performance of a Windows 8.1 guest running in Fusion 8.1 on El Capitan. Not sure which one of these was the main contributor, but the experience was painful – switching between apps was taking a few seconds, which just didn’t seem right considering it was running on a current MacBook Pro with plenty of RAM, CPU power, and a fast SSD.
Long story short – looks like I’ve found a cure. This solution worked like magic for me:
For those not willing to visit the source – I added the following to the vmx file:
mainMem.backing = "swap"
scsi0:0.virtualSSD = "1"
MemTrimRate = "0"
sched.mem.pshare.enable = "FALSE"
MemAllowAutoScaleDown = "FALSE"
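If you want to apply the tweak to more than one VM, it can be scripted. A minimal sketch (the script name and the idempotency check are my own additions, not from the original post; shut the VM down first, since Fusion rewrites the .vmx of a running VM):

```shell
#!/bin/sh
# Append the Fusion performance tweaks to a .vmx file, skipping keys
# that are already present (duplicate keys can confuse Fusion).
# Usage: ./tweak-vmx.sh /path/to/VM.vmwarevm/VM.vmx

add_setting() {  # add_setting <vmx-file> <key> <value>
  grep -q "^$2" "$1" || printf '%s = %s\n' "$2" "$3" >> "$1"
}

apply_tweaks() {
  add_setting "$1" 'mainMem.backing'          '"swap"'
  add_setting "$1" 'scsi0:0.virtualSSD'       '"1"'
  add_setting "$1" 'MemTrimRate'              '"0"'
  add_setting "$1" 'sched.mem.pshare.enable'  '"FALSE"'
  add_setting "$1" 'MemAllowAutoScaleDown'    '"FALSE"'
}

[ -n "$1" ] && apply_tweaks "$1"
```

Re-running the script is harmless: each key is only appended if it isn’t already in the file.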
Looks like these settings have originated from the following post:
Happy computing! 🙂
#NET5488: Troubleshooting Methodology for VMware NSX (for vSphere, v6.2):
I designed the pack to be read, so hopefully you’ll find it useful even without the recording. For those who attended VMworld, both US and Europe sessions were recorded, and you should be able to watch them on vmworld.com.
Not sure which one (US or EU) went better, to be frank. 🙂
I know that NSX-v 6.2 just hit, but I’ve already seen a few instances where somebody installing NSX in a multi-vCenter environment deploys the first NSX Manager appliance from OVF, and then clones the fully deployed appliance for the Secondary instances.
The problem with this approach is that the UUID for NSX Manager is generated at the time of OVF deployment, which means all copies of NSX Manager created in this fashion may end up with the same UUID.
This will come back to bite you when it’s time to join the Secondary NSX Managers to the Primary: your Secondary NSX Managers will fail to import the Controller cluster. At import time, NSX Manager checks the UUID of the NSX Manager that deployed the Controllers, sees its own UUID, but then fails to find any record of these Controllers in its own database (duh).
You can check whether your Secondary NSX Manager has successfully imported your Controllers by connecting to that NSX Manager’s VM via SSH, and running the following CLI command:
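If I recall the 6.2 central CLI correctly, the check looks something like the snippet below; treat the exact syntax as an assumption and verify it against the NSX CLI reference for your version:

```
nsxmgr> show controller list all
```

If the import succeeded, each Controller node should be listed; an empty list on a Secondary NSX Manager is the symptom described above.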
P.S. I hope you did read the NSX Installation Guide, and found this post completely redundant. 😉 What I’ve just talked about is covered in the 4th paragraph right here.
Update: post updated to clarify that the cloning process may result in a duplicate UUID, not that it necessarily will. One case where it will is when the NSX Manager VM is cloned in vCloud Director without CloneBiosUuidOnVmCopy set to false. Big “Thank You” and full credit for this clarification goes to @rbudavari.