With a Hardware VTEP being implemented in, well, hardware, how things work depends on the capabilities of the underlying chipset. This means that when we design solutions using these products, we need to keep those capabilities in mind and configure things accordingly.
In this short post, I’ll cover a situation we encountered at one of our customers where things “should” have worked but didn’t, and explain why.
Some time ago, the AWS Partner Network Blog published a couple of articles that cover AWS Virtual Private Cloud (VPC) networking in great detail, with a bunch of links to further info. Best of all, they were written by a networking person, for readers with a networking background in mind.
While we all know how to use our favourite search engine, a little promotion sometimes goes a long way. 🙂 So, here they are:
Amazon VPC for On-Premises Network Engineers, Part One
Amazon VPC for On-Premises Network Engineers, Part Two
Happy reading! 🙂
Last time, we talked about the concept of Infrastructure as Code (IaC) and introduced the two most prominent tools in the space: AWS CloudFormation and HashiCorp Terraform.
In this post, we’ll have a look at an AWS CloudFormation template that you can use to deploy a cluster of 2 x Brocade Virtual Traffic Managers with WAF into a new AWS VPC; what makes up that template; and how it all works.
We will take things quite slowly here. Some basic understanding of automation and/or scripting/programming will help, but is not strictly necessary.
If, on the other hand, you’re already well-versed in AWS CloudFormation but still interested in automating the deployment of Brocade Virtual Traffic Managers in AWS, feel free to jump straight to the GitHub repo, and optionally read the vADC EC2 Instances section below.
Please note that this is a work in progress and the code you’ll find there has no official support at this time; but rest assured, it is coming! 🙂
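Before we dig into the full template, it may help to see the basic shape of a CloudFormation template in isolation. The fragment below is a deliberately minimal, illustrative sketch — it is not the vADC template from the repo, and the names and CIDR ranges are made up — showing the three sections we’ll keep running into: Parameters (user input), Resources (what gets created, here just a VPC and one subnet), and Outputs (what the stack reports back):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template - one VPC with a single subnet

Parameters:
  VpcCidr:
    Type: String
    Default: 10.0.0.0/16
    Description: CIDR block for the new VPC (illustrative default)

Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCidr
      EnableDnsSupport: true
      EnableDnsHostnames: true

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24

Outputs:
  VpcId:
    Description: ID of the newly created VPC
    Value: !Ref Vpc
```

The real template we’ll walk through does a lot more (instances, security groups, WAF, and so on), but it is built from exactly these same building blocks.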
This post is the next in a series on using HW VTEPs with NSX-v. You can find the earlier posts here: 1, 2, and 3.
Today we’ll look at a couple of choices you’ll need to make when deploying Brocade’s HW VTEP, and then check if our configuration is correct before linking it with NSX-v next time.
In a couple of previous blog posts, we’ve looked at the use cases for HW VTEPs. Now, let’s start digging a bit deeper.
In this instalment, we’ll have a look at what you need to think about when planning your Hardware VTEP deployment. While I’ll be using Brocade VCS as the HW VTEP for this post, some of this info should be applicable to other vendors’ solutions.
…which includes the session on NSX troubleshooting methodology that I presented.
To find my session called “NET5488 – Troubleshooting Methodology for VMware NSX”, search for “NET5488”. Both US and Europe versions are available.
In this session, I walk through how NSX-v’s VXLAN-based logical switching works, what commands you can use to see what’s happening under the covers, and how to make sense of what you’re seeing. This should provide a good base for your troubleshooting practice.
(Hat tip to @ericsiebert and @scott_lowe for the info that sessions are now online, free for all)
It is fairly common for VMs to talk to SMB / NFS shares hosted by physical storage arrays, or send their backups to physical appliances.
In this blog post, we’ll have a look at connectivity options for this use case.
VMware NSX for vSphere has been shipping beta support for hardware VTEPs since version 6.2.0, with General Availability (GA) coming in the next few months. With this in mind, I thought it would be useful to provide an overview of HW VTEP use cases and considerations.
NSX for vSphere supports three VXLAN Control Plane modes:
- Multicast (described in Section 4 of RFC 7348);
- Unicast; and
- Hybrid.
None of these is “simply better” than the others; each has its positives and negatives. In this post, I’m covering how each mode works, along with some of those negatives and positives, to hopefully help you make a better-informed choice for your circumstances.
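To make the trade-off a bit more concrete, here is a toy sketch — plain Python of my own, not anything NSX ships — that counts how many copies of a BUM (broadcast/unknown-unicast/multicast) frame the *source* host has to put on the wire in each mode. It models the commonly described behaviour: multicast mode offloads all replication to the physical network; unicast mode does head-end replication to every local VTEP plus one copy per remote segment (to that segment’s UTEP); hybrid mode uses L2 multicast locally plus one unicast copy per remote segment (to its MTEP):

```python
def source_copies(mode: str, local_vteps: int, remote_segments: int) -> int:
    """Toy model: copies of a BUM frame the source host sends.

    mode            -- "multicast", "unicast", or "hybrid"
    local_vteps     -- number of OTHER VTEPs in the source's L2 segment
    remote_segments -- number of remote segments with interested VTEPs
    """
    if mode == "multicast":
        # One copy onto the multicast group; the physical
        # network handles all replication from there.
        return 1
    if mode == "unicast":
        # Head-end replication: one unicast copy per local VTEP,
        # plus one copy to a UTEP in each remote segment, which
        # then replicates to its local peers.
        return local_vteps + remote_segments
    if mode == "hybrid":
        # L2 multicast covers the local segment (one copy), plus
        # one unicast copy to an MTEP in each remote segment.
        return 1 + remote_segments
    raise ValueError(f"unknown mode: {mode}")

# Example: 4 other local VTEPs and 3 remote segments.
for mode in ("multicast", "unicast", "hybrid"):
    print(mode, source_copies(mode, local_vteps=4, remote_segments=3))
```

Even this crude model shows the shape of the trade-off: multicast is cheapest for the host but demands multicast support from the physical network, unicast asks nothing of the underlay but scales with the number of VTEPs, and hybrid sits in between, needing only L2 (IGMP) multicast.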
As part of NSX preparation for logical switching and routing, it is necessary to define at least one Transport Zone (from here on – “TZ”).
It is obvious from the UI that TZ configuration includes default VXLAN Control Plane mode and a list of ESXi clusters; but what does it actually do?
Let’s find out.