Let’s Terraform the vTM: Part 1 / 4
The freshly-released Pulse Virtual Traffic Manager (vTM) v18.1 ships with a Terraform Provider for vTM. The provider offers 100% coverage of vTM’s REST API resources, and supports both API version 4.0 (compatible back to vTM 17.2) and API version 5.2 (covering the newest features that shipped with vTM 18.1).
In this 4-part post we’ll do a quick introduction of Terraform provider for vTM, and show how it can help you support the needs of your applications.
Hopefully Terraform needs little introduction. It is one of the most prominent Infrastructure as Code tools, able to provision a wide variety of IaaS, PaaS, SaaS, and on-prem services through the use of Providers. The role of a Provider is to take the details needed to create / read / update / delete resources specific to a particular service (e.g., AWS), and make them available through Terraform’s configuration language (HCL).
The beauty of this approach is that you can describe a collection of resources supported by any combination of Providers using the same language (HCL). And you can mix and cross-reference these resources in one template, too.
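For instance, a mixed template might look something like the sketch below – note that the resource types and attribute names here are illustrative only, and not verified against any particular provider release:

```hcl
# Hypothetical sketch: one template mixing two providers, with the vTM
# pool's node list cross-referencing an attribute of the AWS instance.
# Attribute names are illustrative.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
}

resource "vtm_pool" "web_pool" {
  name = "web-pool"
  nodes_table {
    node = "${aws_instance.web.private_ip}:80"
  }
}
```

Because the pool’s node references the instance’s IP, Terraform works out the dependency ordering for you: the instance is created first, and its IP flows into the vTM configuration.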
It’s worth noting that Pulse were clearly not the first ones to realise that this approach would fit very well with treating vTM as an infrastructure that provides load balancing “services” to applications sitting behind it. At this point I’m personally aware of two independent implementations of Terraform provider for vTM, created by customers themselves: One and Two.
So without further ado, let’s dive into it.
An example topology
For this blog I wanted a very simple topology that’s fairly typical: a single domain (`corp.com`) with a single property (`www`), and two backends – main (`/`) and API (`/api`).
A cluster of 2 x vTMs sitting in front of the infrastructure that serves this property provides a few basic functions:
- Terminates/offloads SSL
- Performs L7 routing based on the request Path
- Provides High Availability for the IPs that match the property’s FQDN by using Traffic IPs.

If you’d like to follow at home
I’ve created a Docker image (based on Alpine Linux) with Terraform itself plus both versions of the Terraform provider for vTM installed in `/usr/local/bin`. You can run a copy of it so that you can test things out as we go:
```
mkdir try-vtmtf && cd try-vtmtf
docker run -it --rm -v $PWD:/root/try-vtmtf -w /root/try-vtmtf dk114/try-vtmtf:1.1 /bin/ash
terraform --version
# You should see "Terraform v0.11.7"
```
The `docker run` command above starts a Docker container and mounts your current directory (`try-vtmtf`) under the path `/root/try-vtmtf`. Any changes you make to files in this directory from inside the container will persist on the host running the container. Likewise, any changes you make to files in this directory from outside the container will be immediately visible to programs running inside it (such as `terraform`). This means you can edit the files we’ll be working on using an editor on your host, and `terraform` inside the container will “see” your changes as soon as you save them to disk.
If you prefer to create your own setup, download the following:
- Terraform itself
- Terraform provider for vTM – linux64, or
- Unofficial build of Terraform provider for vTM – darwin64
The MacOS/darwin provider will come down named as `terraform-provider-vtm_v4.0.0-darwin`. You’ll need to rename it to remove the `-darwin` part, so it’s called just `terraform-provider-vtm_v4.0.0`, then make it executable (it’s a static binary), and place it somewhere in your `$PATH`. Note that the official instructions for installing plugins differ from this approach. I found that both work. Placing the plugin somewhere in `$PATH`, however, has the benefit of making it available to users system-wide without asking them to make their own copy.
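The rename-and-install steps can be sketched like so. The download filename comes from the text above; the install directory (`~/bin`) is just one possible `$PATH` location, and the `touch` stands in for the real download so the steps can be tried safely end-to-end:

```shell
# Placeholder for the downloaded binary -- replace with the real file.
touch terraform-provider-vtm_v4.0.0-darwin

# Rename to drop the "-darwin" suffix, make executable, install on $PATH.
mkdir -p ~/bin
mv terraform-provider-vtm_v4.0.0-darwin terraform-provider-vtm_v4.0.0
chmod +x terraform-provider-vtm_v4.0.0
mv terraform-provider-vtm_v4.0.0 ~/bin/
```

If `~/bin` isn’t already on your `$PATH`, add `export PATH="$HOME/bin:$PATH"` to your shell profile.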
You can continue working on our template in another Terminal window, in the directory you’ve created (`try-vtmtf`).
If you already have a copy of vTM you can play with – note down its management IP address, make sure REST API access is enabled (typically on port 9070), and that your machine can reach it. If not – you can get one like so:
docker run --name vTM --rm -e ZEUS_DEVMODE=yes -e ZEUS_EULA=accept -e ZEUS_PASS=abc123 -p 9090:9090 -p 9070:9070 -p 80:80 -p 443:443 --privileged -t -d dk114/pulse-vtm:17.4
This should download a Docker image with vTM 17.4, and run it in Developer Mode (all features on, throughput limited to 1Mbps). It sets the password for the `admin` user to `abc123`; change it as you see fit. 🙂 You should then be able to reach it from your terraform container on the IP of your machine (172.16.0.10 in my case):
```
curl -u admin:abc123 -k https://172.16.0.10:9070/
# I get the response:
# {"children":[{"name":"api","href":"/api/"}]}
```
If you got to this point successfully – sweet!
Let’s begin
Terraform works on templates, with a little twist: when you run `terraform <operation>`, it looks inside your current directory for all files named `*.tf`, combines them behind the scenes, and treats them as if they were a single file. So each template needs its own separate directory; in our case, the one we’ve created and changed into – `try-vtmtf`.
Most templates have one or more *variables* that can be set with default values, or given those values at run time. Terraform then takes those values, combines them with the template code, and calculates the final *desired state*, which is what it will work to achieve when you run `terraform apply`.
Also, templates are often shared between people, and to help readers quickly figure out which variables a template uses, it’s often useful to split the variable definitions into a separate file. Let’s do that, and create two files: `main.tf` and `variables.tf`:
```
# Inside our try-vtmtf directory:
touch main.tf variables.tf
```
Edit `main.tf`, and add the following block to it:
```hcl
provider "vtm" {
  base_url        = "https://${var.vtm_rest_ip}:${var.vtm_rest_port}/api"
  username        = "${var.vtm_username}"
  password        = "${var.vtm_password}"
  verify_ssl_cert = false
  version         = "~> 4.0.0"
}
```
This block of code tells terraform that we have a vTM (or maybe a vTM cluster) that can be reached at `https://${var.vtm_rest_ip}:${var.vtm_rest_port}`, authenticated to using `${var.vtm_username}` and `${var.vtm_password}`; that it’s using a self-signed cert (which we should nonetheless trust for now), and that we’ll talk to this vTM using REST API v4.0. At the time of writing this includes vTM versions 17.2x, 17.3, 17.4, and 18.1.
All these `${}` bits are *variables*, and we’ll need to define them. Let’s do that. Save your `main.tf` for now, edit `variables.tf`, and add the following to it:
```hcl
variable "vtm_rest_ip" {
  description = "IP or FQDN of the vTM REST API endpoint, e.g. '192.168.0.1'"
}

variable "vtm_rest_port" {
  description = "TCP port of the vTM REST API endpoint"
  default     = "9070"
}

variable "vtm_username" {
  description = "Username to use for connecting to the vTM"
  default     = "admin"
}

variable "vtm_password" {
  description = "Password of the $vtm_username account on the vTM"
}
```
As you can see, two of our variables – `vtm_rest_port` and `vtm_username` – have defaults. When deciding whether to set a default, I personally ask: “is this value going to be applicable in all or most cases where I’ll be using this template?” If yes, I set the default. If not, we’ll pass the value in at run time.
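Incidentally, rather than answering Terraform’s prompts for the variables without defaults on every run, you can put the values into a file named `terraform.tfvars` in the same directory, which Terraform reads automatically. The values below are placeholders; substitute your own:

```hcl
# terraform.tfvars -- example values only; substitute your vTM's details.
vtm_rest_ip  = "172.16.0.10"
vtm_password = "abc123"
```

You can also pass values on the command line, e.g. `terraform plan -var 'vtm_rest_ip=172.16.0.10'`.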
Ok, it’s time to give it a go, even though our template doesn’t do anything useful yet! Save both files, and from inside your try-vtmtf container check that you’re in the `/root/try-vtmtf` directory and can see your `main.tf` and `variables.tf` files in it. If we’re good, do this:
```
ls -l
# total 8
# -rw-r--r-- 1 root root 234 Apr 26 03:51 main.tf
# -rw-r--r-- 1 root root 423 Apr 26 03:51 variables.tf

terraform init
# You should see messages saying:
# "Initializing provider plugins...
# Terraform has been successfully initialized!"

terraform plan
# Terraform should prompt you for the values of
# var.vtm_password and
# var.vtm_rest_ip
```
Give it the values, and if all is well and your try-vtmtf container can reach your vTM, you should see a message in happy green letters: “No changes. Infrastructure is up-to-date.”
To sum it up, in this part we learned:
- What is a Terraform template,
- How to declare a connection to a vTM (cluster), and
- What the variables look like in Terraform and what they’re useful for.
At this point, pour yourself some beverage to celebrate getting your set-up into shape for Part 2, where we’ll actually create some useful configuration. 🙂