Let’s Terraform the vTM: Part 2 / 4

Continuing from Part 1, today we’ll make our template do something useful. If you’re following along – make sure your set-up is all good, and the very last exercise of Part 1 completes correctly.

Ready? Let’s carry on!

To remind you what we’re templating, here’s our sample app’s diagram again:

Topology Diagram

As far as vTM goes, we have four elements we’ll need to configure:

  • 2 x Pools (Main and API)
  • Traffic IP Group
  • Virtual Server

Note that our playpen setup is missing a few bits and pieces, to say the least. We have a single vTM, no ability to raise Traffic IPs, and no actual backend servers. The template we create, however, will work just fine if we point it at a different vTM (cluster) in a real environment and give it the right IPs as input parameters!

But first things first

Remember how in the last exercise we had to type the input values of the variables by hand? This will get old very quickly, so let’s take care of it. When you run Terraform, it will look in the current directory for a file named terraform.tfvars, which can provide values for the variables defined within the template. Following the reasoning in Part 1, I like to keep in this file the values that are specific to the copy of the template I’m deploying. Let’s make ours. Create a new file called terraform.tfvars in our try-vtmtf directory, and add the following to it:

vtm_rest_ip = "172.16.0.10"

vtm_password = "abc123"


Note that this file often contains sensitive information (as you can see above), such as cloud and login credentials, so it’s an extremely good idea to add terraform.tfvars to your ~/.gitignore_global. Handling sensitive data with Terraform is a much bigger topic than this blog allows, so for now just keep this file in mind, along with the state files that terraform will create once you run the apply command.
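As an illustration, a global ignore file along these lines would keep both the variables file and the state files out of any of your repositories (this assumes you use ~/.gitignore_global as your global excludes file):

```
# ~/.gitignore_global
terraform.tfvars
*.tfstate
*.tfstate.backup
```

If you haven’t set one up before, running git config --global core.excludesfile ~/.gitignore_global tells git to use it.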

Wait a sec…

Most resources in vTM have a name field that other resources use to refer to them; e.g., a Virtual Server refers to its pool by name. Since our templates could, in theory, be applied to the same infrastructure (in this case, a vTM) multiple times, we need to make sure that:

  • We can find all the bits that a particular instance of a template has created; and
  • Our resource names (assigned in the template!) don’t clash.

Fortunately, terraform supports generation of random strings that are stable within a template instance, but are created anew when an instance is destroyed or an entirely new instance of the template is created. Let’s add one.

# Random string to make sure each deployment of this template
# includes unique string in all resources' names. This should allow
# deployment of more than one copy of this template to the same vTM
# cluster as long as the unique things like IP addresses used for
# Traffic IP Groups are taken care of elsewhere.
#
resource "random_string" "instance_id" {
  length  = 4
  special = false
  upper   = false
}

locals {
  # Create a local var with the value of the random instance_id
  uniq_id = "${random_string.instance_id.result}"
}


The code above creates a local value uniq_id, a string made up of a random mix of 4 lower-case letters and numbers. We’ll use it when we get to naming our resources.
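If you’re curious to see the generated value, you could temporarily add an output block like this one to the template (purely illustrative; it isn’t part of the final template):

```
output "uniq_id" {
  value = "${local.uniq_id}"
}
```

After an apply, terraform will then print something like uniq_id = k3x9 (your four symbols will, of course, differ).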

Let’s add our first pool

We have two pools, each serving a different part of our faux web site. In our diagram we have two nodes in each pool. Let’s start with the Main pool. Edit our main.tf, and add the following to it:

resource "vtm_pool" "main_pool" {
  name     = "${local.uniq_id}_Main-Pool"
  monitors = ["Ping"]

  nodes_table {
    node = "10.1.0.10:80"
  }

  nodes_table {
    node = "10.2.0.10:80"
  }
}


Hmm, hold on. Looks like we’ll need to create a separate nodes_table section for each node we want in our pool. But how can we template configurations where we don’t know the number of pool nodes ahead of time, e.g., when the list of nodes is passed in as a parameter? Terraform doesn’t handle this very well at this point; so what do we do?

Fortunately, the vTM Provider offers the ability to describe resource parameters that take input as tables, such as nodes_table above, through special data sources. The output of these data sources can then be used as the parameter value. Don’t worry, it will become clearer in a moment. Data sources support the use of count (which we’ll talk about shortly), so we’re in luck! Let’s see what this looks like.

Replace the resource "vtm_pool" "main_pool" { .. } that you’ve just added (per above) with the following:

data "vtm_pool_nodes_table_table" "main_pool_nodes" {
  # Repeat as many times as we have nodes in our node list variable
  count = "${length(var.main_nodes)}"

  # Get the node from the var.main_nodes list
  node = "${var.main_nodes[count.index]}"
}

resource "vtm_pool" "main_pool" {
  name     = "${local.uniq_id}_Main-Pool"
  monitors = ["Ping"]

  # The data.vtm_pool_nodes_table_table.main_pool_nodes.*.json returns a list
  # of string, each string is a JSON for one node. We need to wrap this into
  # a JSON list, so we add "[]" at the ends, and join the node strings with ","
  # in the middle for a resulting "[{..},{..}]" string that nodes_table_json
  # is expecting
  #
  nodes_table_json = "[${join(",", data.vtm_pool_nodes_table_table.main_pool_nodes.*.json)}]"
}


Ok, there’s a fair bit going on above. First of all, we have a new variable, main_nodes, which we’ll need to add to our variables.tf in a moment.

Second, vtm_pool_nodes_table_table is a Data Source, which is a special read-only kind of resource. This particular one is designed to create a JSON string populated with values for the specified parameters. In our case, it’s the pool node parameter node.

You can also see the parameter count, which is set to the length of the variable main_nodes. Since this variable is of type list, length will equal the number of entries in it. In our case, the list will be populated with IP:Port strings for our pool nodes, e.g., "10.1.0.10:80" and "10.2.0.10:80".

The way count works is that when it’s present in a resource or data source, terraform will create count copies of that resource or data source. In our case there will be two, each returning a JSON representation of the corresponding node, with the value for node taken from the count.index-th element of the main_nodes list.
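For example, with our two-entry main_nodes list, the data source above conceptually expands into two indexed instances, something like this (shown as comments for illustration):

```
# main_nodes = ["10.1.0.10:80", "10.2.0.10:80"]  =>  count = 2
#
# data.vtm_pool_nodes_table_table.main_pool_nodes.0   # node = "10.1.0.10:80" (count.index = 0)
# data.vtm_pool_nodes_table_table.main_pool_nodes.1   # node = "10.2.0.10:80" (count.index = 1)
```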

Then we get to our pool. The first thing you’ll notice is that the value for the name parameter is made up by prepending our uniq_id string to the literal _Main-Pool; as discussed above, this avoids potential clashes and lets us identify the bits of vTM config that belong together.

The second bit is that the nodes_table_json parameter is formed by combining the outputs of all count copies of vtm_pool_nodes_table_table into a string that conforms to the syntax of a JSON table. It works like this:

  • Each vtm_pool_nodes_table_table output is a {}
  • join() connects them all into one string joined by ,, essentially {},{}
  • Finally, they are combined with literal [ and ] at the start and the end, to a resulting [{},{}] JSON string we want.
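With our two example nodes, the resulting value of nodes_table_json would look something like this (the data source may also emit other per-node keys with their default values, so the exact string can be longer):

```
[{"node":"10.1.0.10:80"},{"node":"10.2.0.10:80"}]
```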

The cool thing about it is that it will work irrespective of how many items are in the main_nodes list, zero or a hundred, so all we need to do is generate a main_nodes list that matches our environment, and terraform will do the rest.

Whew. Ok, let’s add our new variable main_nodes to our variables.tf and terraform.tfvars, and give it a spin!

variables.tf:

variable "main_nodes" {
  description = "List of nodes for the 'Main' pool"

  # For example, ["1.1.1.1:80", "2.2.2.2:80"]
  default = []
}


terraform.tfvars:

main_nodes = ["10.1.0.10:80", "10.2.0.10:80"]


Note that variable "main_nodes" is supplied with a default value of []. This stops Terraform from prompting you for the value of this variable if you have not specified it, for example through terraform.tfvars as above.

I did this because a configuration where a pool has no nodes is technically valid and useful. An example would be a situation where my configuration evolves as I repeatedly apply the same template with different input parameters: I may start with an empty list of nodes in one pool and add them later, while the rest of the configuration is already in place.

Now, let’s see what this can do! In our try-vtmtf Docker container (or your own setup), from inside our try-vtmtf directory, run terraform init, followed by terraform plan. You need to run init again because we’ve added a new provider to our template – random – and terraform needs to download a copy of that Provider plugin from the Internet.

If all went well, you should see terraform think for a little while, then spit out a bit of output ending with something like:

**Plan: 2 to add, 0 to change, 0 to destroy.**


Take your time to have a look at the output. This is a representation of what terraform will do once you run terraform apply. You’ll notice that terraform shows many more parameters for the resource we’ve defined (vtm_pool.main_pool). The ones we haven’t specified are the defaults. This output is often useful for determining how your resource will be configured in the parts you didn’t specify, but also for figuring out what parameters your resource has for you to play with. 🙂

Before wrapping up this part, let’s run the terraform apply, and see what happens.

Terraform will again display the plan output and ask you to type “yes” to confirm that you let it make the proposed changes. If you don’t want to be prompted, add -auto-approve to the end of the command, so it reads in full as terraform apply -auto-approve.

If all went well, you should see:

**Apply complete! Resources: 2 added, 0 changed, 0 destroyed.**


You can wrap up with that, or go to https://172.16.0.10:9090 (changing 172.16.0.10 to your vTM’s IP) and visit Services -> Pools, where you should see a new pool named [4 random symbols]_Main-Pool with 2 nodes in it – 10.1.0.10 and 10.2.0.10, both on port 80.

To sum it up, in this part we learned:

  • How to save values for Terraform variable parameters in the terraform.tfvars file,
  • How to name vTM resources to avoid conflicts between multiple instances of a template applied to the same vTM,
  • What Data Sources and Resources are, and
  • How to create a vTM Pool with a variable number of nodes in it.

See you in Part 3!

About Dmitri Kalintsev

Some dude with a blog and opinions ;)
