
vRA 8.3 – Saltstack Config – Getting started. Part 1

Like many others, I reach for Ansible when configuring endpoints once a VM is provisioned. With the introduction of SaltStack Config (Enterprise) into the vRA bundle from 8.3, I wanted to test it as an alternative. Beyond the obvious integration advantages of utilising SaltStack, the piece of functionality that interests me most is Salt's beacon and reactor capability. All of a sudden I have a centralised ability to enforce consistency across my estate above and beyond initial deployment, without the need for additional tooling like Puppet.

This is the first in a series of Saltstack posts and covers basic agent integration and state files.

Step 1 – Sort out your template

Firstly, Salt requires cloud-init on your base template; this is used to configure the Salt minion agent. Up to this point I have steered away from cloud-init due to its incompatibility with vSphere customization specs, but now I need a workaround, and fortunately it is really clearly written up in this great vnuggets post:

Step 2 – Cloud Assembly Template

When vRA 8.3 is integrated with Salt during install via vRLCM, a property group is automatically added to Cloud Assembly named ‘SaltStackConfiguration’.

SaltStack property group
SaltStackConfiguration Automatic Property Group

Within this group there are two properties:

  • masterAddress – This points to the Saltstack appliance
  • masterFingerPrint – The certificate thumbprint
Saltstack default properties

Below is the cloud template YAML:

resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      minionId: '${self.resourceName}'
      image: base-centos7
      cpuCount: 2
      totalMemoryMB: 2048
      networks:
        - network: '${}'
          assignment: static
      cloudConfig: |
        preserve_hostname: false
        hostname: ${self.resourceName}
        fqdn: ${self.resourceName}.tg.local
        network:
          version: 1
          config:
            - type: physical
              name: ens192
              subnets:
                - type: static
                  address: ${resource.Cloud_vSphere_Machine_1.networks.address[0]}/${resource.Cloud_vSphere_Network_1.prefixLength}
                  gateway: ${resource.Cloud_vSphere_Network_1.gateway}
        runcmd:
          - sudo echo '${self.resourceName}' > /etc/salt/minion_id
        salt_minion:
          pkg_name: 'salt-minion'
          service_name: 'salt-minion'
          config_dir: '/etc/salt'
          grains:
            deployment:
              - tg-datacenter
          conf:
            master: ${propgroup.SaltStackConfiguration.masterAddress}
            master_finger: ${propgroup.SaltStackConfiguration.masterFingerPrint}

The key takeaways from the above code are…

minionId: '${self.resourceName}'
  • Setting the minionId to a readable, usable identifier makes administration through the SaltStack interface considerably easier. This adds a property to the VM, so it is more informational than anything else.
- sudo echo '${self.resourceName}' > /etc/salt/minion_id
  • Pushes the VM name into the minion_id file on first boot.
pkg_name: 'salt-minion'
service_name: 'salt-minion'
config_dir: '/etc/salt'
master: ${propgroup.SaltStackConfiguration.masterAddress}
master_finger: ${propgroup.SaltStackConfiguration.masterFingerPrint}

This section installs the Salt minion agent and configures it using the properties from the new property group described above.
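At deploy time, cloud-init's salt_minion module writes the conf values into the minion configuration on the guest. The result looks roughly like this (the values shown are placeholders; the real ones are resolved from the SaltStackConfiguration property group):

```yaml
# /etc/salt/minion — as rendered by cloud-init's salt_minion module.
# Placeholder values; the real ones come from the property group.
master: ssc-appliance.tg.local
master_finger: aa:bb:cc:dd:...
```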

Step 3 – Test a deployment

So, on my first deployment attempt the Salt minion failed to install. I am not entirely sure why at this stage, but I suspect it's because I didn't have a repo configured with the relevant RPM, so for testing purposes I updated the template with the Salt minion agent pre-installed (but not configured). Second time round it was all good; as you can see below, the minion gets configured on first boot.

First boot test

When logging into SaltStack Config, under 'Minion Keys' > 'Pending' you can see that the new VM is pending approval for connection, proving the configuration in the template worked. Out of the box you need to manually approve minions into the Salt master; however, this can be automated, something I will cover in a separate post on reactors.

Environments / File Server & State Files

vRA SaltStack Config comes with an inbuilt file server and two pre-defined 'environments', which can be viewed under 'Config' > 'File Server' (see the Salt documentation for specifics on environments). These environments are 'base' and 'sse'. For the purposes of testing I am using base as my environment.

Saltstack file server / environments

Within the base folder, I have created a top.sls file. This file manages the grouping of machines, the criteria for the groups of machines, and then the state files to be executed per group. State files (*.sls) are the bread and butter of Saltstack. They are comparable to playbooks in Ansible. Multiple state files can be executed against machines.

Breaking down the below file:

  • Line 1 – The environment to deploy against (base).
  • Lines 2 & 3 – The criteria for my group. In this example I have added a grain to my minion: additional metadata which can inform grouping, build in business logic, etc. The grain is a K/V pair with key 'deployment' and value 'tg-datacenter'.
  • Line 4 – This says: look in the general folder for a state file called apache_install.sls (the file extension is implied). Additional state files can be added and will be executed in this order.
  • Line 5 – Slightly different from line 4, this line references the presence folder. As no file is specifically referenced, Salt looks for an 'init.sls' file by default.
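Putting those lines together, the top file is along these lines (reconstructed from the breakdown above; the grain key/value and folder names are the ones described in this post):

```yaml
# base/top.sls — reconstructed from the line-by-line breakdown above
base:                          # line 1: the environment to deploy against
  'deployment:tg-datacenter':  # line 2: grain K/V criteria for the group
    - match: grain             # line 3: match on grains rather than minion ID
    - general.apache_install   # line 4: general/apache_install.sls
    - presence                 # line 5: presence/init.sls (implied)
```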

This folder structure within an environment can be as simple or as complicated as your infrastructure requires. In my lab setup I have a folder named general – these are state files which apply to all workload servers. A second folder, master, holds state files specifically for the Salt master appliance. Finally, the presence folder is a predefined folder on the SaltStack appliance which holds a default state file to enable 'presence' (aka minion heartbeating) on minions.
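For a flavour of what general/apache_install.sls might contain, here is a minimal sketch; the state IDs and the httpd package/service names are my assumptions, not taken from the post:

```yaml
# general/apache_install.sls — illustrative only; IDs and names are assumptions
install_apache:
  pkg.installed:
    - name: httpd             # CentOS package name for Apache

run_apache:
  service.running:
    - name: httpd
    - enable: True            # start on boot
    - require:
      - pkg: install_apache   # only start once the package is installed
```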

Top file example

Executing a state file

The SaltStack Config GUI can run jobs on an ad-hoc basis or on a schedule. For the purposes of this example, I'm going to run an ad-hoc job.

Within 'Config' > 'Jobs' a top file can be executed via the Highstate job. Click the dots and choose 'Run Now'.

Choose the targets which you wish to run highstate against and hit Run Now. (For this test I just chose all CentOS minions; in production I would create target groups based upon criteria, e.g. grain K/V membership.)

Job progress can be found within 'Activity' > 'In Progress' / 'Completed'. The ideal output is Returned > 'True' and Errors > 'False'. The job can be drilled into within the interface and additional data extracted.

Job Summary
Job output drill down

This is a very brief overview of minion deployment, the file server and executing job files against the base environment. Keep an eye out for my next post, covering beacons, reactors and more!
