Time goes so fast; it’s hard to believe that a year and a half has gone by since I started my blogging series about combining Vagrant, VMware and Ansible. To think that at the time I thought I would complete the series within the same month… Anyway, you are now reading part 2, where I will dive into actually provisioning a VMware virtual machine managed by Vagrant, using Ansible. If you want to know how to start using Vagrant with VMware, you should read part 1.
Please keep in mind again that the technologies are interchangeable here. Combining Ansible and VMware with Vagrant is just my personal choice. This post is about provisioning a machine managed by Vagrant with Ansible, but you could just as well provision a physical machine instead of a Vagrant-managed virtual machine, or use VirtualBox instead of VMware.
What do you need?
Before you can start provisioning, you need to install Ansible on the host machine (your computer or laptop, for example) and you should add a configuration section to the Vagrantfile. The Vagrantfile is the configuration file for Vagrant that tells it where to find its base box and how to use it.
Installing Ansible
For Linux distributions, Ansible comes as a package that you can install with the appropriate package manager, such as APT on Ubuntu, by typing in a terminal window:
apt-get install ansible
For a Mac, as in my case, you can do the same using MacPorts:
port install ansible
Configuring Vagrant to use Ansible
In the Vagrantfile, add the following section:
config.vm.provision "ansible" do |ansible|
  ansible.verbose = 'v'
  ansible.playbook = "playbook.yml"
  ansible.inventory_path = "inventory"
end
Also, make sure a specific private network is configured for your machine, so that you know the IP address of the machine that Ansible will be provisioning:
# Create a private network, which allows host-only access to the machine
# using a specific IP.
config.vm.network "private_network", ip: "192.168.4.172"
As you may have guessed, verbose sets how much output you will see when the Ansible playbook is running. Adding more v’s (for example ‘vvv’) will increase the amount of output.
The playbook is the script that Ansible uses to do the actual provisioning. More about the playbook later. “playbook.yml” is the path to the playbook, which in this case is relative to the path of the Vagrantfile itself.
The inventory_path is the path to the inventory file, which Ansible uses to determine which servers are available for provisioning. The playbook then specifies which of these servers will actually be provisioned. Since Ansible is a tool intended to provision entire server clusters, this may seem a bit of a faff when all you want is to provision a single virtual machine for development purposes. If you omit this directive, according to the Vagrant documentation, Vagrant will automatically create an inventory file for all the machines it controls. I did not try this, and I like to have precise control over which machines are being provisioned, so I explicitly specify the inventory_path.
The inventory file
The inventory file in my case is simply called inventory and has the following contents:
default ansible_ssh_host=192.168.4.172
Note that the IP address in the inventory file corresponds to the IP of the private network specified in the Vagrantfile. Here, default is the name of the machine as it will be used in the playbook.
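To make the link between the two files concrete, here is a minimal sketch of a playbook header that targets the machine named default from the inventory above (the task list is deliberately left empty; a full example follows later):

---
# The hosts value must match a host name (or group) from the inventory file.
- hosts: default
  remote_user: vagrant
  tasks: []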
The playbook
Since I started blogging about Ansible, I have successfully tried a few different playbooks, but they were all built up gradually, with trial and error. I do not tend to follow large, elaborate examples; instead, I add small steps to my playbooks, going from error to error until they do what I want. While this makes me understand every single step in my playbook, it also leads to inefficient playbooks, where tasks that could easily be grouped together are scattered around the playbook, because they were added at will, whenever I needed them. Another drawback of my approach so far is that I came from a background of using shell scripts for provisioning, and in some cases I have literally translated what I had in a shell script into an Ansible task. While shell scripts are a fine and valid way to provision your VM, their approach is different from that of Ansible. If you translate a command from a shell script into a very similar task in Ansible, it is more than likely that you will not be tapping into the full potential of Ansible modules.
From the above, you may already have guessed that Ansible playbooks are built out of tasks that use modules. A task is something you want to be done on the target machine, and a module is a piece of software built into Ansible that makes it easy for you to specify a certain kind of task.
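To make that difference concrete, here is a sketch of the same step written twice: once as a literal translation of a shell command and once using the apt module. Both install the same package, but only the module version lets Ansible check the current state of the machine and skip the work when nothing needs to change:

tasks:
  # Literal translation of a shell script line: this runs every time,
  # and Ansible cannot tell whether anything actually changed.
  - name: install apache the shell-script way
    shell: apt-get install -y apache2

  # The same step using the apt module: idempotent, and Ansible reports
  # "changed" only when the package really had to be installed.
  - name: install apache the Ansible way
    apt: pkg=apache2 state=present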
With that out of the way, let me try a very simple playbook, while using modules as well as I possibly can.
A simple example playbook
To try the example playbook I will start from scratch, so that I do not accidentally omit any steps here. I open up a terminal window, go to my home directory and type
mkdir vagrant-blog
and then
cd vagrant-blog
to enter the newly created directory. In there, I will initialize a new vagrant box, based on the base box I created earlier.
vagrant init base
This should give you the following output:
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
This will initialize a new vagrant environment based on the base box that I created before. ‘base’ in this case is just how I named my base box.
Now I want to make a few adjustments to the Vagrantfile to prepare it for Ansible, but first I want to have a go at vagrant up, just to verify I did not make any mistakes and my machine will be up and running just fine.
vagrant up
The output should look similar to this:
Bringing machine 'default' up with 'vmware_fusion' provider...
==> default: Cloning VMware VM: 'base'. This can take some time...
==> default: Verifying vmnet devices are healthy...
==> default: Pruning invalid NFS exports. Administrator privileges will be required...
Password:
==> default: Preparing network adapters...
==> default: Starting the VMware VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.4.193:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Forwarding ports...
    default: -- 22 => 2222
==> default: Configuring network adapters within the VM...
==> default: Waiting for HGFS kernel module to load...
==> default: Enabling and configuring shared folders...
    default: -- /Users/bartmcleod/vagrant-blog: /vagrant
From the output, you can see that the machine has been assigned the local IP address 192.168.4.193. In order to make sure it gets the same IP address the next time it is booted, you should specify it as the private network IP in the Vagrantfile.
My Vagrantfile now lives in ~/vagrant-blog. I open it and uncomment the line that specifies the private network, and I also change the IP address, so that the line now reads:
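config.vm.network "private_network", ip: "192.168.4.193"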
Now it’s time to add the inventory file, which I will again simply name inventory. And this goes in the inventory file:
default ansible_ssh_host=192.168.4.193
There are two more steps, as we learned above: the section in the Vagrantfile that configures the provisioning, and the playbook itself. If you name the playbook playbook.yml, you can use the configuration exactly as described above, but I will add it here just for completeness’ sake.
config.vm.provision "ansible" do |ansible|
  ansible.verbose = 'v'
  ansible.playbook = "playbook.yml"
  ansible.inventory_path = "inventory"
end
With that in our Vagrantfile, we are now ready to write our first simple playbook. We will make the playbook install the Apache webserver and display the default website in a browser on the host machine. Please note that my approach to playbooks is really, really simple. You can read more about playbooks on docs.ansible.com, but beware it might dazzle you, so if you only need a development environment in Vagrant, you might want to stick with really simple. For a different approach and a very clear explanation, you may also want to read this post by Adam Brett.
The playbook:
---
- hosts: default
  vars:
    http_port: 80
    max_clients: 200
    ssh_port: 22
  remote_user: vagrant
  sudo_user: root
  sudo: true
  tasks:
    - name: install python properties to be able to use ppa
      apt: pkg=python-software-properties state=present update_cache=yes
    - name: install apache
      apt: pkg=apache2 state=present update_cache=yes
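If you want to take this one small step further, you could also make sure Apache is running and comes back up after a reboot. Below is a sketch of an extra task using the service module; it is not part of the playbook above, but it would slot in right below the apache task:

    - name: make sure apache is running and starts at boot
      service: name=apache2 state=started enabled=yes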
If your vagrant instance is already running, you may now run
vagrant provision
If it is not running, you may bring it up and it will be provisioned, because it has never been provisioned before:
vagrant up
If you want to force provisioning when bringing it up, you may type:
vagrant up --provision
Now the machine will be provisioned using Ansible and, when it’s ready, you may fire up a browser on your host machine and go to http://192.168.4.193/ to see the Apache default page.
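If you prefer an automated check over opening a browser, you could append a task like the sketch below to the playbook. It uses the uri module to request the default page from within the VM and fails unless it gets a 200 response (note that on older Ansible versions the uri module may require python-httplib2 on the guest):

    - name: verify that apache serves the default page
      uri: url=http://localhost/ status_code=200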
Now this is very basic of course, so in my next post, I will follow up explaining an Ansible playbook that will compile PHP 7 for you on Ubuntu 14. Stay tuned!