Appendix C. Provisioning with Ansible

We used Fabric to automate deploying new versions of the source code to our servers. But provisioning a fresh server, and updating the Nginx and Gunicorn config files, was all left as a manual process.

This is the kind of job that’s increasingly given over to “Configuration Management” or “Continuous Deployment” tools. Chef and Puppet were the first popular ones, and in the Python world there’s Salt and Ansible.

Of all of these, Ansible is the easiest to get started with. We can get it working with just two files:

pip install ansible  # Python 2 sadly

An “inventory file” at deploy_tools/inventory.ansible defines what servers we can run against:

deploy_tools/inventory.ansible

[live]
superlists.ottg.eu

[staging]
superlists-staging.ottg.eu

[local]
localhost ansible_ssh_port=6666 ansible_host=127.0.0.1

(The local entry is just an example, in my case a Virtualbox VM, with port forwarding for ports 22 and 80 set up.)
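Before writing any playbook, a quick sanity check (optional, and assuming your SSH key is already authorised on the servers) is to use Ansible’s built-in ping module against the inventory, from the deploy_tools directory:

ansible -i inventory.ansible staging -m ping

Each reachable host should answer with a “pong”.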

Installing System Packages and Nginx

Next the Ansible “playbook”, which defines what to do on the server. This uses a syntax called YAML:

deploy_tools/provision.ansible.yaml

---

- hosts: all

  sudo: yes

  vars:
      host: $inventory_hostname

  tasks:
    - name: make sure required packages are installed
      apt: pkg=nginx,git,python3,python3-pip state=present
    - name: make sure virtualenv is installed
      shell: pip3 install virtualenv

    - name: allow long hostnames in nginx
      lineinfile:
        dest=/etc/nginx/nginx.conf
        regexp='(\s+)#? ?server_names_hash_bucket_size'
        backrefs=yes
        line='\1server_names_hash_bucket_size 64;'

    - name: add nginx config to sites-available
      template: src=./nginx.conf.j2
                dest=/etc/nginx/sites-available/{{ host }}
      notify:
          - restart nginx

    - name: add symlink in nginx sites-enabled
      file: src=/etc/nginx/sites-available/{{ host }}
            dest=/etc/nginx/sites-enabled/{{ host }} state=link
      notify:
          - restart nginx

The vars section defines a variable “host” for convenience, which we can then use in the various filenames and pass to the config files themselves. It comes from $inventory_hostname, which is the domain name of the server we’re running against at the time.

In this section, we install our required software using apt, tweak the Nginx config to allow long hostnames using a regular expression replacer, and then we write the Nginx config file using a template. This is a modified version of the template file we saved into deploy_tools/nginx.template.conf in Chapter 8, but it now uses a specific templating syntax—Jinja2, which is actually a lot like the Django template syntax:

deploy_tools/nginx.conf.j2

server {
    listen 80;
    server_name {{ host }};

    location /static {
        alias /home/harry/sites/{{ host }}/static;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://unix:/tmp/{{ host }}.socket;
    }
}

Configuring Gunicorn, and Using Handlers to Restart Services

Here’s the second half of our playbook:

deploy_tools/provision.ansible.yaml

    - name: write gunicorn init script
      template: src=./gunicorn.upstart.conf.j2
                dest=/etc/init/gunicorn-{{ host }}.conf
      notify:
          - restart gunicorn

    - name: make sure nginx is running
      service: name=nginx state=started
    - name: make sure gunicorn is running
      service: name=gunicorn-{{ host }} state=started

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted

    - name: restart gunicorn
      service: name=gunicorn-{{ host }} state=restarted

Once again we use a template for our Gunicorn config:

deploy_tools/gunicorn.upstart.conf.j2

description "Gunicorn server for {{ host }}"

start on net-device-up
stop on shutdown

respawn

chdir /home/harry/sites/{{ host }}/source
exec ../virtualenv/bin/gunicorn \
    --bind unix:/tmp/{{ host }}.socket \
    --access-logfile ../access.log \
    --error-logfile ../error.log \
    superlists.wsgi:application
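Once that file is installed into /etc/init, Upstart will bring the service up when the network comes up and respawn it if it crashes. If you want to check on it by hand on the server, the usual Upstart commands work; for the staging site, for example:

sudo status gunicorn-superlists-staging.ottg.eu
sudo restart gunicorn-superlists-staging.ottg.eu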

Then we have two “handlers”, one to restart Nginx and one to restart Gunicorn. Ansible is clever about these: a handler only runs if one of the tasks that notifies it actually changed something, and if several tasks notify the same handler, Ansible waits until they have all finished and then calls the handler just once.

And that’s it! The command to kick all these off is:

ansible-playbook -i inventory.ansible provision.ansible.yaml --limit=staging
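If you’d like to see what the playbook would change before letting it loose on a real server, ansible-playbook also has a --check flag, which does a dry run (tasks that use the shell module are skipped in this mode, so it’s not a perfect preview):

ansible-playbook -i inventory.ansible provision.ansible.yaml --limit=staging --check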

Lots more info in the Ansible docs.

What to Do Next

I’ve just given a little taster of what’s possible with Ansible. But the more of your deployment you automate, the more confidence you will have in it. Here are a few more things to look into.

Move Deployment out of Fabric and into Ansible

We’ve seen that Ansible can help with some aspects of provisioning, but it can also do pretty much all of our deployment for us. See if you can extend the playbook to do everything that we currently do in our Fabric deploy script, including notifying the service restarts as required.
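As a hint of what that might look like, here’s a rough, untested sketch of some deployment tasks you could add to the tasks section of the playbook. The git, pip, and django_manage modules are standard Ansible modules, but the repository URL is a placeholder and the paths are just my guesses based on the directory layout we’ve been using:

    - name: get latest source
      # hypothetical repo URL -- substitute your own
      git: repo=https://github.com/yourusername/superlists.git
           dest=/home/harry/sites/{{ host }}/source

    - name: update virtualenv with any new package requirements
      pip: requirements=/home/harry/sites/{{ host }}/source/requirements.txt
           virtualenv=/home/harry/sites/{{ host }}/virtualenv

    - name: run database migrations
      django_manage: command=migrate
                     app_path=/home/harry/sites/{{ host }}/source
                     virtualenv=/home/harry/sites/{{ host }}/virtualenv

    - name: collect static files
      django_manage: command=collectstatic
                     app_path=/home/harry/sites/{{ host }}/source
                     virtualenv=/home/harry/sites/{{ host }}/virtualenv
      notify:
          - restart gunicorn

You’d also need something for the settings.py tweaks (ALLOWED_HOSTS, the secret key and so on) that the Fabric script currently handles.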

Use Vagrant to Spin Up a Local VM

Running tests against the staging site gives us the ultimate confidence that things are going to work when we go live, but we can also use a VM on our local machine.

Download Vagrant and Virtualbox, and see if you can get Vagrant to build a dev server on your own PC, using our Ansible playbook to deploy code to it. Rewire the FT runner to be able to test against the local VM.

Having a Vagrant config file is particularly helpful when working in a team—it helps new developers to spin up servers that look exactly like yours.
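One possible way to wire it together (the box name is just an example, and you’ll need to tell Vagrant to forward the VM’s SSH port to 6666 to match the [local] entry in our inventory file, or else change the inventory to use the port Vagrant picks, which is 2222 by default):

vagrant init ubuntu/trusty64
vagrant up
ansible-playbook -i inventory.ansible provision.ansible.yaml --limit=local

You’ll probably also need to point Ansible at the vagrant user and the SSH key that Vagrant generates for the VM, either in the inventory file or with the --user and --private-key options.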
