Something-driven development

Software development thoughts around Ubuntu, Python, Golang and other tools

Using Apache2’s mod_proxy to transition traffic

I was recently in the situation of wanting to transition traffic gradually from an old deployment to a new deployment. It’s a large production system, so rather than just switching the DNS entries to point at the new deployment, I wanted to be able to shift the traffic over in a couple of controlled steps.

It turns out that Apache’s mod_proxy makes this relatively straightforward. You can choose the resource for which you want to move traffic, and easily update the proportion of traffic for that resource that should go through to the new environment. This might be old news to some, but not having needed it before, I was quite impressed by Apache2’s configurability:

# Pass any requests for specific-url through to the balancer (defined below)
# to transition traffic from the old to new system.
ProxyPass /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/
ProxyPassReverse /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/

# Send all other requests straight to the backend for the old system.
ProxyPass /myapp/ http://old.backend.ip:1234/myapp/
ProxyPassReverse /myapp/ http://old.backend.ip:1234/myapp/

# Send 50% of the traffic to the old backend, and divide the rest between the
# two new frontends.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=2
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=1
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>

Once the stats verify that the new environment isn’t hitting any firewall or load issues, the load factors can be updated (requiring only a graceful Apache restart) to ramp traffic up until everything is hitting the new environment. Of course, it adds one extra hop for serving requests, but it’s then much safer to switch the DNS entries when you *know* your new system is already handling the production traffic.
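
For example, a later step in the ramp-up might look something like this – a sketch I haven’t tested against this exact setup – with the members weighted so that the old backend only sees around 10% of the requests:

# Send ~90% of the traffic to the new frontends, keeping the old backend warm.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=1
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=5
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=4
    ProxySet lbmethod=byrequests
</Proxy>

After editing, an `apache2ctl graceful` picks up the change without dropping in-flight requests.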

Written by Michael

July 17, 2015 at 6:37 am

Posted in Uncategorized

Creating a Mir Snap in the cloud

I tend to do all my work in the cloud (and work physically from a useful but comparatively underpowered chromebook running Ubuntu), so creating a Mir snap on my local machine wasn’t an option I was keen to try.

Mainly for my own documentation, here’s how the first attempt at creating a Mir snap (a display server for experimenting with Kodi on Ubuntu Snappy) went:

First, spin up an Ubuntu Wily development instance in a(ny) OpenStack cloud (I’m using a Canonical internal one):

$ nova boot --flavor cpu2-ram4-disk100-ephemeral20 --image 1461273c-af73-4839-9f64-3df00446322a --key-name ******** wily_dev

Less than a minute later, ssh in and use the existing snappy packaging branch for mir:


$ sudo add-apt-repository ppa:snappy-dev/tools
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install snappy-tools bzr cmake gcc-multilib ubuntu-snappy-cli mir-demos mir-graphics-drivers-desktop dpkg-dev

$ bzr branch lp:~mir-team/mir/snappy-packaging && cd snappy-packaging && make

The last step uses deb2snap to create the snap package from the installed deb packages, so the (76M) snap is for the amd64 architecture – matching the instance I created:


$ ls -lh mir_snap1_amd64.snap
-rw-rw-r-- 1 ubuntu ubuntu 76M Jun 21 11:20 mir_snap1_amd64.snap

Next up… using either QEMU or an ARM cloud instance (I’m not sure that the latter is available) to create an ARM Mir snap for my Raspberry Pi, and testing it out…

Written by Michael

June 21, 2015 at 1:25 pm

Posted in Uncategorized

A Kodi (XBMC) snap for the Raspberry Pi?

I just received an Orange Matchbox today – a Raspberry Pi 2B with a PiGlow and Ubuntu-engraved casing – to try out Snappy Ubuntu.

As a first project, I want to see if it’s possible to get Kodi (formerly XBMC) packaged as a snap. I’m not yet sure if this is possible, but the plan is to install Ubuntu on a separate microSD, set up deb2snap and see whether I can create a snap (minus certain dependencies). Obviously then install the snap on my snappy microSD and profit! (The devel’s in the details of course.)

I’m not sure that deb2snap will work with Ubuntu Trusty (the default image for RPi2), as it was recently announced that XMir will no longer be supported for anything less than Wily. We’ll see how it goes.

I already found and reported my first issue while testing snappy on a VM last week, which got fixed for today’s stable release. Let the fun begin…

Written by Michael

June 11, 2015 at 9:14 am

Posted in Uncategorized

A walk-through charming an existing wsgi application

After recently writing the re-usable wsgi-app role for juju charms, I wanted to see how easy it would be to apply this to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing newer click applications).

I’ve tried to include below the workflow that I used to create the charm, including the detailed steps and some mistakes, in case it’s useful for others. If you’re after more of an overview about using reusable ansible roles in your juju charms, check out these slides.

1. Create the new charm from the bootstrap-wsgi charm

First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:

$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/

Then update the charm metadata to reflect the ubuntu-reviews service:

--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 description: |
   <Multi-line description here>
 categories:

I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is set up):

--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
 - hosts: localhost
 
   vars:
-    - app_label: wsgi-app.example.com
+    - app_label: click-reviews.ubuntu.com
 
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: example_wsgi:application
-      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+      wsgi_application: wsgi:app
+      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''

Although when deploying this service we’d be deploying a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role allows you to switch between those two options. So the last step here is to add some Makefile targets to enable pulling in the application code and creating the tarball (you can see the details on the git commit for this step).
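
For the record, such targets can look roughly like the following – an illustrative sketch only, with a placeholder branch location, revision variable and output path rather than the exact contents of that commit:

# Illustrative sketch: pull the application code and build the tarball the
# charm expects. The branch URL, REVIEWS_REVNO and the paths are placeholders.
build/rnr-server:
	bzr branch lp:rnr-server build/rnr-server

create-tarball: build/rnr-server
	mkdir -p files/r$(REVIEWS_REVNO)
	tar czf files/r$(REVIEWS_REVNO)/rnr-server.tgz -C build rnr-server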

With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.

Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:

$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log

At this point, juju status shows no issues, but curling the service does (as expected):

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer

Checking the logs (which, oh joy, are already set up and configured with log rotation etc. by the wsgi-app role) shows the expected error:

$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi

2. Adding the wsgi, settings and urls

The current rnr-server code does include a Django project, but it’s specific to the existing deployment (requiring a 3rd-party library for settings, and serving the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm. First, the standard (django-generated) manage.py, wsgi.py and a near-default settings.py (which you can see in the actual commit).
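
For reference, the wsgi.py is just the stock Django-generated module, something along these lines (reproduced here from memory of the Django 1.5 default rather than copied from the commit):

# clickreviewsproject/wsgi.py
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "clickreviewsproject.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()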

An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:

--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: wsgi:app
+      python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+      wsgi_application: clickreviewsproject.wsgi:application
       code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
@@ -18,20 +19,18 @@
 
   tasks:
 
-    # Pretend there are some package dependencies for the example wsgi app.
     - name: Install any required packages for your app.
-      apt: pkg={{ item }} state=latest update_cache=yes
+      apt: pkg={{ item }}
       with_items:
-        - python-django
-        - python-django-celery
+        - python-django=1.5.4-1ubuntu1~ctools0
       tags:
         - install
         - upgrade-charm
 
     - name: Write any custom configuration files
-      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+      copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
       tags:
-        - config-changed
-        # Also any backend relation-changed events, such as databases.
+        - install
+        - upgrade-charm
       notify:
         - Restart wsgi

I can then run juju upgrade-charm and see that the application now responds, but with a 500 error highlighting the first (of a few) missing dependencies…

3. Adding missing dependencies

At this point, it’s easiest in my opinion to run debug-hooks:

$ juju debug-hooks ubuntu-reviews/0

and directly test and install missing dependencies, adding them to the playbook as you go:

ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y (and add it to the playbook in your charm editor).
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart

Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:

$ juju upgrade-charm --repository=../.. ubuntu-reviews

to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.

Other times you might want to destroy the environment to redeploy from scratch.

Sometimes you’ll have dependencies which are not available in the distro. For our deployments, we always ensure these are included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated the Makefile target that builds the tarball to use it. But there was another dependency, south, which wasn’t included by that script, so for simplicity here I’m going to install it via pip (which you don’t want to do in reality – you’d update your build process instead).

You can see the extra dependencies in the commit.

4. Customise settings

At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.

So, remove the vanilla settings.py and add a custom settings.py.j2 template, and add an extra django_secret_key option to the charm config.yaml:

--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
     current_symlink:
         default: "latest"
         type: string
-        description: |
+        description: >
             The symlink of the code to run. The default of 'latest' will use
             the most recently added build on the instance.  Specifying a
             differnt label (eg. "r235") will symlink to that directory assuming
             it has been previously added to the instance.
+    django_secret_key:
+        default: "you-really-should-set-this"
+        type: string
+        description: >
+            The secret key that django should use for each instance.

--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.sites',
+    'django_openid_auth',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'clickreviews',
+    'south',
+)
...

Again, full diff in the commit.

With the extra settings now defined and one more dependency added, I’m now able to ping the service successfully:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

But that only works because the input validation is fired before anything tries to use the (non-existent) database…

5. Adding the database relation

I had expected this step to be simple – add the database relation, call syncdb/migrate, but there were three factors conspiring to complicate things:

  1. The reviews application uses a custom postgres “debversion” column type
  2. A migration for the older non-click reviewsapp service requires postgres superuser creds (specifically, reviews/0013_add_debversion_field.py)
  3. The clickreviews app, which I wanted to deploy on its own, is currently dependent on the older non-click reviews app, so it’s necessary to have both enabled and therefore the tables from both synced

So to work around the first point, I split the ‘deploy’ task into two tasks so I can install and set up the custom debversion field on the postgres units before running syncdb:

--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
        @juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
        @juju deploy gunicorn
        @juju deploy nrpe-external-master
+       @juju deploy postgresql
        @juju add-relation ubuntu-reviews gunicorn
        @juju add-relation ubuntu-reviews nrpe-external-master
+       @juju add-relation ubuntu-reviews postgresql:db
+       @juju set postgresql extra-packages=postgresql-9.1-debversion
+       @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+       @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+       @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+       @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
        @echo See the README for explorations after deploying.

I’ve separated the syncdb out into a manual step (rather than running syncdb on a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and might be safe, but I don’t like it conceptually).

--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
 
   vars:
     - app_label: click-reviews.ubuntu.com
+    - code_archive: "{{ build_label }}/rnr-server.tgz"
 
   roles:
     - role: wsgi-app
       listen_port: 8080
       python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
       wsgi_application: clickreviewsproject.wsgi:application
-      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
     - role: nrpe-external-master
@@ -25,6 +25,7 @@
         - python-django=1.5.4-1ubuntu1~ctools0
         - python-tz
         - python-pip
+        - python-psycopg2
       tags:
         - install
         - upgrade-charm
@@ -46,13 +47,24 @@
       tags:
         - install
         - upgrade-charm
+        - db-relation-changed
       notify:
         - Restart wsgi
+
     # XXX Normally our deployment build would ensure these are all available in the tarball.
     - name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
       pip: name={{ item.key }} version={{ item.value }}
       with_dict:
         south: 0.7.6
+        django-openid-auth: 0.2
       tags:
         - install
         - upgrade-charm
+
+    - name: sync the database
+      django_manage: >
+        command="syncdb --noinput"
+        app_path="{{ application_dir }}/django-project"
+        pythonpath="{{ code_dir }}/current/src"
+      tags:
+        - syncdb

To work around the second and third points above, I updated the settings to remove the ‘south’ django application so that syncdb will set up the correct current tables without using migrations (while leaving south on the pythonpath, as some code currently imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.

By far the most complicated part of this change though, was the database settings:

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
 DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+  {% for dbrelation in relations['db'] %}
+  {% if loop.first %}
+    {% set services=relations['db'][dbrelation] %}
+    {% for service in services %}
+      {% if service.startswith('postgres') %}
+      {% set dbdetails = services[service] %}
+        {% if 'database' in dbdetails %}
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ dbdetails["database"] }}',
+        'HOST': '{{ dbdetails["host"] }}',
+        'USER': '{{ dbdetails["user"] }}',
+        'PASSWORD': '{{ dbdetails["password"] }}',
+    }
+        {% endif %}
+      {% endif %}
+    {% endfor %}
+  {% endif %}
+  {% endfor %}
+}
+{% endif %}
+
 TIME_ZONE = 'UTC'
 ROOT_URLCONF = 'clickreviewsproject.urls'
 WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
     'django_openid_auth',
     'django.contrib.messages',
     'django.contrib.staticfiles',
+    'reviewsapp',
     'clickreviews',
-    'south',
 )
 AUTHENTICATION_BACKENDS = (
     'django_openid_auth.auth.OpenIDBackend',

Why are the database settings so complicated? I’m currently rendering all the settings on any config change as well as on any database relation change, which means I need to use the global juju state for the relation data – and as far as juju knows, there could be multiple database relations for this stack. The current charm-helpers gives you this state as a dict (above it’s ‘relations’). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of it (the relation has two keys, one for postgres, the other for the connecting service). Yuk (see below for a better solution).
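
To make that concrete, the structure being looped over in the template looks roughly like this – a hand-written illustration, with the relation id and values invented:

# Roughly the shape of the global 'relations' dict (values invented).
relations = {
    'db': {
        'db:1': {
            'postgresql/0': {            # the postgres side: connection details
                'database': 'ubuntu-reviews',
                'host': '10.0.3.42',
                'user': 'someuser',
                'password': 'somepassword',
            },
            'ubuntu-reviews/0': {},      # the connecting service's side
        },
    },
}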

Full details are in the commit. With those changes I can now use the service:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

A. Cleanup: Simplify database settings

In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. So I’ll clean this up in the next step by moving the database settings into a separate file which is only written on the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service only has one database relation).

This simplifies things quite a bit:

--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
         owner: "{{ wsgi_user }}"
         group: "{{ wsgi_group }}"
       tags:
-        - install
-        - upgrade-charm
+        - config-changed
+      notify:
+        - Restart wsgi
+
+    - name: Write the database settings
+      template:
+        src: "templates/database_settings.py.j2"
+        dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+        owner: "{{ wsgi_user }}"
+        group: "{{ wsgi_group }}"
+      tags:
         - db-relation-changed
       notify:
         - Restart wsgi
+      when: "'database' in current_relation"
 
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ current_relation["database"] }}',
+        'HOST': '{{ current_relation["host"] }}',
+        'USER': '{{ current_relation["user"] }}',
+        'PASSWORD': '{{ current_relation["password"] }}',
+    }
+}

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
 
-{% if 'db' in relations %}
-DATABASES = {
-  {% for dbrelation in relations['db'] %}
-  {% if loop.first %}
-    {% set services=relations['db'][dbrelation] %}
-    {% for service in services %}
-      {% if service.startswith('postgres') %}
-      {% set dbdetails = services[service] %}
-        {% if 'database' in dbdetails %}
-    'default': {
-        'ENGINE': 'django.db.backends.postgresql_psycopg2',
-        'NAME': '{{ dbdetails["database"] }}',
-        'HOST': '{{ dbdetails["host"] }}',
-        'USER': '{{ dbdetails["user"] }}',
-        'PASSWORD': '{{ dbdetails["password"] }}',
-    }
-        {% endif %}
-      {% endif %}
-    {% endfor %}
-  {% endif %}
-  {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
 
 TIME_ZONE = 'UTC'

With those changes, the deploy still works as expected:
$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

Summary

Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:

  1. Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
  2. Handling the application settings
  3. Handling the database relation and custom setup

The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a config setting (ie. `juju set ubuntu-reviews build_label=r156`) from a configured url. It also provides the complete directory layout, user and group setup, log rotation etc. All of that comes for free.

Things I’d do if I was doing this charm for a real deployment now:

  1. Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and can’t change the schema etc.). This functionality could be factored out into another reusable role.
  2. Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
  3. Ensure all dependencies are either available in the distro or included in the tarball.

Written by Michael

June 25, 2014 at 11:54 am

Posted in cloud automation, juju

Reusable ansible roles for Juju charms

I’ve been writing Juju charms to automate the deployment of a few different services at work, which all happen to be wsgi applications… some Django apps, others using other frameworks. I’ve been using the ansible support for writing charms, which makes charm authoring simpler, but even then, essentially each wsgi service charm needs to do the same things:

  1. Setup specific users
  2. Install the built code into a specific location
  3. Install any package dependencies
  4. Relate to a backend (could be postgresql, could be elasticsearch)
  5. Render the settings
  6. Setup a wsgi (gunicorn) service (via a subordinate charm)
  7. Setup log rotation
  8. Support updating to a new codebase without upgrading the charm
  9. Support rolling updates to a new codebase

Only three of the above really change from service to service: the required package dependencies, the rendering of the settings, and the backend relation (which usually just causes the settings to be re-rendered anyway).

After trying (and failing) to create a nice reusable wsgi-app charm, I’ve switched to using ansible’s built-in support for reusable roles and created a charm-bootstrap-wsgi repo on github, which demonstrates all of the above out of the box (the README has an example rolling upgrade). The charm’s playbook is very simple, just reusing the wsgi-app role:

 

roles:
    - role: wsgi-app
      listen_port: 8080
      wsgi_application: example_wsgi:application
      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
      when: build_label != ''

 

and only needs to do two things itself:

tasks:
    - name: Install any required packages for your app.
      apt: pkg={{ item }} state=latest update_cache=yes
      with_items:
        - python-django
        - python-django-celery
      tags:
        - install
        - upgrade-charm

    - name: Write any custom configuration files
      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
      tags:
        - config-changed
        # Also any backend relation-changed hooks for databases etc.
      notify:
        - Restart wsgi

 

Everything else is provided by the reusable wsgi-app role. For the moment I’ve got the source of the reusable roles in a separate github repo, but I’d like to get these into the charm-helpers project itself eventually. Of course there will be cases where a service charm needs quite a bit more custom functionality, but if we can encapsulate and reuse as much as possible, it’s a win for all of us.

If you’re interested in chatting about easier charms with ansible (or any issues you can see), we’ll be getting together for a hangout tomorrow (Jun 11, 2014 at 1400UTC).

Written by Michael

June 10, 2014 at 6:22 pm

Posted in ansible, juju, python

Deploying swift with juju for development

Just documenting for later (and for a friend and colleague who needs it now) – my notes for setting up OpenStack Swift using juju. I need to go back and check whether keystone is required – I initially had issues with the test auth so I switched to keystone.

First, create the config file to use keystone, local block devices on the swift storage units (ie. no need to mount storage), and OpenStack Havana:

cat >swift.cfg <<END
swift-proxy:
    zone-assignment: auto
    replicas: 3
    auth-type: keystone
    openstack-origin: cloud:precise-havana/updates
swift-storage:
    zone: 1
    block-device: /etc/swift/storagedev1.img|2G
    openstack-origin: cloud:precise-havana/updates
keystone:
    admin-token: somebigtoken
    openstack-origin: cloud:precise-havana/updates
END

Deploy it (this could probably be replaced with a charm bundle?):

juju deploy --config=swift.cfg swift-proxy
juju deploy --config=swift.cfg --num-units 3 swift-storage
juju add-relation swift-proxy swift-storage
juju deploy --config=swift.cfg keystone
juju add-relation swift-proxy keystone
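
For what it’s worth, the same deployment could probably be expressed as a juju-deployer bundle along these lines (untested, and the charm store paths are guesses):

swift-dev:
    series: precise
    services:
        swift-proxy:
            charm: cs:precise/swift-proxy
            options:
                zone-assignment: auto
                replicas: 3
                auth-type: keystone
                openstack-origin: cloud:precise-havana/updates
        swift-storage:
            charm: cs:precise/swift-storage
            num_units: 3
            options:
                zone: 1
                block-device: /etc/swift/storagedev1.img|2G
                openstack-origin: cloud:precise-havana/updates
        keystone:
            charm: cs:precise/keystone
            options:
                admin-token: somebigtoken
                openstack-origin: cloud:precise-havana/updates
    relations:
        - [swift-proxy, swift-storage]
        - [swift-proxy, keystone]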

Once everything is up and running, create a tenant and a user, with the user having admin rights for the tenant (using your keystone unit’s IP address for keystone-ip). Note that below I’m using the names of the tenant, user and role, which works with keystone 0.3.2, but apparently earlier versions require you to use the uuids instead (check with `keystone help user-role-add`).

$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken tenant-create --name mytenant
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-create --name myuser --tenant mytenant --pass userpassword
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-role-add --tenant mytenant --user myuser --role Admin

And finally, use our new admin user to create a container for use in our dev environment (specify auth version 2):

$ export OS_REGION_NAME=RegionOne
$ export OS_TENANT_NAME=mytenant
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=userpassword
$ export OS_AUTH_URL=http://keystone-ip:5000/v2.0/
$ swift -V 2 post mycontainer

If you want the container to be readable without auth:

$ swift -V 2 post mycontainer -r '.r:*'

If you want another keystone user to have write access:

$ swift -V 2 post mycontainer -w mytenant:otheruser

Verify that the container is ready for use:
$ swift -V 2 stat mycontainer

Please let me know if you spot any issues (these notes are from a month or two ago, so I haven’t re-tested them just now).

Written by Michael

January 9, 2014 at 10:54 am

Posted in Uncategorized

Bootstrap your ansible-powered juju charm

After working with InformatiQ to set up a new charm using the ansible support (and ironing out a few issues), it made sense to capture the process…

The README at charm-bootstrap-ansible has the details, but the branch will pull in the required charm-helpers library and run the tests, leaving you ready to deploy and explore.

Hopefully I can get this into the charm-create tool eventually.

Written by Michael

November 20, 2013 at 5:35 pm

Posted in bzr, juju

Juju + Ansible = simpler charms

I’ve been working on some more support for ansible in the juju charm-helpers package recently [1], which has effectively transformed my juju charm’s hooks.py to something like:

# Create the hooks helper, passing a list of hooks which will be
# handled by default by running all sections of the playbook
# tagged with the hook name.
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbooks/site.yaml',
    default_hooks=['start', 'stop', 'config-changed',
                   'solr-relation-changed'])

@hooks.hook()
def install():
    charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True)

And that’s it.

If I need something done outside of ansible, like in the install hook above, I can write a simple hook with the non-ansible setup (in this case, installing ansible), but the decorator will still ensure all the sections of the playbook tagged by the hook-name (in this case, ‘install’) are applied once the custom hook function finishes. All the other hooks (start, stop, config-changed and solr-relation-changed) are registered so that ansible will run the tagged sections automatically on those hooks.
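
To illustrate, the corresponding sections of playbooks/site.yaml just need to be tagged with the hook names – something like this, where the package, template and service names are made up for the example:

- hosts: localhost
  tasks:
    - name: Install the packages the service needs.
      apt: pkg=solr-jetty
      tags:
        - install

    - name: Render the service configuration.
      template: src=service.conf.j2 dest=/etc/myservice.conf
      tags:
        - config-changed
        - solr-relation-changed
      notify:
        - Restart the service

  handlers:
    - name: Restart the service
      service: name=myservice state=restarted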

Why am I excited about this? Because it means that practically everything related to ensuring the state of the machine is now handled by ansible’s yaml declarations (and I trust those to do what I declare). Of course those playbooks could themselves get quite large and hard to maintain, but ansible has plenty of ways to break up declarations into includes and roles.

It also means that I need to write and maintain fewer unit tests – in the above example I need to ensure that ansible is installed when the install() hook is called, but that’s about it. I no longer need to unit-test the code which creates directories and users, ensures permissions etc., or even calls out to the relevant charm-helper functions, as it’s all instead declared as part of the machine state. That said, I’m still just as dependent on integration testing to ensure the started state of the machine is what I need.

I’m pretty sure that ansible + juju has even more possibilities, such as creating extensible charms with plugins (using roles) rather than forcing too much into the charm’s config.yaml, and other benefits… looking forward to trying it out!

[1] The merge proposal still needs to be reviewed, possibly updated and landed :)

Written by Michael

November 8, 2013 at 11:42 am

Posted in Uncategorized

Easier juju charms with Python helpers

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things that I struggled with was testing the hook code – specifically the install hook code where the machine state is set up (ie. packages installed, directories created with correct permissions, config files set up etc.). Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code'”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (ie. that os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-set events etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and the template rendering output, but I haven’t yet.

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way that I found for that was to install the branch into mycharm/lib so that it’s included both in dev and when you deploy (with a small snippet in your hooks.py).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target (a rough sketch follows this list) – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works, if it fails, iterate from 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need.
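
Here’s roughly what that link-test-juju-repo target looks like – a from-memory sketch with placeholder paths and charm name rather than the exact target:

# Symlink this charm into a local juju repository so it can be deployed
# with `juju deploy --repository=$(JUJU_REPO) local:precise/mycharm`.
# JUJU_REPO, the series and the charm name are placeholders.
JUJU_REPO ?= $(HOME)/charms

link-test-juju-repo:
	mkdir -p $(JUJU_REPO)/precise
	ln -sfn $(CURDIR) $(JUJU_REPO)/precise/mycharm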

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I then set up a config using juju-deployer to create a full stack with an Apache front-end, a cache load balancer for multiple squid caches, a load balancer in front of potentially multiple instances of my charm’s wsgi app, and then a back-end load balancer between my app and the (multiple) solr backends… and it just works.

Written by Michael

June 24, 2013 at 2:42 pm

Posted in juju, python, testing

Testing django migrations

A number of times over the past few years I’ve needed to create some quite complex migrations (both schema and data) in a few of the Django apps that I help out with at Canonical. And like any TDD fanboy, I cry at the thought of deploying code that I’ve only tested by running it a few times with my own sample data (or writing code without first writing failing tests demonstrating the expected outcome).

This migration test case helper has enabled me to develop migrations test first:

from django.core.management import call_command
from django.test import TransactionTestCase
# South's Migrations loader (import path assumed).
from south.migration.base import Migrations


class MigrationTestCase(TransactionTestCase):
    """A Test case for testing migrations."""

    # These must be defined by subclasses.
    start_migration = None
    dest_migration = None
    django_application = None

    def setUp(self):
        super(MigrationTestCase, self).setUp()
        migrations = Migrations(self.django_application)
        self.start_orm = migrations[self.start_migration].orm()
        self.dest_orm = migrations[self.dest_migration].orm()

        # Ensure the migration history is up-to-date with a fake migration.
        # The other option would be to use the south setting for these tests
        # so that the migrations are used to setup the test db.
        call_command('migrate', self.django_application, fake=True,
                     verbosity=0)
        # Then migrate back to the start migration.
        call_command('migrate', self.django_application, self.start_migration,
                     verbosity=0)

    def tearDown(self):
        # Leave the db in the final state so that the test runner doesn't
        # error when truncating the database.
        call_command('migrate', self.django_application, verbosity=0)

    def migrate_to_dest(self):
        call_command('migrate', self.django_application, self.dest_migration,
                     verbosity=0)
 

It’s not perfect – schema tests in particular end up being quite complicated, as you need to ensure you’re working with the correct orm model when creating your test data, and you can’t use the normal factories. But it does enable you to write migration tests like:

class MyMigrationTestCase(MigrationTestCase):

    start_migration = '0022_previous_migration'
    dest_migration = '0024_data_migration_after_0023_which_would_be_schema_changes'
    django_application = 'myapp'

    def test_schema_and_data_updated(self):
        # Test setup code

        self.migrate_to_dest()

        # Assertions

which keeps me happy. When I wrote this, I couldn’t find any other suggestions out there for testing migrations. A quick search now turns up one idea from André (data migrations only), but nothing else substantial. Let me know if you’ve seen something similar or a way to improve testing of migrations.
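
To make the point about the frozen orm concrete, the setup inside such a test looks something like this (the app, migrations, model and field names are invented for the example):

class AddStatusFieldMigrationTestCase(MigrationTestCase):
    """Illustrative only - the migrations and model here are made up."""

    start_migration = '0022_previous_migration'
    dest_migration = '0023_add_status_field'
    django_application = 'myapp'

    def test_existing_rows_get_default_status(self):
        # Create pre-migration data via the frozen orm from the start
        # migration, not via the current models module or factories.
        Book = self.start_orm['myapp.Book']
        book = Book.objects.create(title='An example')

        self.migrate_to_dest()

        # Check the data through the destination orm, which knows about
        # the new field.
        migrated = self.dest_orm['myapp.Book'].objects.get(pk=book.pk)
        self.assertEqual(migrated.status, 'draft')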

Written by Michael

March 1, 2013 at 6:18 pm

Posted in django, python, testing
