Something-driven development

Software development thoughts around Ubuntu, Python, Golang and other tools

Creating a Mir Snap in the cloud

I tend to do all my work in the cloud (and work physically from a useful but comparatively powerless chromebook running Ubuntu), so creating a Mir snap on my local machine wasn’t an option I was keen to try.

Mainly for my own documentation, here’s how the first attempt at creating a Mir snap (a display server for experimenting with Kodi on Ubuntu Snappy) went:

First, spin up an Ubuntu Wily development instance in a(ny) OpenStack cloud (I’m using a Canonical-internal one):

$ nova boot --flavor cpu2-ram4-disk100-ephemeral20 --image 1461273c-af73-4839-9f64-3df00446322a --key-name ******** wily_dev

Less than a minute later, ssh in and use the existing snappy packaging branch for Mir:

$ sudo add-apt-repository ppa:snappy-dev/tools
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install snappy-tools bzr cmake gcc-multilib ubuntu-snappy-cli mir-demos mir-graphics-drivers-desktop dpkg-dev

$ bzr branch lp:~mir-team/mir/snappy-packaging && cd snappy-packaging && make

The last step uses deb2snap to create the snap package from the installed deb packages, so the resulting (76M) snap is for the amd64 architecture – matching the instance I created:

$ ls -lh mir_snap1_amd64.snap
-rw-rw-r-- 1 ubuntu ubuntu 76M Jun 21 11:20 mir_snap1_amd64.snap

Next up… using either QEMU or an ARM cloud instance (I’m not sure that the latter is available) to create an ARM Mir snap for my Raspberry Pi, and testing it out…

Written by Michael

June 21, 2015 at 1:25 pm

Posted in Uncategorized

A Kodi (XBMC) snap for the Raspberry Pi?

I just received an Orange Matchbox today – a Raspberry Pi 2B with a PiGlow and Ubuntu-engraved casing – to try out Snappy Ubuntu.

As a first project, I want to see if it’s possible to get Kodi (formerly XBMC) packaged as a snap. I’m not yet sure whether it is, but the plan is to install Ubuntu on a separate microSD, set up deb2snap, and see whether I can create a snap (minus certain dependencies). Then, obviously, install the snap on my snappy microSD and profit! (The devel’s in the details, of course.)

I’m not sure that deb2snap will work with Ubuntu Trusty (the default image for the RPi2), as it was recently announced that XMir will no longer be supported for releases earlier than Wily. We’ll see how it goes.

I already found and reported my first issue while testing snappy on a VM last week, which got fixed for today’s stable release. Let the fun begin…

Written by Michael

June 11, 2015 at 9:14 am

Posted in Uncategorized

A walk-through of charming an existing wsgi application

After recently writing the re-usable wsgi-app role for juju charms, I wanted to see how easy it would be to apply this to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing newer click applications).

I’ve tried to capture below the workflow I used to create the charm, including the detailed steps and some mistakes, in case it’s useful to others. If you’re after more of an overview of using reusable ansible roles in your juju charms, check out these slides.

1. Create the new charm from the bootstrap-wsgi charm

First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:

$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/

Then update the charm metadata to reflect the ubuntu-reviews service:

--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 description: |
   <Multi-line description here>
 categories:

I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is set up):

--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
 - hosts: localhost
 
   vars:
-    - app_label: wsgi-app.example.com
+    - app_label: click-reviews.ubuntu.com
 
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: example_wsgi:application
-      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+      wsgi_application: wsgi:app
+      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''

Although when deploying this service for real we’d deploy a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role lets you switch between those two options. So the last step here is to add some Makefile targets to pull in the application code and create the tarball (you can see the details in the git commit for this step).

With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.

Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:

$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log

At this point, juju status shows no issues, but curling the service does (as expected):

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer

Checking the logs (which, oh joy, are already set up and configured with log rotation etc. by the wsgi-app role) shows the expected error:

$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi

2. Adding the wsgi, settings and urls

The current rnr-server code does have a django project setup, but it’s specific to the current environment (it requires a third-party library for settings, and serves the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm. First, the standard (django-generated) manage.py, wsgi.py and a near-default settings.py (which you can see in the actual commit).
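For reference, the generated wsgi.py is just the standard Django boilerplate – a sketch follows, with only the settings module name specific to this project:

import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE",
                      "clickreviewsproject.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()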

An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:

--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: wsgi:app
+      python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+      wsgi_application: clickreviewsproject.wsgi:application
       code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
@@ -18,20 +19,18 @@
 
   tasks:
 
-    # Pretend there are some package dependencies for the example wsgi app.
     - name: Install any required packages for your app.
-      apt: pkg={{ item }} state=latest update_cache=yes
+      apt: pkg={{ item }}
       with_items:
-        - python-django
-        - python-django-celery
+        - python-django=1.5.4-1ubuntu1~ctools0
       tags:
         - install
         - upgrade-charm
 
     - name: Write any custom configuration files
-      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+      copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
       tags:
-        - config-changed
-        # Also any backend relation-changed events, such as databases.
+        - install
+        - upgrade-charm
       notify:
         - Restart wsgi

I can now run juju upgrade-charm and see that the application responds – but with a 500 error highlighting the first (of a few) missing dependencies…

3. Adding missing dependencies

At this point, it’s easiest in my opinion to run debug-hooks:

$ juju debug-hooks ubuntu-reviews/0

and directly test and install missing dependencies, adding them to the playbook as you go:

ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y (and add it to the playbook in your charm editor).
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart

Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:

$ juju upgrade-charm --repository=../.. ubuntu-reviews

to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.

Other times you might want to destroy the environment to redeploy from scratch.

Sometimes you’ll have dependencies which are not available in the distro. For our deployments, we always ensure these are included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated the Makefile target that builds the tarball to use it. But there was another dependency, south, which wasn’t included by that task, so for simplicity here I’m going to install it via pip (which you don’t want to do in reality – you’d update your build process instead).

You can see the extra dependencies in the commit.

4. Customise settings

At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.

So I removed the vanilla settings.py, added a custom settings.py.j2 template, and added an extra django_secret_key option to the charm’s config.yaml:

--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
     current_symlink:
         default: "latest"
         type: string
-        description: |
+        description: >
             The symlink of the code to run. The default of 'latest' will use
             the most recently added build on the instance.  Specifying a
            different label (eg. "r235") will symlink to that directory assuming
             it has been previously added to the instance.
+    django_secret_key:
+        default: "you-really-should-set-this"
+        type: string
+        description: >
+            The secret key that django should use for each instance.

--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.sites',
+    'django_openid_auth',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'clickreviews',
+    'south',
+)
...

Again, full diff in the commit.

With the extra settings now defined and one more dependency added, I’m now able to ping the service successfully:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

But that only works because the input validation is fired before anything tries to use the (non-existent) database…

5. Adding the database relation

I had expected this step to be simple – add the database relation, call syncdb/migrate – but three factors conspired to complicate things:

  1. The reviews application uses a custom postgres “debversion” column type
  2. A migration for the older non-click reviewsapp service requires postgres superuser creds (specifically, reviews/0013_add_debversion_field.py)
  3. The clickreviews app, which I wanted to deploy on its own, is currently dependent on the older non-click reviews app, so it’s necessary to have both enabled and the tables for both synced

So, to work around the first point, I split the ‘deploy’ task in two, so that I can install and set up the custom debversion field on the postgres units before running syncdb:

--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
        @juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
        @juju deploy gunicorn
        @juju deploy nrpe-external-master
+       @juju deploy postgresql
        @juju add-relation ubuntu-reviews gunicorn
        @juju add-relation ubuntu-reviews nrpe-external-master
+       @juju add-relation ubuntu-reviews postgresql:db
+       @juju set postgresql extra-packages=postgresql-9.1-debversion
+       @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+       @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+       @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+       @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
        @echo See the README for explorations after deploying.

I’ve separated the syncdb out into a manual step (rather than running it on a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and might be safe, but I don’t like it conceptually).
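The hooks/syncdb hook referenced in the Makefile above can be a tiny script that applies just the tagged tasks of the playbook. A minimal sketch, assuming the charmhelpers ansible support (the actual hook contents aren’t shown in this post):

#!/usr/bin/env python
# hooks/syncdb: apply only the playbook tasks tagged 'syncdb'.
import charmhelpers.contrib.ansible

charmhelpers.contrib.ansible.apply_playbook(
    'playbook.yml', tags=['syncdb'])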

--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
 
   vars:
     - app_label: click-reviews.ubuntu.com
+    - code_archive: "{{ build_label }}/rnr-server.tgz"
 
   roles:
     - role: wsgi-app
       listen_port: 8080
       python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
       wsgi_application: clickreviewsproject.wsgi:application
-      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
     - role: nrpe-external-master
@@ -25,6 +25,7 @@
         - python-django=1.5.4-1ubuntu1~ctools0
         - python-tz
         - python-pip
+        - python-psycopg2
       tags:
         - install
         - upgrade-charm
@@ -46,13 +47,24 @@
       tags:
         - install
         - upgrade-charm
+        - db-relation-changed
       notify:
         - Restart wsgi
+
     # XXX Normally our deployment build would ensure these are all available in the tarball.
     - name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
       pip: name={{ item.key }} version={{ item.value }}
       with_dict:
         south: 0.7.6
+        django-openid-auth: 0.2
       tags:
         - install
         - upgrade-charm
+
+    - name: sync the database
+      django_manage: >
+        command="syncdb --noinput"
+        app_path="{{ application_dir }}/django-project"
+        pythonpath="{{ code_dir }}/current/src"
+      tags:
+        - syncdb

To work around the second and third points above, I removed the ‘south’ django application so that syncdb sets up the current tables without using migrations (while leaving south on the pythonpath, as some code still imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.

By far the most complicated part of this change, though, was the database settings:

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
 DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+  {% for dbrelation in relations['db'] %}
+  {% if loop.first %}
+    {% set services=relations['db'][dbrelation] %}
+    {% for service in services %}
+      {% if service.startswith('postgres') %}
+      {% set dbdetails = services[service] %}
+        {% if 'database' in dbdetails %}
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ dbdetails["database"] }}',
+        'HOST': '{{ dbdetails["host"] }}',
+        'USER': '{{ dbdetails["user"] }}',
+        'PASSWORD': '{{ dbdetails["password"] }}',
+    }
+        {% endif %}
+      {% endif %}
+    {% endfor %}
+  {% endif %}
+  {% endfor %}
+}
+{% endif %}
+
 TIME_ZONE = 'UTC'
 ROOT_URLCONF = 'clickreviewsproject.urls'
 WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
     'django_openid_auth',
     'django.contrib.messages',
     'django.contrib.staticfiles',
+    'reviewsapp',
     'clickreviews',
-    'south',
 )
 AUTHENTICATION_BACKENDS = (
     'django_openid_auth.auth.OpenIDBackend',

Why are the database settings so complicated? I’m currently rendering all the settings on any config change, as well as on database relation changes, which means I need to use the global juju state for the relation data – and as far as juju knows, there could be multiple database relations for this stack. The current charm-helpers gives you this state as a dict (above it’s ‘relations’). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of it (there are two keys for the relation: one for postgres, the other for the connecting service). Yuk (see below for a better solution).
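To make that concrete, the relations dict which the template above navigates looks roughly like this (illustrative names and values only, not actual output):

relations = {
    'db': {
        'db:1': {  # One entry per database relation.
            # The postgres side of the relation carries the
            # connection details the template is after...
            'postgresql/0': {
                'database': 'ubuntu-reviews',
                'host': '10.0.3.4',
                'user': 'generated-user',
                'password': 'generated-password',
            },
            # ...while the other key is the connecting service's
            # side of the same relation.
            'ubuntu-reviews/0': {
                'private-address': '10.0.3.7',
            },
        },
    },
}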

Full details are in the commit. With those changes I can now use the service:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

6. Cleanup: Simplify database settings

In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. So I’ll clean this up by moving the database settings to a separate file which is written only on the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service has only one database relation).
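For comparison with the global relations dict above, current_relation holds just the remote settings of the relation that triggered the hook – roughly this (again, illustrative values):

# Approximate contents of current_relation during db-relation-changed;
# the keys match those used in the new template below.
current_relation = {
    'database': 'ubuntu-reviews',
    'host': '10.0.3.4',
    'user': 'generated-user',
    'password': 'generated-password',
}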

This simplifies things quite a bit:

--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
         owner: "{{ wsgi_user }}"
         group: "{{ wsgi_group }}"
       tags:
-        - install
-        - upgrade-charm
+        - config-changed
+      notify:
+        - Restart wsgi
+
+    - name: Write the database settings
+      template:
+        src: "templates/database_settings.py.j2"
+        dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+        owner: "{{ wsgi_user }}"
+        group: "{{ wsgi_group }}"
+      tags:
         - db-relation-changed
       notify:
         - Restart wsgi
+      when: "'database' in current_relation"
 
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ current_relation["database"] }}',
+        'HOST': '{{ current_relation["host"] }}',
+        'USER': '{{ current_relation["user"] }}',
+        'PASSWORD': '{{ current_relation["password"] }}',
+    }
+}

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
 
-{% if 'db' in relations %}
-DATABASES = {
-  {% for dbrelation in relations['db'] %}
-  {% if loop.first %}
-    {% set services=relations['db'][dbrelation] %}
-    {% for service in services %}
-      {% if service.startswith('postgres') %}
-      {% set dbdetails = services[service] %}
-        {% if 'database' in dbdetails %}
-    'default': {
-        'ENGINE': 'django.db.backends.postgresql_psycopg2',
-        'NAME': '{{ dbdetails["database"] }}',
-        'HOST': '{{ dbdetails["host"] }}',
-        'USER': '{{ dbdetails["user"] }}',
-        'PASSWORD': '{{ dbdetails["password"] }}',
-    }
-        {% endif %}
-      {% endif %}
-    {% endfor %}
-  {% endif %}
-  {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
 
 TIME_ZONE = 'UTC'

With those changes, the deploy still works as expected:

$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

Summary

Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:

  1. Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
  2. Handling the application settings
  3. Handling the database relation and custom setup

The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a juju config change (i.e. `juju set ubuntu-reviews build_label=r156`), pulling from a configured URL. It provides the complete directory layout, user and group setup, log rotation and so on. All of that comes for free.

Things I’d do if I were doing this charm for a real deployment now:

  1. Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and can’t change the schema, etc.). This functionality could be factored out into another reusable role.
  2. Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
  3. Ensure all dependencies are either available in the distro or included in the tarball.

Written by Michael

June 25, 2014 at 11:54 am

Posted in cloud automation, juju

Reusable ansible roles for Juju charms

I’ve been writing Juju charms to automate the deployment of a few different services at work, all of which happen to be wsgi applications… some Django apps, others using other frameworks. I’ve been using the ansible support for writing charms, which makes charm authoring simpler, but even then essentially each wsgi service charm needs to do the same things:

  1. Setup specific users
  2. Install the built code into a specific location
  3. Install any package dependencies
  4. Relate to a backend (could be postgresql, could be elasticsearch)
  5. Render the settings
  6. Setup a wsgi (gunicorn) service (via a subordinate charm)
  7. Setup log rotation
  8. Support updating to a new codebase without upgrading the charm
  9. Support rolling updates to a new codebase

Only three of the above really change from service to service: which package dependencies are required, the rendering of the settings, and the backend relation (which usually just causes the settings to be re-rendered anyway).

After trying (and failing) to create a nice reusable wsgi-app charm, I’ve switched to utilise ansible’s built-in support for reusable roles and created a charm-bootstrap-wsgi repo on github, which demonstrates all of the above out of the box (the README has an example rolling upgrade). The charm’s playbook is very simple, just reusing the wsgi-app role:

roles:
    - role: wsgi-app
      listen_port: 8080
      wsgi_application: example_wsgi:application
      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
      when: build_label != ''

and only needs to do two things itself:

tasks:
    - name: Install any required packages for your app.
      apt: pkg={{ item }} state=latest update_cache=yes
      with_items:
        - python-django
        - python-django-celery
      tags:
        - install
        - upgrade-charm

    - name: Write any custom configuration files
      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
      tags:
        - config-changed
        # Also any backend relation-changed hooks for databases etc.
      notify:
        - Restart wsgi

Everything else is provided by the reusable wsgi-app role. For the moment I’ve got the source of the reusable roles in a separate github repo, but I’d like to get these into the charm-helpers project itself eventually. Of course there will be cases where a service charm needs quite a bit more custom functionality, but if we can encapsulate and reuse as much as possible, it’s a win for all of us.

If you’re interested in chatting about easier charms with ansible (or any issues you can see), we’ll be getting together for a hangout tomorrow (Jun 11, 2014 at 1400 UTC).

Written by Michael

June 10, 2014 at 6:22 pm

Posted in ansible, juju, python

Deploying swift with juju for development

Just documenting for later (and for a friend and colleague who needs it now) – my notes for setting up OpenStack Swift using juju. I need to go back and check whether keystone is required – I initially had an issue with the test auth, so I switched to keystone.

First, create the config file to use keystone, local block devices on the swift storage units (i.e. no need to mount storage), and OpenStack Havana:

cat >swift.cfg <<END
swift-proxy:
    zone-assignment: auto
    replicas: 3
    auth-type: keystone
    openstack-origin: cloud:precise-havana/updates
swift-storage:
    zone: 1
    block-device: /etc/swift/storagedev1.img|2G
    openstack-origin: cloud:precise-havana/updates
keystone:
    admin-token: somebigtoken
    openstack-origin: cloud:precise-havana/updates
END

Deploy it (this could probably be replaced with a charm bundle?):

juju deploy --config=swift.cfg swift-proxy
juju deploy --config=swift.cfg --num-units 3 swift-storage
juju add-relation swift-proxy swift-storage
juju deploy --config=swift.cfg keystone
juju add-relation swift-proxy keystone

Once everything is up and running, create a tenant and a user, with the user having admin rights for the tenant (using your keystone unit’s IP address for keystone-ip). Note: below I’m using the names of the tenant, user and role, which works with keystone 0.3.2, but apparently earlier versions require you to use the UUIDs instead (check with `keystone help user-role-add`).

$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken tenant-create --name mytenant
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-create --name myuser --tenant mytenant --pass userpassword
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-role-add --tenant mytenant --user myuser --role Admin

And finally, use our new admin user to create a container for use in our dev environment (specify auth version 2):

$ export OS_REGION_NAME=RegionOne
$ export OS_TENANT_NAME=mytenant
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=userpassword
$ export OS_AUTH_URL=http://keystone-ip:5000/v2.0/
$ swift -V 2 post mycontainer

If you want the container to be readable without auth:

$ swift -V 2 post mycontainer -r '.r:*'

If you want another keystone user to have write access:

$ swift -V 2 post mycontainer -w mytenant:otheruser

Verify that the container is ready for use:

$ swift -V 2 stat mycontainer

Please let me know if you spot any issues (these notes are from a month or two ago, and I haven’t re-tested them just now).

Written by Michael

January 9, 2014 at 10:54 am

Posted in Uncategorized

Bootstrap your ansible-powered juju charm

After working with InformatiQ to set up a new charm using the ansible support (and ironing out a few issues), it made sense to capture the process…

The README at charm-bootstrap-ansible has the details, but the branch will pull in the required charm-helpers library and run the tests, leaving you ready to deploy and explore.

Hopefully I can get this into the charm-create tool eventually.

Written by Michael

November 20, 2013 at 5:35 pm

Posted in bzr, juju

Juju + Ansible = simpler charms

I’ve been working on some more support for ansible in the juju charm-helpers package recently [1], which has effectively transformed my juju charm’s hooks.py to something like:

# Create the hooks helper, passing a list of hooks which will be
# handled by default by running all sections of the playbook
# tagged with the hook name.
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbooks/site.yaml',
    default_hooks=['start', 'stop', 'config-changed',
                   'solr-relation-changed'])

@hooks.hook()
def install():
    charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True)

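The only other piece of hooks.py is the standard charm-helpers dispatch at the bottom of the file:

import sys

if __name__ == "__main__":
    hooks.execute(sys.argv)
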
And that’s it.

If I need something done outside of ansible, like in the install hook above, I can write a simple hook with the non-ansible setup (in this case, installing ansible), but the decorator will still ensure all the sections of the playbook tagged by the hook-name (in this case, ‘install’) are applied once the custom hook function finishes. All the other hooks (start, stop, config-changed and solr-relation-changed) are registered so that ansible will run the tagged sections automatically on those hooks.

Why am I excited about this? Because it means that practically everything related to ensuring the state of the machine is now handled by ansible’s YAML declarations (and I trust those to do what I declare). Of course those playbooks could themselves get quite large and hard to maintain, but ansible has plenty of ways to break up declarations into includes and roles.

It also means that I need to write and maintain fewer unit tests – in the above example I need to ensure that when the install() hook is called, ansible is installed, but that’s about it. I no longer need to unit-test the code which creates directories and users, ensures permissions, etc., or even calls out to the relevant charm-helper functions, as it’s all instead declared as part of the machine state. That said, I’m still just as dependent on integration testing to ensure the started state of the machine is what I need.
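For example, the remaining unit test for the hooks.py above could be as small as this – a sketch assuming the mock library, with names following the snippet above:

import unittest

import mock

import hooks


class InstallHookTestCase(unittest.TestCase):

    def test_install_installs_ansible_support(self):
        # Patch out the real installation and check it was requested.
        with mock.patch.object(hooks.charmhelpers.contrib.ansible,
                               'install_ansible_support') as mock_install:
            hooks.install()

        mock_install.assert_called_once_with(from_ppa=True)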

I’m pretty sure that ansible + juju opens up even more possibilities – extensible charms with plugins (using roles) rather than forcing too much into the charm’s config.yaml, among other benefits… looking forward to trying it out!

[1] The merge proposal still needs to be reviewed, possibly updated and landed :)

Written by Michael

November 8, 2013 at 11:42 am

Posted in Uncategorized
