Posts Tagged ‘juju’
Just documenting for later (and for a friend and colleague who needs it now) – my notes for setting up OpenStack Swift using juju. I need to go back and check whether keystone is required – I initially had issues with the test auth, so switched to keystone.
First, create the config file to use keystone, with local block devices on the swift storage units (i.e. no need to mount storage), using OpenStack Havana:
cat >swift.cfg <<END
swift-proxy:
  zone-assignment: auto
  replicas: 3
  auth-type: keystone
  openstack-origin: cloud:precise-havana/updates
swift-storage:
  zone: 1
  block-device: /etc/swift/storagedev1.img|2G
  openstack-origin: cloud:precise-havana/updates
keystone:
  admin-token: somebigtoken
  openstack-origin: cloud:precise-havana/updates
END
Deploy it (this could probably be replaced with a charm bundle?):
juju deploy --config=swift.cfg swift-proxy
juju deploy --config=swift.cfg --num-units 3 swift-storage
juju add-relation swift-proxy swift-storage
juju deploy --config=swift.cfg keystone
juju add-relation swift-proxy keystone
Once everything is up and running, create a tenant and user, with the user having admin rights for the tenant (using your keystone unit’s IP address for keystone-ip). Note: below I’m using the names of the tenant, user and role, which works with keystone 0.3.2, but apparently earlier versions require you to use the UUIDs instead. Check with `keystone help user-role-add`.
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken tenant-create --name mytenant
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-create --name myuser --tenant mytenant --pass userpassword
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-role-add --tenant mytenant --user myuser --role Admin
And finally, use our new admin user to create a container for use in our dev environment (specify auth version 2):
$ export OS_REGION_NAME=RegionOne
$ export OS_TENANT_NAME=mytenant
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=userpassword
$ export OS_AUTH_URL=http://keystone-ip:5000/v2.0/
$ swift -V 2 post mycontainer
If you want the container to be readable without auth:
$ swift -V 2 post mycontainer -r '.r:*'
If you want another keystone user to have write access:
$ swift -V 2 post mycontainer -w mytenant:otheruser
Verify that the container is ready for use:
$ swift -V 2 stat mycontainer
Please let me know if you spot any issues (these notes are from a month or two ago, so I haven’t re-tested them just now).
# Create the hooks helper, passing a list of hooks which will be
# handled by default by running all sections of the playbook
# tagged with the hook name.
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbooks/site.yaml',
    default_hooks=['start', 'stop', 'config-changed',
                   'solr-relation-changed'])


@hooks.hook()
def install():
    charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True)
And that’s it.
If I need something done outside of ansible, like in the install hook above, I can write a simple hook with the non-ansible setup (in this case, installing ansible), but the decorator will still ensure all the sections of the playbook tagged by the hook-name (in this case, ‘install’) are applied once the custom hook function finishes. All the other hooks (start, stop, config-changed and solr-relation-changed) are registered so that ansible will run the tagged sections automatically on those hooks.
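To make the tagging concrete, here’s a hypothetical sketch of what a tagged playbook like playbooks/site.yaml could look like (the task names, module arguments and file paths are illustrative, not from the real charm): each task carries the hook names during which it should run.

```shell
# Hypothetical sketch of a tagged playbook: each task lists the hook
# name(s) during which it runs. All names/paths here are illustrative.
cat > site.yaml <<'END'
- hosts: localhost
  tasks:
    - name: Create the service user
      user: name=solr state=present
      tags:
        - install

    - name: Write the solr config from the charm's config options
      template: src=templates/solr.conf dest=/etc/solr/solr.conf
      tags:
        - config-changed
        - solr-relation-changed
END
```

With a layout like this, the config-changed hook would re-run only the second task, while install runs only the first.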
Why am I excited about this? Because it means that practically everything related to ensuring the state of the machine is now handled by ansible’s yaml declarations (and I trust those to do what I declare). Of course those playbooks could themselves get quite large and hard to maintain, but ansible has plenty of ways to break up declarations into includes and roles.
It also means that I need to write and maintain fewer unit-tests – in the above example I need to ensure that when the install() hook is called that ansible is installed, but that’s about it. I no longer need to unit-test the code which creates directories and users, ensures permissions etc., or even calls out to relevant charm-helper functions, as it’s all instead declared as part of the machine state. That said, I’m still just as dependent on integration testing to ensure the started state of the machine is what I need.
I’m pretty sure that ansible + juju has even more possibilities for being able to create extensible charms with plugins (using roles), rather than forcing too much into the charms config.yaml, and other benefits… looking forward to trying it out!
The merge proposal still needs to be reviewed, possibly updated and landed 🙂
I’ve been playing with juju for a few months now in different contexts and I’ve really enjoyed the ease with which it allows me to think about services rather than resources.
More recently I’ve started thinking about best practices for deploying services using juju, while still using puppet to set up individual units. As a simple experiment, I wrote a juju charm to deploy an irssi service to dig around. Here’s what I’ve found so far. The first point is kind of obvious, but worth mentioning:
Install hooks can be trivial:
#!/bin/bash
sudo apt-get -y install puppet
juju-log "Initialising machine state."
puppet apply $PWD/hooks/initial_state.pp
Normally the corresponding manifest (see initial_state.pp) would be a little more complicated, but in this example it’s hardly worth mentioning.
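For the record, a minimal initial-state manifest along these lines might do little more than ensure the needed packages are present. A hypothetical sketch (the package list is illustrative, not the charm’s actual initial_state.pp):

```shell
# Hypothetical sketch of a trivial initial-state manifest; the package
# names are illustrative.
cat > initial_state.pp <<'END'
package { ['irssi', 'byobu']:
  ensure => installed,
}
END
```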
Juju config changes can utilise Puppet’s Facter infrastructure:
This enables juju config options to be passed through to puppet, so that config-changed hooks can be equally simple:
#!/bin/bash
juju-log "Getting config options"
username=`config-get username`
public_key=`config-get public_key`

juju-log "Configuring irssi for user"
# We specify custom facts so that they're accessible in the manifest.
FACTER_username=$username FACTER_public_key=$public_key \
    puppet apply $PWD/hooks/configured_state.pp
In this example, it is the configured state manifest that is more interesting (see configured_state.pp). It adds the user to the system, sets up byobu with an irssi window ready to go, and adds the given public ssh key enabling the user to login.
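As a rough idea of that state, here’s a hypothetical sketch of the kind of resources such a manifest could declare, consuming the custom facts set by the hook (the resource details are illustrative, not the charm’s actual configured_state.pp):

```shell
# Hypothetical sketch of a configured-state manifest consuming the
# FACTER_username and FACTER_public_key facts set by the hook; the
# resource details are illustrative.
cat > configured_state.pp <<'END'
user { $username:
  ensure     => present,
  managehome => true,
}

ssh_authorized_key { "${username}_key":
  ensure  => present,
  user    => $username,
  type    => 'ssh-rsa',
  key     => $public_key,
  require => User[$username],
}
END
```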
The same would go for other juju hooks (db-relation-changed etc.), which is quite neat – getting the best of both worlds: the charm user can still think in terms of deploying services, while the charm author can use puppet’s declarative syntax to define the machine states.
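For example, a db-relation-changed hook following the same pattern could pass relation settings through as custom facts (the setting names and manifest path here are hypothetical):

```shell
# Hypothetical sketch of a db-relation-changed hook following the same
# pattern: relation settings become custom facts for the manifest.
# Setting names and the manifest path are illustrative.
cat > db-relation-changed <<'END'
#!/bin/bash
juju-log "Getting relation settings"
db_host=`relation-get host`
db_password=`relation-get password`

# Expose the relation settings to puppet as custom facts.
FACTER_db_host=$db_host FACTER_db_password=$db_password \
    puppet apply $PWD/hooks/db_state.pp
END
chmod +x db-relation-changed
```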
Next up: I hope to experiment with an optional puppet master for a real project (something simple like the Ubuntu App directory), so that
- a project can be deployed without the (probably private) puppet-master to create a close-to-production environment, while
- configuring a puppet-master in the juju config would enable production deploys (or deploys of exact replicas of production to a separate environment for testing).
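One way this could surface in the charm is an optional setting in its config.yaml. A hypothetical sketch (the option name and wording are mine, not from an existing charm):

```shell
# Hypothetical sketch of a charm config.yaml exposing an optional
# puppet-master setting; the option name and description are illustrative.
cat > config.yaml <<'END'
options:
  puppet-master:
    type: string
    default: ""
    description: >
      Optional address of a puppet master. If empty, the charm applies
      its bundled manifests standalone; if set, units are configured
      as agents of the given master.
END
```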
If you’re interested in seeing the simple irssi charm, the following two-minute video demos these steps:
# Deploy an irssi service
$ juju deploy --repository=/home/ubuntu/mycharms local:oneiric/irssi

# Configure it so a user can login
$ juju set irssi username=michael public_key=AAAA...

# Login to find irssi already up and running in a byobu window
$ ssh firstname.lastname@example.org
and the code is on Launchpad.
Yes, irssi is not particularly useful as a juju service (as I don’t want multiple units, or to relate it to other services, etc.), but it suited my purposes for a simple experiment that also automates something I can use for working in the cloud.
I’m not a puppet or juju expert, so if you’ve got any comments or improvements, don’t hesitate.