Archive for the ‘django’ Category
A number of times over the past few years I’ve needed to create some quite complex migrations (both schema and data) in a few of the Django apps that I help out with at Canonical. And like any TDD fanboy, I cry at the thought of deploying code that I’ve only tested by running it a few times with my own sample data (or of writing code without first writing failing tests demonstrating the expected outcome).
This migration test case helper has enabled me to develop migrations test first:
from django.core.management import call_command
from django.test import TransactionTestCase
from south.migration import Migrations


class MigrationTestCase(TransactionTestCase):
    """A test case for testing migrations."""

    # These must be defined by subclasses.
    start_migration = None
    dest_migration = None
    django_application = None

    def setUp(self):
        super(MigrationTestCase, self).setUp()
        migrations = Migrations(self.django_application)
        self.start_orm = migrations[self.start_migration].orm()
        self.dest_orm = migrations[self.dest_migration].orm()

        # Ensure the migration history is up-to-date with a fake migration.
        # The other option would be to use the south setting for these tests
        # so that the migrations are used to set up the test db.
        call_command('migrate', self.django_application, fake=True,
                     verbosity=0)
        # Then migrate back to the start migration.
        call_command('migrate', self.django_application, self.start_migration,
                     verbosity=0)

    def tearDown(self):
        # Leave the db in the final state so that the test runner doesn't
        # error when truncating the database.
        call_command('migrate', self.django_application, verbosity=0)

    def migrate_to_dest(self):
        call_command('migrate', self.django_application, self.dest_migration,
                     verbosity=0)
It's not perfect - schema tests in particular end up being quite complicated, as you need to ensure you're working with the correct ORM model when creating your test data - and you can't use the normal factories to create that test data. But it does enable you to write migration tests like:
class MyMigrationTestCase(MigrationTestCase):

    start_migration = '0022_previous_migration'
    dest_migration = '0024_data_migration_after_0023_which_would_be_schema_changes'
    django_application = 'myapp'

    def test_schema_and_data_updated(self):
        # Test setup code

        self.migrate_to_dest()

        # Assertions
which keeps me happy. When I wrote this, I couldn't find any other suggestions out there for testing migrations. A quick search now turns up one idea from André (data migrations only), but nothing else substantial. Let me know if you've seen something similar, or a way to improve testing of migrations.
After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django apps using Apache+mod_wsgi together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:
- Enables the user to specify a branch of a Python package containing the Django app/project to deploy. This Python package will be `python setup.py install`’d on the instance,
- Enables you to configure extra debian packages to be installed first so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi etc.). Here’s the example charm config used for apps.ubuntu.com,
- Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
- When it receives a database relation change, it creates some local settings overriding the database settings of your branch, syncs and migrates the database (a noop if it’s the second unit) and restarts apache (see the database_settings.pp manifest for more details).
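The database-settings override in that last step can be pictured as a small function mapping the relation-provided values onto Django's DATABASES setting. This is an illustrative sketch, not the charm's actual hook code, and the relation keys (`database`, `user`, etc.) are assumptions about what the postgresql charm provides:

```python
# Sketch: map juju db-relation settings onto Django's DATABASES dict.
# The relation key names here are illustrative assumptions.

def build_database_settings(relation_data):
    """Build a DATABASES dict that overrides the branch's own settings."""
    return {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': relation_data['database'],
            'USER': relation_data['user'],
            'PASSWORD': relation_data['password'],
            'HOST': relation_data['host'],
            # Fall back to the default postgres port if none is provided.
            'PORT': relation_data.get('port', '5432'),
        }
    }
```

In the charm itself the equivalent work is done by the database_settings.pp puppet manifest, which writes the override into a local settings file before restarting apache.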
Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:
$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi
Things that I think need to be improved or I'm uncertain about:
- `gem install puppet-module` is included in the install hook (a 3rd way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types and puppet-module-tool seems to be the way to install 3rd-party puppet modules. Using this resource-type enables a simple initial_state.pp manifest. Of course, it'd be great to have 'necessary' tools like that in the archive instead.
- The initial_state.pp manifest pulls the django app package to /home/ubuntu/django-app-branch and then pip installs it on the system. Requiring the app to be a valid Python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied), while still allowing the user to go one step further if they like and provide a Debian package in a branch instead of a Python package (which I assume we would ultimately do for production deploys?)
- Currently it's just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (ie. an instance running a stripped down apache2 or lighttpd). Also, I would have liked to have used an 'official' or 'blessed' puppet apache module to benefit from someone else's experience, but I couldn't see one that stood out as such.
- Currently the charm assumes that your project contains the configuration info (ie. a settings.py, urls.py etc.), of which the database settings can be simply overridden for deploy. There should be an additional option to specify a configuration branch (and it shouldn't assume that you're using django-configglue), as well as other options like django_debug, static_url etc.
- The charm should also export an interface (?) that can be used by a load balancer charm.
What is the dream setup for developing and deploying Django apps? I’m looking for a solution that I can use consistently to deploy apps to servers where I may or may not have the ability to install system packages, or where I might need my app temporarily to use a newer version of a system-installed package while giving other apps running on the same server breathing space to update (think: updating a system-installed Django package on a server running four independent apps).
Specifically, the goals I have for this scenario are:
- It should be easy to use for both development and deployment (using standard tools and locations so developers don’t need to learn the environment),
- Updating any virtualenv environment should be automatic, but transparent (ie. if the pip requirements.txt changes, the next time I run the tests, the dev server or the deployed server, it’ll automatically ensure the virtualenv is correct),
- I shouldn’t have to wait unnecessarily for virtualenvs to be created (ie. if I make a change to the requirements to try a new version of a package, and then change it back, I don’t want to re-create the original virtualenv). Similarly, if I revert a deployment to a previous version, the previous virtualenv should still be available.
- For deployment, the virtualenv shouldn’t unnecessarily replace system python packages, but allow this as an option (ie. not a –no-site-packages virtualenv).
There are a lot of virtualenv/fabric posts out there for both development and deployment, and using a SHA of the requirements.txt seems an obvious way to go. What I ended up with for my project was this develop and deploy with virtualenv snippet, which so far is working quite well (although I’ve yet to try a deploy where I override system packages). If the deployed version is using virtualenv exclusively, the requirements.txt file can be shared, but otherwise it would just be a matter of including the requirements.txt for the deploy with the other configuration data (settings.py etc.).
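The core of the requirements-SHA idea can be sketched in a few lines: key each virtualenv directory on the hash of requirements.txt, so that reverting the file (or rolling back a deployment) finds the matching environment already on disk rather than rebuilding it. The function names and `.virtualenvs` layout below are illustrative, not taken from the snippet above:

```python
# Sketch of the requirements-hash approach: one virtualenv per distinct
# requirements.txt, so unchanged or reverted requirements reuse an
# existing environment. Names and paths here are made up for illustration.
import hashlib
import os


def virtualenv_path_for(requirements_path, envs_dir='.virtualenvs'):
    """Return the virtualenv directory keyed on the requirements SHA."""
    with open(requirements_path, 'rb') as f:
        sha = hashlib.sha1(f.read()).hexdigest()
    return os.path.join(envs_dir, sha)


def ensure_virtualenv(requirements_path, envs_dir='.virtualenvs'):
    """Create the matching virtualenv if it doesn't already exist."""
    env = virtualenv_path_for(requirements_path, envs_dir)
    if not os.path.exists(env):
        # A real script would shell out here, roughly:
        #   virtualenv --system-site-packages <env>
        #   <env>/bin/pip install -r <requirements_path>
        os.makedirs(env)
    return env
```

Because the environment directory is derived from the file's content rather than a version number, changing the requirements and changing them back lands you on the original, already-built virtualenv.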
If you can see any reasons why this is not a good idea, or improvements, please let me know!