Something-driven development

Software development thoughts around Ubuntu, Python, Golang and other tools

Using Apache2’s mod_proxy to transition traffic

with 3 comments

I was recently in the situation of wanting to transition traffic gradually from an old deployment to a new deployment. It’s a large production system, so rather than just switching the DNS entries to point at the new deployment, I wanted to be able to shift the traffic over in a couple of controlled steps.

It turns out Apache’s mod_proxy makes this relatively straightforward. You can choose the resources whose traffic you want to move, and easily adjust the proportion of that traffic which goes to the new environment. This might be old news to some, but not having needed it before, I was quite impressed by Apache2’s configurability:

# Pass any requests for specific-url through to the balancer (defined below)
# to transition traffic from the old to new system.
ProxyPass /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/
ProxyPassReverse /myapp/specific-url/ balancer://transition-traffic/myapp/specific-url/

# Send all other requests straight to the backend for the old system.
ProxyPass /myapp/ http://old.backend.ip:1234/myapp/
ProxyPassReverse /myapp/ http://old.backend.ip:1234/myapp/

# Send 50% of the traffic to the old backend, and divide the rest between the
# two new frontends.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=2
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=1
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>

Once the stats verify that the new env isn’t hitting any firewall or load issues, the loadfactor can be updated (it only needs a graceful Apache restart) to ramp up traffic until everything is hitting the new env. Of course, it adds one extra hop for serving requests, but it’s then much safer to switch the DNS entries when you *know* your new system is already handling the production traffic.
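As a sketch of that later step (the 10:1 weighting here is illustrative, not a value from this deployment), the same balancer block could be reweighted so the new frontends take almost all of the traffic, followed by an `apachectl graceful` to pick up the change without dropping connections:

```apache
# Ramp-up step: new frontends now take ~10x the traffic of the old backend.
# Apply with "apachectl graceful" so in-flight requests complete normally.
<Proxy balancer://transition-traffic>
    BalancerMember http://old.backend.ip:1234 timeout=60 loadfactor=1
    BalancerMember http://new.frontend1.ip:80 timeout=60 loadfactor=10
    BalancerMember http://new.frontend2.ip:80 timeout=60 loadfactor=10
    ProxySet lbmethod=byrequests
</Proxy>
```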


Written by Michael

July 17, 2015 at 6:37 am

Posted in Uncategorized

3 Responses


  1. Another neat trick is via squid.
    Create cache peers to old/new, and configure some ACLs such that default traffic goes to the old system, while an ACL that identifies a cookie/value sends traffic to the new one. Put a trivial script on the old/new systems at the same URL and happily flip between old and new for testing purposes. Domains, etc., everything is otherwise identical.
    So when you drop the cookie and redirect all traffic to the new system, you *know* it’ll all work.
    And you also get the advantages of having a reverse proxy cache in front of the web server(s) 🙂

    Steve McI

    July 18, 2015 at 1:20 am

    • Nice Steve. By a trivial script, do you mean something which squid queries to determine the current old/new ratio, or a bit of middleware on the app servers which includes the cookie/value in the response so that future requests will get routed whichever way you want?

      In this case, we already have squid right behind the apache frontend, but I thought it’d be better to have just one extra hop during the transition, rather than two (or I guess you could put a squid in front). I think I also find Squid3’s config to be more error prone, but that might just be my own familiarity (or lack thereof). Part of the issue in this specific situation is that, without access to the machines, it needs to be something simple to communicate (in this case, I could just provide a diff for the vhost and request a graceful). That’ll all change as soon as the transition is finished, as I can already spin up a replica of the new deployment and automate any changes required there.

      It would be a definite advantage to control the flow via a tiny middleware which we (devs) can then control (ie. change the amount of traffic ourselves), if I understood you correctly. I’d like to look into that for some future work.


      July 18, 2015 at 3:43 am

      • Sorry, terribly slow reply. >.<
        squid – apache – middleware – db <== basically
        trivial script on the middleware box, just becomes an extra URL to be called for user acceptance testing. All it does is set or clear a named cookie.
        squid ACL to select on that cookie:

        cache_peer …. newpeer
        cache_peer …. oldpeer
        acl COOKIESWITCHER req_header Cookie COOKIESWITCHER=new
        cache_peer_access newpeer allow COOKIESWITCHER
        cache_peer_access oldpeer deny COOKIESWITCHER
        cache_peer_access newpeer deny all

        type of thing.

        In our case, it's a manual step, but there's no reason the setting of that cookie couldn't be controlled within the app proper.
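        A minimal sketch of the “trivial script” described above, written here as a tiny WSGI app (the endpoint paths and cookie value are illustrative assumptions, not details from the comment; only the COOKIESWITCHER cookie name matches the squid ACL shown):

        ```python
        # Hypothetical cookie-switcher endpoint: visiting /switch/new sets the
        # cookie squid's ACL matches on; any other path clears it so default
        # routing (the old system) applies again.

        def cookie_switcher_app(environ, start_response):
            """WSGI app that sets or clears the COOKIESWITCHER cookie."""
            path = environ.get("PATH_INFO", "")
            if path == "/switch/new":
                # Squid's ACL matches "COOKIESWITCHER=new" in the Cookie header.
                cookie = "COOKIESWITCHER=new; Path=/"
                body = b"Routing to NEW system\n"
            else:
                # Expire the cookie so requests fall back to the old system.
                cookie = "COOKIESWITCHER=deleted; Path=/; Max-Age=0"
                body = b"Routing to OLD system\n"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Set-Cookie", cookie)])
            return [body]
        ```

        Dropping this behind the same URL on both systems, as Steve suggests, means a tester can flip themselves between deployments without anyone touching the proxy config.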

        Steve McI

        July 23, 2015 at 8:02 am
