
Under the hood of Where Do I Vote


Our blog posts tend to focus on talking about objectives and outcomes rather than implementation details, but in order to handle the challenges of a general election our websites make use of some interesting performance optimisations. For the more technically-minded, Democracy Club developer Chris Shaw provides a glimpse under the hood of Where Do I Vote.

All of our sites get most of their traffic around elections and are pretty quiet the rest of the time, but Where Do I Vote? is the site with the most extreme traffic profile. Whereas people are interested in researching candidates or uploading leaflets in the days and weeks leading up to an election, Where Do I Vote? sees almost all of its traffic on polling day or the evening before.

In fact, our traffic graph looks roughly like this:

[Figure: Where Do I Vote traffic graph]

This unique (and somewhat stressful) load profile gives us some challenges:

  • Because the site has such a limited shelf-life, it must be reliable and responsive on polling day. If it goes down on polling day and we fix it the day after, we've missed the boat.
  • Although we know when the traffic will arrive, usage ramps up quickly, so we have to be able to add additional capacity rapidly.

On polling day (8 June 2017), we processed about 1.2 million polling station searches, so let’s have a look at some of the techniques we use to ensure we can scale Where Do I Vote? and keep the site responsive on polling day.

Optimisation begins with profiling. One of the first things profiling told us was that caching responses was not going to be an effective technique, due to the cardinality of the data. Caching is most useful when a relatively small amount of content is accessed very frequently, with a ‘long tail’ of less frequently accessed content. Where Do I Vote? has almost the opposite profile: a large amount of content (over a million distinct URLs), all of which have an essentially uniform chance of being requested. Caching would mean either a small, dynamically populated cache with a very poor hit rate, or a huge, statically warmed cache with enormous problems and overhead around populating and invalidating it.
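To make that concrete, here's a rough back-of-envelope sketch (the figures are illustrative assumptions, not measurements): if requests are spread roughly uniformly across the distinct URLs, the expected hit rate of a dynamically populated cache is simply the fraction of URLs the cache can hold.

```python
# Back-of-envelope estimate of cache effectiveness under uniform access.
# Both figures below are illustrative assumptions, not real measurements.
distinct_urls = 1_000_000   # "over a million distinct URLs"
cache_entries = 50_000      # a fairly generous dynamically populated cache

# Under uniform access, a request hits the cache only if the requested
# URL happens to be one of the cached entries.
expected_hit_rate = cache_entries / distinct_urls
print(f"Expected hit rate: {expected_hit_rate:.0%}")  # -> 5%
```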

In terms of how our data is accessed, Where Do I Vote? is a database-heavy application: processing each request may involve several geographic/spatial queries. Unlike a more traditional CRUD application, the core polling station finder functionality only performs read queries. All of the data about polling stations and the registered addresses assigned to them is loaded in using command-line scripts. The only write operations performed via the website are:

  • Collecting user feedback from a “did you find this useful?” form
  • Collecting email addresses from a mailing list signup form
  • Access logs

This profile gives us a number of properties we can use to operate at scale:

  • We perform lots of read operations but relatively few write operations.
  • The read operations are complex but the write operations are simple.
  • The reads and writes are completely independent. None of the write operations we perform (storing logs, user feedback and email signups) affect where anybody’s polling station is.
  • Because the shelf-life of the data is short, we can load in the polling station/district/address data and it doesn't need to change (much).

One of our first thoughts was that this profile would lend itself well to a master/slave replication setup. We could run the web application on multiple front-end servers and spread the database access over multiple back-end servers: all write operations would go to a single master node, while any of a number of read-only replica nodes could serve the read requests.

This setup would have a couple of drawbacks, though. We need to be able to add capacity quickly and we have a reasonably large database (~30GB). If we needed to add an additional node (or several) on polling day, transaction-log-based replication would take some time to complete. The replication process would also put extra load on the database cluster while it ran.

To alleviate this, we adopted a slightly different architecture which takes advantage of the fact that there are no dependencies between the read and write operations we perform. Each front-end server is instantiated from an image which also hosts its own read-only database instance with the necessary address and polling station data pre-imported. The (low-volume) write operations are performed over a separate database connection on a single shared database instance. The local read-only databases and the shared writeable database server don’t communicate with each other. Django’s database routers make it fairly easy to transparently manage multiple database connections in our application.
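As an illustration, a read/write split along these lines might be wired up with Django's database routers roughly as follows. The connection aliases, app labels, hostnames and the PostGIS backend here are assumptions made for the sake of the sketch, not Democracy Club's actual configuration.

```python
# settings.py (sketch): one connection to the local read-only database baked
# into the server image, and one to the single shared writeable instance.
DATABASES = {
    "default": {  # local, read-only: polling stations, districts, addresses
        "ENGINE": "django.contrib.gis.db.backends.postgis",  # assumed, given the spatial queries
        "NAME": "pollingstations",
        "HOST": "localhost",
    },
    "logger": {  # shared, writeable: feedback, signups, access logs
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "logger",
        "HOST": "shared-db.example.internal",  # hypothetical hostname
    },
}

DATABASE_ROUTERS = ["routers.ReadWriteSplitRouter"]


# routers.py (sketch): route the handful of write-only apps to the shared
# instance and everything else to the local read-only copy.
WRITE_APPS = {"feedback", "mailing_list", "access_logs"}  # assumed app labels


class ReadWriteSplitRouter:
    def db_for_read(self, model, **hints):
        return "logger" if model._meta.app_label in WRITE_APPS else "default"

    def db_for_write(self, model, **hints):
        # Only the feedback/signup/logging apps ever write; send those writes
        # to the shared instance. Everything else stays on the local copy,
        # which is read-only in practice.
        return "logger" if model._meta.app_label in WRITE_APPS else "default"

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label in WRITE_APPS:
            return db == "logger"
        return db == "default"
```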

This provides several advantages:

  • Because each web server has a dedicated database server, our read DB capacity scales linearly with the number of front-end servers and adding additional web servers won’t cause us to bottleneck on (read) database capacity. If we add 10 web servers to the cluster, we’ve got 10 database servers available to service them.
  • We can add additional capacity almost instantly. There is no need to wait for a large database to replicate to a new replica node. We boot a server from the image and it is ready to go.
  • Adding more database nodes doesn’t put the cluster under additional load so we can add capacity quickly even if we’re already experiencing a lot of traffic.

Engineering is all about tradeoffs, so inevitably this approach has some costs:

The main downside is that we have introduced some complexity into the process of building a server image. To avoid a situation where we have to completely rebuild our database to change one line of code, we build the server image in multiple stages, with each stage using the image assembled at the previous stage as its base. This allows us to deploy code changes without re-importing all the data, by running only the final stage of the build, where we add the application code to the image. As with all Democracy Club projects, we make the code available, so if you're interested in checking out the Ansible playbooks and Packer builds we use to build and deploy the images, they're available on GitHub.

Additionally, because each web server maintains its own read-only database, if we want to make a change to the data we need to either build and deploy a new image or apply the same change across multiple copies of the database (whereas with a replication cluster this would be handled for us). Fortunately, managing our deployment with Ansible makes it fairly easy to apply an ad-hoc fix across multiple servers, and the short shelf-life of the data means there is only a small window in which we need to deploy changes this way.

As with anything, there is always room for improvement. Although our read capacity now scales linearly with the number of front-ends, all our writes still occur on a single database server. While we didn't hit this limitation on polling day 2017, eventually this single database server becomes a bottleneck on how many front-end servers we can scale the application out to. For future elections, we plan to increase our database throughput by writing to a cache first. At the moment, every time we want to write a record to the database, we issue an INSERT statement. Writing to a cache first and then bulk-inserting from the cache to the database on a schedule will massively increase the throughput of the database server handling the writes, because it is much faster to do one INSERT that writes 100 records than 100 INSERTs that each write one record.
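A minimal sketch of what that batching could look like, assuming a Redis cache and a hypothetical Feedback model (neither is confirmed in this post): writes are pushed to a queue during the request and flushed to the database in bulk by a scheduled job.

```python
# Sketch of write batching: buffer rows in a cache during the request, then
# bulk-insert them on a schedule. The Feedback model, Redis key and connection
# details are all illustrative assumptions.
import json

import redis

from feedback.models import Feedback  # hypothetical model

r = redis.Redis(host="localhost", port=6379)
QUEUE_KEY = "pending_feedback"


def record_feedback(found_useful: bool, comment: str) -> None:
    """Called during the request/response cycle: no INSERT, just a cache push."""
    r.rpush(QUEUE_KEY, json.dumps({"found_useful": found_useful, "comment": comment}))


def flush_feedback() -> None:
    """Called by a scheduled job (e.g. cron): drain the queue and write all
    pending rows to the database with a single bulk INSERT."""
    pending = []
    while True:
        raw = r.lpop(QUEUE_KEY)
        if raw is None:
            break
        pending.append(json.loads(raw))

    if pending:
        Feedback.objects.bulk_create([Feedback(**row) for row in pending])
```

Django's bulk_create issues the batched rows as a single INSERT (optionally chunked via its batch_size parameter), which is exactly the "100 records in one INSERT" behaviour described above.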

So what should you take away from this? Scale your application using non-clustered read-only database replicas? No. That is just what worked for us. The combination of techniques we have chosen to use (and the techniques we've chosen to avoid) is the result of understanding:

  • The demands of our application architecture
  • The nature of the traffic profile we need to accommodate
  • The tradeoffs we’re willing to make to achieve our desired outcome

Those characteristics are going to be different for every project, but understanding them is an essential step towards making the right choices.

Get in touch:

Jump into the online chat in Slack, tweet us, or email hello@democracyclub.org.uk.