
An Even Higher Availability Persona
2013-04-12 00:00:00 -0700

Mozilla Persona is an open authentication system for the web that will eliminate per-site passwords. I work on the team that supports Persona, and this post describes how we will accomplish an uber-high-availability deployment of the Persona service, with servers on every continent. The ultimate goals are fantastic availability, extremely low latency worldwide, and preserving our tradition of zero-downtime updates.

Persona's Current Deployment Architecture

At present, Persona is supported by two redundant data centers. The following diagram gives you a high level idea of how this works:

Deployment Today

What a single data center looks like.

Note that for the purposes of this post, I'm abstracting away the deployment architecture of a single data center. You can get a slightly better idea of how a single DC is organized from a previous post. The key point is that the following logical server tiers exist inside every data center:

Inter-colo communication

A key view into the deployment architecture is what data actually has to travel cross-colo in order to support deployments in geographically distinct regions. At present, given the distributed nature of the protocol that Persona implements, the only required inter-datacenter traffic is for the database. Currently, we run a single-master setup, which means:

Splitting traffic

At present, during normal operation, we use DynECT, a managed DNS provider, to split traffic between our two colocation facilities using DNS load balancing. DNS queries are answered with a 30-second TTL, and each answer routes the client to one of the two facilities.
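To make that concrete, here is a minimal sketch, in Python, of how a short-TTL round-robin answer splits clients between two facilities (the hostname and IPs are invented for illustration): each client caches the answer for 30 seconds, then re-resolves and may land on the other colo.

```python
import itertools
import time

# Hypothetical public IPs for the two colocation facilities.
COLO_IPS = ["192.0.2.10", "198.51.100.10"]
TTL_SECONDS = 30

# The DNS provider answers queries round-robin across healthy colos.
_answers = itertools.cycle(COLO_IPS)

def dns_lookup(hostname: str) -> tuple[str, int]:
    """Return (ip, ttl) the way a round-robin authoritative server might."""
    return next(_answers), TTL_SECONDS

class Client:
    """A client that honors the TTL: it only re-resolves after 30 seconds."""
    def __init__(self) -> None:
        self._cached_ip = None
        self._expires_at = 0.0

    def resolve(self, hostname: str) -> str:
        now = time.monotonic()
        if self._cached_ip is None or now >= self._expires_at:
            self._cached_ip, ttl = dns_lookup(hostname)
            self._expires_at = now + ttl
        return self._cached_ip

client = Client()
print(client.resolve("login.example.org"))  # pinned to one colo for ~30 seconds
```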

Handling disaster

When disaster strikes, it may take one of two general forms:

Non-fatal system failure inside a colo: All of the tiers listed above sit behind an IP load balancer that constantly checks system health. If any node in these tiers fails, its health checks fail and it is removed from rotation. Our operations team is paged and sets about repairing the problem.

Fatal colocation facility failure: This includes hardware failure that affects a critical, non-redundant piece of infrastructure inside a colo. This could be a load balancer, the database write master, or a number of other things. The response is to disable the entire data center in DNS and repair the problem. Much like the IP load balancers, DynECT uses health checks to automatically stop sending traffic to a given DC in this scenario. If the downed DC hosts the current write master, we have a manual procedure to promote a new master in the remaining DC and restore service.
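As a rough illustration of the health-check-driven failover described above (this is not our actual load balancer or DynECT configuration; the health endpoint and addresses are made up), a monitor might poll each node and drop unhealthy ones from rotation like so:

```python
import urllib.request

# Hypothetical node addresses for one tier in one colo.
NODES = ["http://10.0.0.11:3000", "http://10.0.0.12:3000", "http://10.0.0.13:3000"]

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """A node is healthy if its (hypothetical) health endpoint returns HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def nodes_in_rotation(nodes: list[str]) -> list[str]:
    """Keep only healthy nodes. A real balancer would also page operations when
    a node drops out, and DNS-level checks do the same per data center."""
    return [n for n in nodes if is_healthy(n)]

print(nodes_in_rotation(NODES))
```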

Update Procedure

Currently, to release updates, we follow these steps (sketched in code after the list):

  1. Take a DC out of DNS rotation
  2. Update the DC
  3. Test the DC
  4. Switch traffic to the new version of the service via DNS
  5. Test and monitor
  6. Update the second DC
  7. Add the second DC back into rotation
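The sequence above can be sketched as a script. Every helper below is a hypothetical stand-in for our real DNS, deployment, and test tooling; the point is the ordering, not the implementation:

```python
DATA_CENTERS = ["dc-a", "dc-b"]  # hypothetical names

def remove_from_dns(dc):
    print(f"taking {dc} out of DNS rotation")

def deploy(dc, version):
    print(f"deploying {version} to {dc}")

def smoke_test(dc):
    print(f"testing {dc}")

def route_all_traffic_to(dc):
    print(f"switching DNS so all traffic hits {dc}")

def monitor(dc):
    print(f"monitoring {dc}")

def add_to_dns(dc):
    print(f"adding {dc} back into DNS rotation")

def release(version):
    first, second = DATA_CENTERS
    remove_from_dns(first)       # 1. take a DC out of rotation
    deploy(first, version)       # 2. update the DC
    smoke_test(first)            # 3. test the DC
    route_all_traffic_to(first)  # 4. switch traffic to the new version
    monitor(first)               # 5. test and monitor
    deploy(second, version)      # 6. update the second DC
    add_to_dns(second)           # 7. add it back into rotation

release("2013.04.12")  # hypothetical version label
```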

Two requirements make this procedure work. First, we never make a database change that isn't backwards compatible. If that means phasing features over several updates, that's what we do. This requirement means we always have a rollback option (we've updated the service twice a month for two years now and have rolled back half a dozen times). Second, we assume that frontend HTML code always interacts with a backend of the same version. This lets us mostly avoid worrying about version compatibility in our internal API, which accelerates development. This second requirement will have to change on the road to higher availability.
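To illustrate the first requirement with a generic example (this is not Persona's actual schema), a change that adds a column gets split across releases, so the previous release keeps working against the new schema and rollback is always possible:

```python
# Hypothetical two-phase, backwards-compatible schema change. Each phase ships
# in its own release, so the database is always compatible with both the
# current and the previous version of the code.

RELEASE_N = [
    # Additive only: old code ignores the new column, new code may write it.
    "ALTER TABLE user ADD COLUMN lastUsedAt DATETIME NULL",
]

RELEASE_N_PLUS_1 = [
    # Only after every node runs release N (which writes lastUsedAt) do we
    # backfill and start depending on the column.
    "UPDATE user SET lastUsedAt = createdAt WHERE lastUsedAt IS NULL",
]

# Destructive changes (dropping an old column, adding NOT NULL constraints)
# would wait for yet another release, once nothing reads the old shape.

def apply(statements: list[str]) -> None:
    for sql in statements:
        print("would run:", sql)  # stand-in for the real migration runner

apply(RELEASE_N)
```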

Weaknesses

The key weaknesses of the current deployment include:

Persona's Forthcoming Deployment Architecture

In an attempt to address the weaknesses discussed above, we'll be migrating Persona to Amazon Web Services. This gives us the ability to land a deployment on almost every continent and to move from two data centers to about eight. It will require changes to our technology and implementation, detailed below. But when done, it'll look like this:

Deployment Tomorrow

The key differences are that we'll be running in many more data centers, and we'll be leveraging auto-scaling to handle arbitrary load.

How this changes things

With respect to the sections above describing our current deployment architecture, some things will change, and some things will not:

We'll still split traffic using DNS mechanisms, but we'll add geographic intelligence to our DNS routing.
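As a rough sketch of what geographic intelligence adds on top of plain round-robin (the region-to-data-center table below is invented for illustration; in practice the DNS provider performs an equivalent lookup before answering):

```python
# Hypothetical mapping from a client's rough geography to the nearest
# deployment; a geo-aware DNS provider does an equivalent lookup for us.
NEAREST_DC = {
    "north-america": "us-west",
    "south-america": "sa-east",
    "europe": "eu-west",
    "asia": "ap-northeast",
}
FALLBACK_DC = "us-west"

def pick_data_center(client_region: str, healthy: set[str]) -> str:
    """Prefer the geographically closest healthy DC, else fall back."""
    preferred = NEAREST_DC.get(client_region, FALLBACK_DC)
    if preferred in healthy:
        return preferred
    # Same failover behavior as today: unhealthy DCs drop out of the answer.
    return next(iter(healthy))

print(pick_data_center("europe", healthy={"us-west", "eu-west", "ap-northeast"}))
```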

The only inter-datacenter communication will still be for the one and only database. We are fighting hard to avoid introducing any new systems that bring additional communication requirements.

When disaster strikes, affected data centers will still be automatically removed from DNS while we diagnose and repair. Because we'll have greater coverage, and because removing a DC from rotation is a simple and fast process, we'll be able to reduce user impact faster, and there will be less time pressure on resolution.

To achieve this scale, we'll change our deployment procedure. It is no longer viable to expect that we can switch traffic in a single sweep, and we need the benefits of rolling updates.

Let's spend a moment digging into the technical challenges that face us as we make this transition.

Database Technology

Persona has a trivial database schema. Server persistence requirements are tiny given careful architectural decisions we've made along the way, our commitment to privacy (we store the minimal possible amount), and the design of the protocol. This is excellent as it makes the database challenge tractable.

We must leverage these properties and move away from a single-master setup. There are plenty of distributed data stores that provide eventual consistency, which could fit Persona extremely well. When you also consider that the way the service is built imposes fairly modest data synchronization requirements, we have a lot of ways to solve this problem.

To set a concrete goal, we need to move to a database setup that runs well distributed in ten different geographic locations and can continue to run if half of those locations abruptly go away.
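As a toy illustration of the property we're after (not a proposal for any specific data store), here is a last-write-wins merge: every location accepts writes locally, replicas exchange records asynchronously, and losing half of the locations delays convergence without stopping the service:

```python
import time

# Toy last-write-wins replication: each record is (value, timestamp). The
# values and keys below are invented; the point is the merge behavior, which
# suits Persona's tiny, simple schema.

def merge(local: dict, remote: dict) -> dict:
    """Merge two replicas' views of the same keyspace; the newest write wins."""
    merged = dict(local)
    for key, (value, ts) in remote.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

# Two of (say) ten locations take writes independently...
us_west = {"user:42": ("record-A", time.time())}
eu_west = {"user:42": ("record-B", time.time() + 1)}

# ...and converge whenever they can talk to each other. If half of the
# locations disappear, the survivors still read and write locally.
print(merge(us_west, eu_west))
```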

Rolling Updates

To support Even Higher Availability, we must ensure that version N of the service interoperates with version N-1. This means we must phase changes to our internal API, rolling out features that require a new internal API over two deployments.

This will allow us to incrementally deploy service updates.
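Concretely, with an invented API field purely for illustration, phasing an internal API change over two releases might look like this: release N's backend accepts both the old and the new request shape, and only a later release, once no N-1 frontends remain, drops the old one:

```python
# Hypothetical internal API handler illustrating N / N-1 compatibility.
# Suppose we want to rename a request field "assertion" to "proof"; both
# field names are used here purely as an example.

def handle_verify(request: dict) -> dict:
    # Release N: accept both shapes, so an N-1 frontend keeps working while
    # the rolling update is in progress.
    proof = request.get("proof", request.get("assertion"))
    if proof is None:
        return {"status": "failure", "reason": "missing proof"}
    return {"status": "okay"}

# An N-1 frontend still sends the old field name...
print(handle_verify({"assertion": "eyJhbGciOi..."}))
# ...while an N frontend sends the new one.
print(handle_verify({"proof": "eyJhbGciOi..."}))
```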

Monitoring, Logging, and Root Cause Isolation

We have pretty good tools to visualize service health in our current production deployment. We have a unified view of our two data centers and use statsd and other monitoring tools to keep tabs on the service. Our ability to spelunk log messages, however, is somewhat limited and can require logging into multiple production machines.

These need to drastically improve. We need reliable mechanisms for understanding global system health (all data centers), and we need better tools for isolating the root cause of issues within a single data center.

I think this work will require that we:

  1. Construct each DC so that it can send high-frequency health confirmations to a centralized aggregator (see the proposed format for these updates); a rough sketch follows this list.
  2. Build per-data-center dashboards that are hosted in the data center and provide both a redundant means to check DC health and visualizations to facilitate root cause analysis.
  3. Build better tools to perform a realtime distributed search of server logs (the privilege to execute queries must remain limited to a small and trusted group of people, and we can continue to aggressively purge logs).
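To make item 1 slightly more concrete (the endpoint and payload shape below are invented for illustration, not the proposed format referenced above), each data center could periodically push a small summary to a central aggregator roughly like this:

```python
import json
import time
import urllib.request

# Hypothetical aggregator endpoint; not a real service.
AGGREGATOR_URL = "https://health-aggregator.example.org/report"

def build_health_report(dc_name: str) -> dict:
    """A small, frequently-sent summary of one data center's health."""
    return {
        "dc": dc_name,
        "timestamp": int(time.time()),
        "healthy_nodes": 11,         # placeholder values; a real report would
        "total_nodes": 12,           # be built from local health checks
        "requests_per_second": 340,
        "median_latency_ms": 45,
    }

def send_report(report: dict) -> None:
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(
        AGGREGATOR_URL, data=data, headers={"Content-Type": "application/json"}
    )
    # Fire-and-forget; a failure just means one missed heartbeat.
    urllib.request.urlopen(req, timeout=2.0)

# Each data center would run something like this on a short interval.
print(build_health_report("us-west"))
```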

What's Next?

The purpose of this blog post is to help folks understand precisely the approach we're taking in scaling Persona to the ludicrously high availability that we must achieve for a system of this ambition. As always, we'll continue to report our progress in blog posts and on our mailing list.

If you've got great experience tackling any of the problems that face us, I'd encourage you to chime in on our mailing list and contribute your advice!