Introduction
The Facebook (Meta) application handles billions of requests per second. Responses are assembled from thousands of items that need to be retrieved at low latency to ensure a good user experience. Low latency is achieved by serving the items from a cache. As the business grew, a single-node cache like Memcached could obviously not handle the load. This paper is about how they solved the problem by building a distributed cache on top of Memcached.
In 2025, some might say the paper is a little dated (it is from 2013). But I think it is still interesting for several reasons. It is one of the early papers from a time when cloud and distributed systems exploded; I put it in the same category as the Dynamo paper from Amazon. While better technology is available today, this paper teaches important concepts. More importantly, it shows how to take the technology available and make more out of it. In this case, they took a single-node cache and built a distributed cache on top of it.
This is my summary of the paper Scaling Memcache at Facebook.
Requirements at Facebook
- Allow near real-time communication
- Aggregate content on the fly
- Access and update popular shared content
- Scale to millions of user requests per second
The starting point was single-node Memcached servers.
The target was a general-purpose distributed in-memory key-value store, called Memcache, that would serve a variety of application use cases.
In the paper and in this blog, Memcached refers to the popular open source in-memory key-value store, and Memcache is the distributed cache that Facebook built.
Observations:
- Read volumes are several orders of magnitude higher than write volumes.
- Data is fetched from multiple sources - HDFS, MySQL, etc.
- Memcached supports simple primitives - set, get, delete.
Details
Memcache is a demand-filled, look-aside cache. There can be thousands of Memcached servers within a Memcache cluster.
When an application needs to read data, it first tries to get it from Memcache. On a miss, it fetches the data from the original source and populates Memcache.
When the application needs to write data, it writes to the original source and the key in Memcache is invalidated.
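A minimal sketch of the look-aside pattern described above, where `cache` and `db` are hypothetical stand-ins for a Memcache client and the source of truth:

```python
# Demand-filled look-aside cache: reads fill the cache, writes invalidate.

def read(cache, db, key):
    value = cache.get(key)
    if value is None:             # cache miss
        value = db.query(key)     # fetch from the source of truth
        cache.set(key, value)     # demand-fill the cache
    return value

def write(cache, db, key, value):
    db.update(key, value)         # the write goes to storage first
    cache.delete(key)             # then invalidate (not update) the cache
```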
Wide fan-out: when the frontend servers scale, the backend cache needs to scale too.
Items are distributed across Memcached servers using consistent hashing.
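A toy consistent-hash ring (my illustration, not Facebook's implementation), showing how keys map to servers and why adding or removing a server only remaps a small fraction of keys:

```python
import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, servers, points_per_server=100):
        # Place many virtual points per server on the ring so keys spread
        # evenly and removing a server only remaps its own points.
        self.ring = sorted(
            (self._hash(f"{server}:{i}"), server)
            for server in servers
            for i in range(points_per_server)
        )

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        idx = bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["mc1", "mc2", "mc3"])
print(ring.server_for("user:1234"))  # e.g. 'mc2'
```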
To handle a single request, a frontend server might need to get data from many Memcached servers.
Frontend servers use a Memcache client to talk to the Memcached servers. The client is either a library or a proxy called mcrouter. Memcached servers do not communicate with each other.
Invalidation of a Memcached key is done by code running on the storage layer, not by the client.
Communication
UDP is used for get requests (surprised? clearly an optimization), sent directly from the client in the web server to Memcached. Dropped requests are treated as a cache miss.
TCP via mcrouter is used for set and delete requests. Going through mcrouter reduces the number of connections each server has to maintain.
The client implements flow control to limit the load on backend components.
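The paper describes a sliding-window mechanism in the spirit of TCP congestion control: the window of outstanding requests grows on success and shrinks when a request goes unanswered. An illustrative sketch, with invented constants:

```python
# Illustrative sliding-window flow control (not Facebook's actual code).

class SlidingWindow:
    def __init__(self, initial=4, max_window=300):
        self.window = initial          # max outstanding requests allowed
        self.max_window = max_window
        self.outstanding = 0

    def can_send(self):
        return self.outstanding < self.window

    def on_send(self):
        self.outstanding += 1

    def on_success(self):
        self.outstanding -= 1
        self.window = min(self.window + 1, self.max_window)  # grow gently

    def on_drop(self):
        self.outstanding -= 1
        self.window = max(self.window // 2, 1)  # back off sharply
```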
Leases
Leases were implemented to address two problems: stale sets and thundering herds. A stale set happens when the cache is updated with an out-of-date value, for example when a concurrent invalidation races with a demand fill. Requiring a lease token lets the system check that an update is still valid before accepting it. A thundering herd happens when there is heavy read and write activity on the same key at the same time, and every reader that misses goes back to the database. By handing out lease tokens only every so often, say once every 10 seconds per key, the system slows the herd down.
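A sketch of the lease flow from the client's point of view; `get_with_lease` and `set_with_lease` are hypothetical method names standing in for Memcached's lease extensions:

```python
import time

# Hypothetical client-side lease flow. On a miss, memcached hands the
# caller a lease token; a set is accepted only while the token is still
# valid, which prevents stale sets. Tokens are issued at most once every
# ~10 seconds per key, so concurrent missers wait instead of stampeding
# the database.

def read_with_lease(cache, db, key):
    while True:
        value, token = cache.get_with_lease(key)     # hypothetical API
        if value is not None:
            return value                             # cache hit
        if token is not None:
            value = db.query(key)                    # we hold the lease
            cache.set_with_lease(key, value, token)  # rejected if stale
            return value
        time.sleep(0.1)  # someone else holds the lease; retry shortly
```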
Memcache pools
Memcache is a general-purpose caching layer used by different applications, with different workloads and different requirements. To avoid interference, a cluster's servers are partitioned into pools. For example, a small pool for frequently accessed keys for which a cache miss is cheap, and a large pool for infrequently accessed keys for which a cache miss is expensive. Each pool can be scaled separately depending on its requirements.
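A rough sketch of pool routing; the pool names and sizes are invented for the example, and within a chosen pool keys would still be placed by consistent hashing as above:

```python
# Illustrative pool partitioning. Each pool is its own set of memcached
# servers sized for its workload, so one application's churn cannot evict
# another application's hot keys.

POOLS = {
    "small":    ["s1", "s2"],                  # cheap-miss, high-churn keys
    "large":    [f"l{i}" for i in range(50)],  # expensive-miss keys
    "wildcard": [f"w{i}" for i in range(10)],  # default pool
}

def servers_for(pool):
    # Fall back to the default pool for unknown pool names.
    return POOLS.get(pool, POOLS["wildcard"])
```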
Failures
For small failures, requests are redirected to a small set of dedicated backup servers called Gutter. When a large number of servers in a cluster are down, the entire cluster is considered offline and traffic is routed to another cluster.
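A hedged sketch of the Gutter fallback, where `primary`, `gutter`, and `db` are hypothetical client handles and the timeout and TTL values are invented:

```python
# Illustrative Gutter fallback. If the primary memcached server fails to
# answer, the client retries against a small, mostly idle Gutter pool.
# Gutter entries carry a short TTL because they are not invalidated on
# writes, so slightly stale data ages out quickly.

GUTTER_TTL = 10  # seconds; illustrative value

def get_with_gutter(primary, gutter, db, key):
    try:
        return primary.get(key, timeout=0.05)  # hypothetical client call
    except TimeoutError:
        value = gutter.get(key)
        if value is None:                       # miss in Gutter too
            value = db.query(key)
            gutter.set(key, value, expire=GUTTER_TTL)
        return value
```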
Topology
A frontend cluster is a group of web servers together with their Memcached servers.
A region is one or more frontend clusters plus a storage cluster.
This keeps the failure domains small.
Storage is the final source of truth. A daemon called mcsqueal tails the database commit log and uses mcrouter to invalidate the cache.
When going across data centers and geographic regions, the storage master in one region replicates to replicas in the other regions. On an update, the affected Memcached keys need to be invalidated by the storage layer. This is not a problem if the update happens in the master region. But if the update happens in a replica region, a subsequent read-after-write might return stale data, because the replica might not have caught up yet. In replica regions, remote markers are used to ensure that only data in sync with the master is read.
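A sketch of the remote-marker idea under those assumptions; `regional_cache`, `master_db`, and `replica_db` are hypothetical handles:

```python
# Sketch of the remote-marker protocol for writes issued from a replica
# region. The marker says "the local replica may be stale for this key".

def write_from_replica(regional_cache, master_db, key, value):
    marker = "remote:" + key
    regional_cache.set(marker, 1)   # 1. set the remote marker in the region
    master_db.update(key, value)    # 2. write to the master (invalidations
                                    #    for key and marker ride along)
    regional_cache.delete(key)      # 3. invalidate the local copy

def read_from_replica(regional_cache, master_db, replica_db, key):
    value = regional_cache.get(key)
    if value is not None:
        return value
    if regional_cache.get("remote:" + key):
        return master_db.query(key)   # replica may be stale; pay the
                                      # cross-region latency instead
    return replica_db.query(key)      # safe to read the local replica
```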
Summary
In summary, this paper shows how Facebook took a single-node Memcached and scaled it to their growing needs. No new fundamental theories are applied or discussed. But it demonstrates the innovation and trade-offs involved in engineering a production system that must scale to meet user demand.
A key point is that separating cache and storage allowed each to be scaled independently. They kept a focus on monitoring, debugging, and operational efficiency. Gradual rollout and rollback of features kept a running system running.
Some might say that Memcache is a hack. But some loss in architectural purity is worth it if your users and business stay happy.
Memcache and Facebook were developed together, with application and systems programmers working side by side to evolve and scale the system. This does not happen when teams work in isolation.