- rate limiting with https://github.com/mailgun/gubernator
- distributed caching library: https://github.com/golang/groupcache
- Building an Elixir cache with ETS: https://www.openmymind.net/Building-A-Cache-In-Elixir/
Semi-random eviction within Redis (prior to 6.0)
- Randomly probe the cache and evict expired entries: a process runs every few milliseconds, checks a random set of keys, and sees which have expired
- if >25% of the sample has expired, repeat the process immediately
- if <25%, sleep and poll again next round
- Downsides: probabilistic, which means at scale this probably has fat tails
Caching Blog Post
I read a Hacker News post about how Redis used to evict cache keys prior to version 6.0:
Every N milliseconds:
- Pick a random set of keys (say, 50)
- Evict the ones that have expired
- If more than 25% of the sample had expired, repeat immediately
- Otherwise, sleep for N milliseconds
I couldn’t actually find any information that detailed this, although I did find a circa-2018 blog post about Redis’s LRU and LFU eviction policies. I found something similar to what the author was referencing in the source, although that doesn’t seem to be exactly it either: https://github.com/redis/redis/blob/5.0/src/evict.c#L531-L550.
Nevertheless, this simple algorithm spawned a discussion between me and a friend about how these probabilistic algorithms have fat tails, and about the assumptions the algorithm makes. For example, for this simple random eviction policy to work, all the values need to fall within a constant range of sizes. If you have a few values that are orders of magnitude larger than the others, at enough scale you can end up in situations where the random scanning fails to catch those fat values for multiple cycles, which then puts more pressure on the eviction policy.
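A back-of-the-envelope calculation shows the tail. Assuming each cycle draws an independent uniform sample, the chance that one particular fat value escapes every sweep is just the per-cycle miss probability raised to the number of cycles (the numbers below are illustrative, not measured from Redis):

```go
package main

import (
	"fmt"
	"math"
)

// missProbability returns the chance that one specific key is never included
// in any of `cycles` independent uniform samples of `sampleSize` keys drawn
// from `total` keys.
func missProbability(total, sampleSize, cycles int) float64 {
	perCycle := 1.0 - float64(sampleSize)/float64(total)
	return math.Pow(perCycle, float64(cycles))
}

func main() {
	// With 1M keys and 50-key samples, one huge value has about a 95% chance
	// of going untouched for 1,000 consecutive sweep cycles.
	fmt.Printf("%.3f\n", missProbability(1_000_000, 50, 1000)) // prints 0.951
}
```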
My friend then pointed to https://research.google/pubs/pub48030/, a paper that describes some of the issues with remote caching. The paper itself is worth reading, although the problems it recapitulates will be familiar: serialization costs are annoyingly high, protocols add an additional layer of complexity, the K/V model means you can end up reading an entire JSON blob when you just need a single element, and network latency is non-trivial.
I think the bulk of the problem here comes from the implication that the cache should not be co-located with the caller. The addition of “cache managers” such as Redis Sentinel and distributed Memcached is a concession to this: caches typically cannot be held within a single machine, and optimal performance requires the caching layer to understand more than just the data itself: it must have some additional meta-information about how the caller plans to use it. The Google paper makes the case for an intrusive, distributed in-process cache that effectively acts as a smarter remote cache, one that can learn from your calling patterns. This isn’t too different from Erlang’s ETS. The idea is that we can cut down dramatically on serialization and network-hop costs by reading less data through native language constructs, with which I’m in full agreement.
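To make the in-process point concrete, here is a toy read-through cache in Go. The `processCache` and `User` types are hypothetical, and this ignores eviction and all of the distributed machinery the paper describes; the point is only that values live in the caller’s address space, so reading one field never deserializes a whole blob or crosses a network:

```go
package main

import (
	"fmt"
	"sync"
)

// User is a native value we cache; callers read its fields directly,
// with no serialization step and no network hop (hypothetical type).
type User struct {
	Name  string
	Email string
}

// processCache is a minimal in-process, read-through cache keyed by user ID.
type processCache struct {
	mu     sync.RWMutex
	users  map[int]*User
	loader func(id int) *User
}

func newProcessCache(loader func(int) *User) *processCache {
	return &processCache{users: make(map[int]*User), loader: loader}
}

// Email returns just the field we need. A remote K/V cache would have shipped
// and decoded the whole User record for the same read. (On a miss, concurrent
// callers may load twice; fine for a sketch.)
func (c *processCache) Email(id int) string {
	c.mu.RLock()
	u, ok := c.users[id]
	c.mu.RUnlock()
	if !ok {
		u = c.loader(id)
		c.mu.Lock()
		c.users[id] = u
		c.mu.Unlock()
	}
	return u.Email
}

func main() {
	loads := 0
	c := newProcessCache(func(id int) *User {
		loads++ // stands in for an expensive database read
		return &User{Name: "Ada", Email: "ada@example.com"}
	})
	// The second read is served from the in-process map; the loader runs once.
	fmt.Println(c.Email(1), c.Email(1), loads) // prints: ada@example.com ada@example.com 1
}
```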
However, all of this assumes that we can’t deploy fat, tall servers where the cache is co-located with the application. The Google paper understandably doesn’t address this point because it’s not feasible at Google’s scale, but this type of co-location might be easier than you’d expect. In the past I’ve successfully run very tall servers co-located with Kyoto Tycoon, communicating over Unix sockets. We’re not locked into the single-small-server model: the fat-server model also solves the network-hop problem by removing the hop entirely, and it solves the overreading problem because the cost of IPC is so low compared to a network hop that the extra data doesn’t matter. While this design certainly won’t work at large enough scale, I think it’s actually fairly difficult to run into real-world problems that outgrow it if you’re not at FAANG scale.
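The co-located shape looks something like the following runnable sketch: a cache process listening on a Unix domain socket, with the application on the same box doing a request over plain IPC. The socket path and one-line protocol here are toy stand-ins, not Kyoto Tycoon’s actual API:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"path/filepath"
)

// roundTrip starts a toy "cache server" on a Unix domain socket and performs
// one request/response over it. The point: with co-location, the network hop
// becomes cheap same-machine IPC.
func roundTrip() (string, error) {
	sock := filepath.Join(os.TempDir(), "cache-demo.sock")
	os.Remove(sock) // clear any stale socket file from a previous run
	ln, err := net.Listen("unix", sock)
	if err != nil {
		return "", err
	}
	defer ln.Close()

	// Toy cache: replies with a canned value to a single request line.
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		if _, err := bufio.NewReader(conn).ReadString('\n'); err == nil {
			fmt.Fprintln(conn, "value-for-key")
		}
	}()

	// The "application", on the same machine: no network hop, just the socket.
	conn, err := net.Dial("unix", sock)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	fmt.Fprintln(conn, "GET key")
	return bufio.NewReader(conn).ReadString('\n')
}

func main() {
	reply, err := roundTrip()
	if err != nil {
		panic(err)
	}
	fmt.Print(reply) // prints "value-for-key"
}
```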
I always forget how readable the Redis source is until I go back to it.