High Level Methodology
- Do one thing and do it well
- Split the system into services that each do one thing; this makes performance analysis simpler and makes it easier for the system to evolve over time
- Prefer simple, stateless services where possible
- Some state must be kept, but try to reduce its surface area. Stateless components are easy to manage because they’re trivial to scale horizontally
- Assumptions, constraints, and SLO’s
- Before you start designing, look at the constraints and non-functional requirements for your system. Have they been stated? Are they comprehensive?
- At minimum you should understand
- What the system is supposed to do (business logic)
- What the SLOs are around performance, error rates/availability, and data loss. If there are no SLOs, you may need to define them with the business owners
- Where should it run? What is the budget or what hardware is available?
- Expected workload (this may be difficult to gauge)
- Getting to an initial high level design (or analysing an existing one)
- You need to figure out the API for the system. What operations are available externally? What data goes in, and what results come out? Do operations change the system state, or are they read-only?
- Each set of closely related operations is likely to be a service in the system. Many systems have a stateless API layer that coordinates with other parts of the system to serve results
- Follow the data
- After the API, figure out the data flow. What state does the system need to read and/or write in order to do the work? What do we know about the state?
- Data consistency - critical data
- Some applications require very consistent views of critical state
- This should be tackled with distributed consensus algorithms to manage state changes, 3/5/+ replicas in different failure domains
- Fast Paxos uses a stable leader process to improve performance. Committing an update or serving a consistent read costs one RTT between the leader and the quorum of closest replicas. The client must communicate with the leader, so the perceived client latency is the client↔leader RTT plus the leader↔nearest-quorum RTT. Stale reads only require a round trip to the nearest healthy replica, but the data may not be consistent. Multiple transactions can be batched
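The latency model above can be made concrete with a small calculation; the RTT numbers below are illustrative assumptions, not measurements:

```python
# Rough latency model for a stable-leader consensus protocol.
# All RTT values are made-up examples for illustration.

def commit_latency_ms(client_leader_rtt_ms: float,
                      leader_quorum_rtt_ms: float) -> float:
    """Client-perceived latency for one committed write or consistent
    read: one RTT to the leader plus one RTT from the leader to the
    nearest quorum of replicas."""
    return client_leader_rtt_ms + leader_quorum_rtt_ms


# Example: client 30 ms from the leader, quorum of nearby replicas 2 ms away.
print(commit_latency_ms(30.0, 2.0))  # 32.0
```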
- Data consistency - less critical data
- Not all data needs to be highly consistent; in those cases, replication plus backups is often enough
- Data storage considerations
- Lots of things to think about when storing data; the most fundamental is how you organize the data in RAM and on disk
- Think about how data is read and written. Which is the most common use case? Usually it’s reading.
- Think about storing data to match the most common use case. If you’re reading in a chronological order by user, consider storing it like that
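A minimal sketch of the idea, assuming the common read is "events for one user in chronological order": key events by (user, timestamp) so that read becomes a contiguous range scan. The class and method names are invented; a real store would use an LSM-tree or B-tree, not an in-memory dict:

```python
# Sketch: store events sorted by (user, timestamp) so the common
# read path (chronological per-user reads) is a cheap range scan.
import bisect
from collections import defaultdict


class UserEventLog:
    def __init__(self):
        # user_id -> list of (timestamp, payload), kept sorted by timestamp
        self._by_user = defaultdict(list)

    def append(self, user_id: str, ts: int, payload: str) -> None:
        bisect.insort(self._by_user[user_id], (ts, payload))

    def read_range(self, user_id: str, start_ts: int, end_ts: int):
        """Return events with start_ts <= timestamp <= end_ts, in order."""
        events = self._by_user[user_id]
        lo = bisect.bisect_left(events, (start_ts,))
        hi = bisect.bisect_left(events, (end_ts + 1,))
        return events[lo:hi]
```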
- Sharding data
- Data obviously needs to be sharded, but watch out for hot shards
- Easiest way to shard: m shards on n servers, where m > 100 × n
- shard = hash(key) modulo m
- Keep a map of shard -> server that is looked up on each request (the map needs STRONG CONSISTENCY)
- The system can dynamically move shards and update the map to rebalance based on some QPS count
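A minimal sketch of this scheme, with illustrative numbers (1024 shards, 4 servers). In a real system the shard map would live in a strongly consistent store such as a consensus-backed configuration service; here it is just an in-memory dict:

```python
# Sketch: m shards on n servers with a shard -> server lookup map.
import hashlib

M_SHARDS = 1024  # m, chosen so m > 100 * n for a handful of servers
N_SERVERS = 4    # n, illustrative


def shard_for(key: str) -> int:
    # Use a stable hash (not Python's per-process randomized hash())
    # so every client computes the same shard for the same key.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % M_SHARDS


# shard -> server map; rebalancing a hot shard is just rewriting its entry.
shard_map = {s: f"server-{s % N_SERVERS}" for s in range(M_SHARDS)}


def server_for(key: str) -> str:
    return shard_map[shard_for(key)]
```

Because m is much larger than n, a hot shard can be moved to a less loaded server by updating one map entry, without rehashing the whole keyspace.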
- sharded and replicated datastores should have automatic mechanisms that cross-check state with other replicas and load lost state from peers, useful after downtime and when new replicas are added. Note that this should also be rate-limited somehow to avoid thundering herd problems where many replicas are out of sync
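One common way to apply that rate limit is a token bucket in front of the state-recovery path; the rate and burst numbers below are illustrative assumptions:

```python
# Sketch: a token bucket throttling how fast a recovering replica may
# pull lost state from peers, so many out-of-sync replicas coming back
# at once don't create a thundering herd.
import time


class TokenBucket:
    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s      # tokens added per second
        self.capacity = burst       # maximum stored tokens
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if a recovery request of the given cost may proceed."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```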
- Scaling and Performance
- Make sure to do some back of the envelope calculations to estimate the potential performance of the system, any scaling limits created by the bottlenecks in the system, and how many machines are likely to be needed to service the given workload
- Calculations to use
- Disk - sanity check the volume of data read/written, disk seeks over time
- RAM - how much can we cache? Are we storing an index of some kind?
- Bandwidth - size of requests inwards and outwards, do we have enough bandwidth?
- Bandwidth between datacenters and bandwidth between machines in the same DC
- CPU - is this a computationally intensive service? - this can be hard to gauge
- What is the estimated number of concurrent transactions for a service? Compute it from throughput and latency, e.g. 500 requests per second × 200 ms average latency → 500 × 0.2 = 100 transactions in flight. Is this a reasonable number? Each request needs some RAM to manage state and some CPU to process
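The in-flight calculation above is an application of Little's law (L = λ × W), which is easy to encode for quick back-of-the-envelope checks:

```python
# Back-of-the-envelope concurrency estimate via Little's law:
# in-flight requests = arrival rate * average time in system.

def in_flight(requests_per_s: float, avg_latency_ms: float) -> float:
    return requests_per_s * (avg_latency_ms / 1000.0)


# The example from the notes: 500 req/s at 200 ms average latency.
print(in_flight(500, 200))  # 100.0 concurrent requests
```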
- Iterating and refining
- After a complete design, time to take another look. You might’ve unearthed new constraints
- Does it work correctly?
- Are you meeting your SLOs and constraints?
- Is it as simple as it could be?
- Is it reliable enough?
- Is it making efficient use of hardware?
- What assumptions did you make and are there risks?
- Are there places you can improve caching, indexing, lookup filters like Bloom filters? Can you degrade gracefully when overloaded by doing cheaper operations?
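As an example of a lookup filter, here is a minimal Bloom filter sketch: a cheap "definitely not present" check that can short-circuit an expensive disk or network lookup. The sizes and hash count are illustrative, not tuned:

```python
# Minimal Bloom filter sketch. A negative answer is definitive;
# a positive answer may be a false positive, so it must be verified
# against the real store.
import hashlib


class BloomFilter:
    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: str):
        # Derive k independent bit positions by salting the hash input.
        for i in range(self.k):
            d = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.size

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```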
- Monitoring should be included in the design phase of any distributed system
- It needs to
- Detect outages (and alert)
- Provide information on whether you’re meeting your SLOs
- Give indications of growth and system capacity
- Assist in troubleshooting outages and less critical problems
- Design metrics for your system and indicate how you will collect data. What are your alert thresholds/conditions?
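One concrete way to express an alert condition is as a predicate over collected metrics; the threshold below is an invented example, not a recommendation:

```python
# Sketch: an error-rate alert condition against an assumed SLO of
# 99.9% availability (i.e. at most 0.1% of requests may fail).

def should_alert(errors: int, requests: int,
                 max_error_rate: float = 0.001) -> bool:
    """Fire when the observed error rate exceeds the SLO error budget."""
    if requests == 0:
        return False  # no traffic in the window, nothing to judge
    return errors / requests > max_error_rate
```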
- What could go wrong?
- Answer these questions
- What is the scariest thing that can happen to your system?
- How does your system defend against it?
- Will it continue to meet its SLO? How can you recover, if not?
- Can improvements be made to the system to prevent the problem, and what would they cost?
- Loss of a data center
- Loss of important state/data
- Bugs in critical software component
- Load spikes
- Queries/requests of death that can crash the process
- 10x growth in workload - will it scale?
- Change in data access patterns changes cache hit rate
Errors and Examples