
Same Tradeoff

I sat through the MySQL replication debate more than once. Semi-synchronous or asynchronous. Fully synchronous was out of the question: every replica must acknowledge the write before the master commits. Too slow for any service at scale. One unresponsive replica and the master stalls. Correct in theory. Unusable in practice.

Asynchronous replication is fast. The master commits without waiting for replicas. Write latency stays minimal. But if the master dies, any data that hasn't reached a replica is gone.

The practical middle ground was semi-synchronous. At least one replica confirms it received the binlog before the master returns the commit. Not as slow as full sync. Not as reckless as async. But "received" is not "applied": the replica has the events on disk, yet may not have executed them. The answer sits in a careful gray zone.
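The latency gap between the three modes can be sketched as a toy simulation. Nothing here is MySQL-specific; the `commit` helper and the replica delays are invented for illustration. The master's commit simply blocks until some number of acknowledgments arrive: zero for async, one for semi-sync, all for full sync.

```python
import queue
import threading
import time

def commit(replicas_needed, replica_delays):
    """Return the commit latency under a given ack policy.
    replicas_needed=0 models async (fire and forget),
    1 models semi-sync, len(replica_delays) models full sync."""
    acks = queue.Queue()

    def replica(delay):
        time.sleep(delay)  # network round trip + fsync on the replica
        acks.put(True)     # ack means "binlog received", not "applied"

    start = time.monotonic()
    for d in replica_delays:
        threading.Thread(target=replica, args=(d,), daemon=True).start()
    for _ in range(replicas_needed):
        acks.get()         # block until enough replicas have acked
    return time.monotonic() - start

delays = [0.01, 0.02, 0.30]   # one replica is slow

async_t = commit(0, delays)   # returns immediately
semi_t = commit(1, delays)    # waits only for the fastest replica
full_t = commit(3, delays)    # stalls on the slowest replica
```

The ordering is the whole argument: async returns at once, semi-sync pays for the fastest replica, full sync pays for the slowest. That last line is why one unresponsive replica stalls the master.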

While arguing about this, I felt déjà vu.

UDP and TCP. UDP fires and forgets. Fast. No idea if anything arrived. TCP waits for acknowledgment. Reliable but slow. Speed versus reliability. Early in every networking textbook.
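The contrast is visible right at the sockets API. A minimal Python sketch over loopback (ports chosen by the OS): UDP's `sendto` returns with no acknowledgment of any kind, while TCP's `connect` blocks until the three-way handshake completes.

```python
import socket

# UDP: fire and forget. sendto() returns once the datagram is handed
# to the kernel; the sender never learns whether it arrived.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))           # OS picks a free port
udp_port = udp_rx.getsockname()[1]

udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"ping", ("127.0.0.1", udp_port))  # no handshake, no ack
data, _ = udp_rx.recvfrom(1024)         # happens to arrive on loopback

# TCP: connect() blocks until the handshake completes, and every byte
# is acknowledged and retransmitted by the kernel as needed.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(tcp_srv.getsockname())  # waits for SYN, SYN-ACK
conn, _ = tcp_srv.accept()
tcp_cli.sendall(b"ping")
received = conn.recv(1024)

for s in (udp_tx, udp_rx, tcp_cli, conn, tcp_srv):
    s.close()
```

On loopback the UDP datagram arrives too, which is exactly the trap: the API gives no signal either way. The reliability difference only shows up when the network misbehaves.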

The same question repeats at every layer. Transport layer: UDP versus TCP. Database layer: async versus semi-sync replication. Distributed systems: the CAP theorem. Message queues: at-most-once versus at-least-once. Microservices: eventual consistency versus strong consistency.

Fire and forget, or wait for confirmation. Take speed, or take consistency. The shape of the question never changes. Only the answer shifts with context. Abstraction grows as you move up the stack. The tradeoff at the root stays the same.

Computer science, in the end, may just be solving the same problem over and over in different clothes.