20.2. Replication Models
We have three different replication models available to us. They trade off among latency, autonomy, and the other considerations we discussed earlier in the chapter. Choosing among them is something of a balancing act between:
Degree of autonomy — Is there a constant connection available between the servers? If so, what kind of bandwidth is available? How many transactions will be replicated?
Conflict management — What is the risk that the same data will be edited in multiple locations either at the same time or in between replicated updates? What is the tolerance for data on one or more of the replicated servers disagreeing?
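To make the conflict-management question concrete, here is a minimal sketch of how a replica might detect that the same row was edited in two places between replicated updates. All of the names (`Row`, `Update`, `apply_replicated_update`) are illustrative assumptions, not part of any real replication API; real products track versions per column or per row and offer configurable resolution policies.

```python
from dataclasses import dataclass

@dataclass
class Update:
    key: str
    new_value: str
    base_version: int   # version the editing site held before its edit

@dataclass
class Row:
    value: str
    version: int

def apply_replicated_update(row: Row, upd: Update) -> tuple[Row, bool]:
    """Apply an incoming replicated update; report whether it conflicted.

    If the sender edited on top of the version we currently hold, the
    change applies cleanly. If our version has already moved past the
    sender's base, both sites edited the row between exchanges -- a
    conflict. This sketch resolves it with a simple last-writer-wins
    policy (the incoming edit overwrites ours)."""
    conflict = upd.base_version != row.version
    return Row(upd.new_value, row.version + 1), conflict
```

For example, if a row at version 1 receives an update based on version 1, it applies cleanly; a second update still based on version 1 would be flagged as a conflict, because the local copy has since moved to version 2.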
Some replication scenarios allow for only sporadic connectivity; others may never have connectivity at all (save, perhaps, through what is jokingly referred to as "sneaker net," in which a disk or other portable storage medium is run, mailed, or flown from one site to another). Other replication scenarios have an absolute demand for perfectly consistent data at all sites with zero data loss.
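A replica that must survive sporadic connectivity typically queues its changes locally and forwards them when a link appears. The sketch below illustrates that store-and-forward idea; the class and method names are my own invention, and a real replication agent would also handle acknowledgements, retries, and failure recovery.

```python
from collections import deque

class ReplicationQueue:
    """Queue outbound changes while the link is down; drain on reconnect.

    Illustrative only -- `send` stands in for whatever mechanism actually
    pushes a change to the peer site."""

    def __init__(self, send):
        self._send = send          # callable delivering one change to the peer
        self._pending = deque()
        self.connected = False

    def record_change(self, change):
        """Queue a local change; deliver immediately if the link is up."""
        self._pending.append(change)
        if self.connected:
            self.flush()

    def reconnect(self):
        self.connected = True
        self.flush()

    def disconnect(self):
        self.connected = False

    def flush(self):
        # Drain in commit order so the peer applies changes sequentially.
        while self._pending:
            self._send(self._pending.popleft())
```

The longer the link stays down, the more the queue grows and the further the sites drift apart, which is exactly the latency/autonomy trade-off described above.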
From highest to lowest in autonomy, the three models are:
Let's look at the pros and cons of each replication model, outlining the situations where each would be an appropriate solution and noting any data integrity concerns.
It's important to note that you can mix and match the replication types as necessary to meet your implementation ...