In this InfoQ video presentation, Amazon's CTO Dr Werner Vogels discusses availability and consistency for distributed systems. The central topic is the "CAP theorem", which Dr Vogels introduces starting from this question:
What goals might you want from a shared-data system?
- Strong Consistency: all clients see the same view, even in the presence of updates
- High Availability: all clients can find some replica of the data, even in the presence of failures
- Partition-tolerance: the system properties hold even when the system is partitioned
The theorem states that you can have at most two of the three CAP properties at the same time. The first property, Consistency, is the territory of ACID systems, usually implemented through the two-phase commit protocol (XA transactions).
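To make the mechanics concrete, here is a minimal Python sketch of the two-phase commit idea (the `Participant` interface is my own illustration, not the actual XA API): the coordinator commits only if every participant votes yes in the prepare phase, which is exactly why a single slow or unreachable participant blocks the whole transaction.

```python
# Minimal two-phase commit sketch. The Participant interface is
# hypothetical, for illustration only; real XA resources are far richer.

class Participant:
    def prepare(self) -> bool:
        """Phase 1 vote: True means this node promises it can commit."""
        raise NotImplementedError

    def commit(self) -> None:
        """Phase 2: make the prepared changes durable."""

    def rollback(self) -> None:
        """Abort; a no-op in this sketch for nodes that never prepared."""

def two_phase_commit(participants: list[Participant]) -> bool:
    # Phase 1: collect votes. One "no" (or a node that never answers)
    # is enough to abort the whole transaction.
    if all(p.prepare() for p in participants):
        # Phase 2: everyone promised, so committing is safe.
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False
```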
In his presentation Dr Vogels explains why big shops like Amazon and Google, which handle an enormous volume of transactions and data, always need some kind of system partitioning. Amazon must also provide high availability: for example, a customer must always have access to the shopping cart, because adding items to the cart means the customer is committing to buy something. Since for Amazon the second and third CAP properties (Availability and Partition-tolerance) are fixed, they have to sacrifice Consistency. This means they prefer to compensate for or reconcile inconsistencies rather than sacrifice high availability, because their primary need is to scale well enough to deliver a smooth user experience.
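To make "compensate or reconcile" concrete, here is a hedged Python sketch of the kind of shopping-cart reconciliation the Dynamo paper describes (my own simplification, not Amazon's code: the paper merges diverging carts so that no "add to cart" is ever lost, accepting that deleted items may resurface).

```python
# Sketch of Dynamo-style reconciliation for a shopping cart. Two replicas
# accepted writes independently during a partition; instead of rejecting
# one side, we merge them so nothing a customer added is ever dropped.

def merge_carts(replica_a: dict[str, int], replica_b: dict[str, int]) -> dict[str, int]:
    """Merge two diverging cart replicas mapping item -> quantity."""
    merged = dict(replica_a)
    for item, qty in replica_b.items():
        # Keep the larger quantity seen on either side (a simplification
        # of the paper's union semantics); deleted items may resurface.
        merged[item] = max(merged.get(item, 0), qty)
    return merged

# Both replicas took writes while partitioned from each other:
print(merge_carts({"book": 1, "dvd": 2}, {"book": 1, "usb-key": 1}))
# -> {'book': 1, 'dvd': 2, 'usb-key': 1}
```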
This IMHO leads to some easy conclusions: most legacy application servers and relational database systems are built with consistency as their primary target, while big shops really need high availability. That's why firms like Google or Amazon have developed their own application infrastructure, and that's why, as Dr Vogels' presentation explains well, the two-phase commit protocol is never an appropriate choice when massive scalability is needed. On this subject you can also read this article by Gregor Hohpe: Your Coffee Shop Does Not Use Two-Phase Commit.
To scale up, what you really need are asynchronous, stateless services, together with good reconciliation and compensation mechanisms for when errors occur. Second, your data model has a dramatic impact on performance: that's why Amazon implemented a simple put/get API instead of running complex database queries, and why Google's performance owes so much to the MapReduce programming model. Simplicity rules.
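To show just how simple the model is, here is a toy, single-machine word count written in MapReduce style (plain Python, nothing like Google's actual distributed implementation): a map step emits (word, 1) pairs, a shuffle groups them by key, and a reduce step sums each group.

```python
# Toy word count in the MapReduce style, all on one machine. The real
# framework's value is running exactly this shape of computation across
# thousands of nodes; the programming model itself stays this small.

from collections import defaultdict

def map_phase(document: str) -> list[tuple[str, int]]:
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def shuffle(pairs: list[tuple[str, int]]) -> dict[str, list[int]]:
    # Group all emitted values by key.
    groups: dict[str, list[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups: dict[str, list[int]]) -> dict[str, int]:
    # Sum the values in each group.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cat sat", "the cat ran"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))
# -> {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```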
The distinctions between availability and partitionability in this context are subtle, and I'm not sure the definitions above capture them. One of the key insights that I got from the Dynamo paper is that availability means availability *now* - i.e. it's forfeited even by a system that's merely slow. Since enforcing strong consistency often precludes replying immediately to a request, that sets up the tension between consistency and availability. That tension can be resolved... if we don't also have to deal with outright failures and unreachability (which are partitionability issues). Apply the same idea - of attribute pairs already being in tension or balance which can't be resolved in the presence of the third attribute - to the other combinations, and you have CAP.
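This "availability *now*" point can be sketched in code. Everything below is hypothetical and for illustration only: `fetch(replica, key)` is assumed to return a `(version, value)` pair. The read tries to assemble a quorum within a deadline; if the replicas are merely slow, it forfeits consistency and answers immediately from a possibly stale local copy.

```python
# Hypothetical sketch: a read that values answering *now* over answering
# consistently. fetch(replica, key) -> (version, value) is an assumed
# helper, not a real library call.

import concurrent.futures

def read(key, replicas, local_cache, fetch, timeout=0.05):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(fetch, replica, key) for replica in replicas]
    # Wait at most `timeout` seconds for replies, then move on.
    done, _ = concurrent.futures.wait(futures, timeout=timeout)
    pool.shutdown(wait=False)
    quorum = len(replicas) // 2 + 1
    if len(done) >= quorum:
        # Enough replicas answered in time: return the newest value seen.
        return max((f.result() for f in done), key=lambda pair: pair[0])[1]
    # Replicas are slow or unreachable: consistency is forfeited, but the
    # client still gets an answer immediately.
    return local_cache.get(key)
```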