A History of the Virtual Synchrony Replication Model [chapter]

Ken Birman
2010 Lecture Notes in Computer Science  
In October of 2015, we celebrated the 50th anniversary of SOSP as part of SOSP 25. Peter Denning formed a "history day" steering committee, and invited me to give a short talk on the topic of fault tolerance (and also asked if I could help organize the remainder of the day). This essay is intended as an accompaniment to the video and slides of my talk. My topic here is dominated by two fairly specific questions, central to the way we think about the discipline: Must strong properties bring
complexity, poor scalability, high latencies and other significant costs? Can a system support consistency without dictating to its users? The debate surrounding these issues has animated the community at least since the mid-1980s. Any decision to focus dictates a degree of narrowness. For example, Butler Lampson has argued that one cannot have security without reliability, and vice versa [25], and the SOSP 2015 program supports his view. Nonetheless, I won't be discussing security here. I hope that nobody is offended by my omission of this and other important work; just like the other history day speakers, I was required to keep the scope of my talk manageable and focused, and omissions were unavoidable.

Introduction. Fault tolerance has been important to the operating systems community from its earliest days, but the term has completely different meanings within distinct subsets of the community. In this essay, I'll touch upon several of those many meanings, but for brevity and clarity will focus on a more restricted question: Is fault tolerance (specifically, approaches that use consistent data replication) at odds with the most basic principles of the operating systems field and community? That one might ask such a question may seem perplexing to those new to our field: if you explore the program for SOSP 2015, you'll quickly see that fault tolerance has become one of the most dominant themes: six papers are concerned with Paxos, another three with transactional mechanisms, and beyond those nine, others explore consistency and correctness after failures. In 2015, the SOSP community is very much a fault tolerance and consistency community.

However, this was not always so. In 1993, SOSP was torn by a huge debate associated with a paper by Cheriton and Skeen entitled Understanding the Limitations of Causally and Totally Ordered Communication [1]. What we've come to refer to as the CATOCS controversy [2][3][4] centered not so much on whether one could build communication tools and platforms that offer strong fault tolerance and consistency guarantees, but on whether in doing so one arrived at application-specific mechanisms that don't belong in the operating system. The underlying theme was that consistency mechanisms don't perform or scale adequately to belong in the core systems area. Hence, the "S" in CATOCS. The 2015 SOSP program represents one form of judgement on that question, but in some sense, isn't a direct response to it: none of those papers were really focused on core operating systems components or advocating new principles, and none referred back to the CATOCS controversy.

Indeed, as of 2015, the CATOCS argument itself had reemerged in a new form. The modern version is associated with Eric Brewer's CAP conjecture, capturing a line of thought that I first became aware of after Eric gave a keynote talk at PODC in 2000 [5] and then presented his SEDA paper at SOSP in 2001 [6]. Eric suggested that there may be deep tradeoffs between consistency (by which he meant database-style ACID guarantees), availability, and partition tolerance in large-scale systems. Inktomi, a scalable system Eric and his group created for web search, leveraged CAP to gain better performance and scalability, and Eric argued that other web-based systems could do so as well.
Later, a more general version of CAP came to be widely adopted by the cloud computing community (today we might call this the PaaS community: Platform as a Service, which is one of the main styles of using the cloud, and refers to applications created on subsystems like Google's AppEngine or Microsoft's Azure platform). CAP shaped several SOSP papers too, notably the Amazon Dynamo paper in 2007 [7], and eBay's recommendation that cloud developers reject ACID and embrace BASE [8]. A version of CAP was soon proved in a paper by Gilbert and Lynch [9], although for a fairly narrow scenario. CAP has become the emblem of an outspoken community that builds big cloud platforms and believes that strong forms of consistency and fault tolerance are at odds with scalability, fast response and overall system capacity. Whereas CATOCS never won a large following, CAP has succeeded in this sense.

Yet how can an embrace of inconsistency not sound reckless and wrong? At the very least, we as a field really should try to understand the underlying rationale. Doing so isn't trivial: CATOCS and CAP are both somewhat ambiguous concepts, and even their proponents aren't entirely clear about what these principles really mean, or why the BASE methodology is sound. The CAP community seems to think of CAP as a very broad and universal principle, and yet the Gilbert and Lynch paper points out that their CAP theorem wouldn't hold if the definitions were relaxed in any substantial way, and even offers an example of a practical scenario under which one really can have all three properties at once. To me this highlights a tension within our community. Many practitioners, working down in the trenches, have become convinced that cloud-scale systems just can't afford consistency. Meanwhile, the 2015 SOSP community seems to believe that CAP is just a practical obstacle that can be surmounted with clever systems work. Which perspective is the right one? The answer isn't completely obvious, because the SOSP community isn't always right: sometimes developers and users can see an obvious truth that the research community has completely overlooked!

When you look more closely at the CAP community, it is striking that they cite CAP in settings quite far from those Eric Brewer had in mind. As noted, the CAP theorem really is very narrow: it points to a tradeoff in a situation where transactions run against a highly available database split over two data centers. To get the result, the authors set up a case in which the system must guarantee availability for both replicas, even during periods when the WAN link connecting them is down, and then present the two subsystems with conflicting transactions: not a particularly general scenario. In contrast, the contemporary "CAP community" often takes CAP to mean that we should abandon consistency even in first-tier systems running in a single data center, where a network partition would shut down either the impacted computers or the whole data center. For the theory community, this looks like a misapplication of CAP. But if pressed, the community that believes most strongly in CAP just simplifies it to CP: they tend to argue that consistency has a huge performance impact, and that in settings where scalable performance is the main goal, weakening consistency brings tremendous speedups. This sort of practical rule of thumb may not be what the CAP theorem expresses, but it does seem to be close to what Eric was thinking about, and it clearly has a strong following.
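To make the narrowness of that two-replica scenario concrete, here is a small Python sketch of my own (it does not come from the CAP papers or from the talk, and every name in it, such as Replica and wan_link_up, is invented for the illustration). Each data center holds one copy of a register and is required to keep accepting writes even while the WAN link between them is down; the moment both sides accept conflicting writes during a partition, the copies diverge, which is exactly the consistency the theorem says must be surrendered if availability and partition tolerance are both retained.

    # Illustrative sketch only: two replicas of a single register, each of
    # which must stay available for writes even when the WAN link is down.
    # The names used here (Replica, wan_link_up) are invented for this example.

    class Replica:
        def __init__(self, name):
            self.name = name
            self.value = None      # local copy of the replicated register
            self.peer = None       # the replica in the other data center

        def write(self, value, wan_link_up):
            self.value = value     # availability: always accept the write locally
            if wan_link_up:
                # Link up: update the peer too, so both copies stay consistent.
                self.peer.value = value
            # Link down: the peer is not updated. We have kept availability
            # and partition tolerance, and given up consistency.

        def read(self):
            return self.value

    east, west = Replica("east"), Replica("west")
    east.peer, west.peer = west, east

    east.write("x=1", wan_link_up=True)
    assert east.read() == west.read()        # consistent while connected

    east.write("x=2", wan_link_up=False)     # conflicting writes during
    west.write("x=3", wan_link_up=False)     # a partition...
    print(east.read(), west.read())          # ...and the replicas diverge

The alternative, refusing or delaying writes until the link recovers, preserves consistency and partition tolerance but sacrifices availability; that forced choice, in exactly this two-data-center configuration, is what the Gilbert and Lynch result formalizes.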
doi:10.1007/978-3-642-11294-2_6