Bisimulation and cocongruence for probabilistic systems

Vincent Danos, Josée Desharnais, François Laviolette, Prakash Panangaden
Information and Computation, 2006
We introduce a new notion of bisimulation, called event bisimulation, on labelled Markov processes and compare it with the now-standard notion of probabilistic bisimulation, originally due to Larsen and Skou. Event bisimulation uses a sub-σ-algebra as the basic carrier of information rather than an equivalence relation. The resulting notion is thus based on measurable subsets rather than on points: hence the name. Event bisimulation applies smoothly to general measure spaces; bisimulation, on the other hand, is known to work satisfactorily only for analytic spaces. We prove the logical characterization theorem for event bisimulation without having to invoke any of the subtle aspects of analytic spaces that feature prominently in the corresponding proof for ordinary bisimulation. These complexities arise only when we show that on analytic spaces the two concepts coincide. We show that the concept of event bisimulation arises naturally from taking the cocongruence point of view for probabilistic systems, and that the theory can be given a pleasing categorical treatment in line with general coalgebraic principles. As an easy application of these ideas we develop a notion of "almost sure" bisimulation; the theory comes almost "for free" once we modify Giry's monad appropriately.

Introduction

Markov processes with continuous state spaces or continuous time evolution (or both) arise naturally in several fields of physics, biology, economics, and computer science. Examples of such systems are Brownian motion, gas diffusion, population growth models, noisy control systems, and communication systems. Labelled Markov processes (LMPs) were formulated [2,7] to study such general interacting Markov processes. In an LMP, instead of one transition probability function (or Markov kernel) there are several, each associated with a distinct label; we do not consider internal non-determinism in the present paper. Each transition probability function represents the (stochastic) response of the system to an external stimulus represented by the label. In our work, we do not associate probabilities with these external stimuli; in other words, we do not attempt to quantify the behaviour of the environment. Thus, for those familiar with process algebra terminology, an LMP is a labelled transition system with probabilistic transitions. Interaction is captured by synchronizing on labels in the manner familiar from process algebra.
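To make the LMP notion concrete, here is a minimal discrete sketch in Python. The states, labels, and probabilities are invented for illustration; a genuine LMP allows an arbitrary (analytic) state space with one measurable transition kernel per label.

```python
# A toy labelled Markov process (LMP) over a finite state space.
# For each label, a sub-probability kernel: state -> {next_state: probability}.
# Sub-probabilities (summing to at most 1) model the chance that the
# process refuses to react to the label.
lmp = {
    "a": {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}},
    "b": {0: {1: 0.9}, 1: {}},  # state 1 refuses label "b"
}

def step(state, label):
    """Distribution over next states when `state` reacts to `label`."""
    return lmp[label].get(state, {})

print(step(0, "a"))  # {0: 0.5, 1: 0.5}
```

On label "b" from state 0, the probabilities sum to 0.9 rather than 1: with probability 0.1 the action is refused, which is the discrete shadow of the sub-probability kernels used in the continuous theory.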
The following example, taken from [11], illustrates these ideas.

Example 1.1. Consider the flight management system of an aircraft. It is responsible for monitoring the state of the aircraft (the altitude, windspeed, direction, roll, yaw, etc.) periodically, usually several times a second; it also monitors navigational data from satellites and makes corrections, as needed, by issuing commands to the engines and the wing flaps. The physical system is a complex continuous real-time stochastic system: stochastic because the response of the physical system to commands cannot be completely deterministic, and also because of unexpected situations like turbulence. From the point of view of the flight management system, however, the system is discrete-time with a continuous state space; the time unit is the sampling rate. The entire system consists of many interacting concurrent components, and programming it correctly, let alone verifying that it works, is very challenging. A formal model of this type of software brings us into the realm of process algebra (because of the concurrent interacting components), stochastic processes, and real-time systems, the last because the responses have hard deadlines.

The study of probabilistic bisimulation was initiated by Larsen and Skou [18] for discrete processes, in a style similar to the queueing-theory notion of "lumpability" invented in the late 1950s [17]. In a series of previous papers [2,6,7], such Markov processes with continuous state spaces and independently acting components were studied, and the phrase "labelled Markov processes" appeared in print explicitly referring to the continuous-state-space case. Of course, closely related concepts were already around: for example, Markov decision processes [20]. The papers by Desharnais, Edalat, and Panangaden gave a definition of bisimulation between LMPs and a logical characterization of this bisimulation.
Subsequently, an approximation theory was developed [3,9,11] and metrics were defined [8,12,21,22]. Before presenting the new material, we briefly review these prior results. The notion of probabilistic bisimulation—henceforth just "bisimulation"—was based on the idea that if two states are bisimilar, then for every label they must have equal probability of making a transition into any set of bisimilar states.
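On a finite state space, the Larsen–Skou condition suggests a partition-refinement algorithm: repeatedly split classes whose members disagree on the probability of jumping into some current class, for some label. The following is an illustrative sketch (the naive quadratic refinement, not the algorithm of any cited paper), using the toy kernel representation `label -> state -> {state: probability}`.

```python
from itertools import groupby

def prob_into(kernel, state, block):
    """Total probability of moving from `state` into the set `block`."""
    return sum(p for s, p in kernel.get(state, {}).items() if s in block)

def bisimulation_classes(states, kernels):
    """Coarsest probabilistic bisimulation on a finite LMP, by refinement.

    Two states remain in the same class iff, for every label and every
    current class, they jump into that class with equal probability.
    """
    partition = [set(states)]
    while True:
        # Signature of a state: for each label, its transition
        # probabilities into each block of the current partition.
        def sig(s):
            return tuple(
                tuple(prob_into(k, s, b) for b in partition)
                for k in kernels.values()
            )
        refined = []
        for block in partition:
            for _, grp in groupby(sorted(block, key=sig), key=sig):
                refined.append(set(grp))
        if len(refined) == len(partition):  # no block was split: stable
            return refined
        partition = refined

# Hypothetical example: states 0 and 1 behave identically on label "a".
kernels = {"a": {0: {2: 1.0}, 1: {2: 1.0}, 2: {}}}
print(bisimulation_classes([0, 1, 2], kernels))
```

Here states 0 and 1 end up in the same class, while state 2 (which refuses "a") is separated from them, exactly as the Larsen–Skou condition demands.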
doi:10.1016/j.ic.2005.02.004