Run-Time Enforcement of Nonsafety Policies

Jay Ligatti, Lujo Bauer, David Walker
2009 ACM Transactions on Information and System Security, Vol. 12, No. 3
A common mechanism for ensuring that software behaves securely is to monitor programs at run time and check that they dynamically adhere to constraints specified by a security policy. Whenever a program monitor detects that untrusted software is attempting to execute a dangerous action, it takes remedial steps to ensure that only safe code actually gets executed. This article improves our understanding of the space of policies enforceable by monitoring the run-time behaviors of programs. We
begin by building a formal framework for analyzing policy enforcement: we precisely define policies, monitors, and enforcement. This framework allows us to prove that monitors enforce an interesting set of policies that we call the infinite renewal properties. We show how to construct a program monitor that provably enforces any reasonable infinite renewal property. We also show that the set of infinite renewal properties includes some nonsafety policies, i.e., that monitors can enforce some nonsafety (including some purely liveness) policies. Finally, we demonstrate concrete examples of nonsafety policies enforceable by practical run-time monitors.

[…] network configurations, raises exceptions, warns the user of potential consequences of opening a file, etc., as containing a program monitor inlined into the application. Even "static" mechanisms, such as type-safe-language compilers and verifiers, often ensure that programs contain appropriate dynamic checks by inlining them into the code.

This article examines the space of policies enforceable by program monitors. Because program monitors, which react to the potential security violations of target programs, enjoy such ubiquity, it is important to understand their capabilities as policy enforcers. Such an understanding is essential for developing sound systems that support program monitoring and languages for specifying the security policies that those systems can enforce. In addition, well-defined boundaries on the enforcement powers of security mechanisms allow security architects to determine exactly when certain mechanisms are needed and save the architects from attempting to enforce policies with insufficiently strong mechanisms.

Schneider defined the first formal models of program monitors and discovered one particularly useful boundary on their power [Schneider 2000].
He defined a class of monitors that respond to potential security violations by halting the target application, and he showed that these monitors can only enforce safety properties: security policies specifying that "nothing bad ever happens" in a valid run of the target [Lamport 1977]. When a monitor in this class detects a potential security violation (i.e., "something bad"), it must halt the target. Aside from our work, other research on purely run-time program monitors has likewise focused only on their ability to enforce safety properties.

In this article, we advance our theoretical understanding of practical program monitors by proving that certain types of monitors can enforce nonsafety properties. These monitors are modeled by edit automata, which have the power to insert actions on behalf of, and suppress actions attempted by, the target application. We prove an interesting lower bound on the properties enforceable by such monitors, a lower bound that encompasses strictly more than safety properties.

[…] automata [Büchi 1962] (which are like ordinary deterministic finite automata except that they can have an infinite number of states, operate on infinite-length input strings, and accept inputs that cause the automaton to enter accepting states infinitely often). Schneider's monitors [1] observe executions of untrusted target applications and dynamically recognize invalid behaviors. When a monitor recognizes an invalid execution, it halts the target just before the execution becomes invalid, thereby guaranteeing the validity of all monitored executions. Schneider formally defined policies and properties and observed that his automata-based execution recognizers can only enforce safety properties (a monitor can only halt the target upon observing an irremediably "bad" action). Researchers have devised many techniques for proving that programs obey such automata-specified safety properties [Walker 2000; Hamlen et al. 2006b; Aktug et al. 2008].
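To make the truncation model concrete, the following is a minimal sketch (ours, not the article's) of a truncation-style monitor: it forwards each action of the untrusted target until an action would violate the safety property, then halts the target. The action names and the safety predicate `no_send_after_read` are hypothetical illustrations.

```python
# Sketch of a truncation automaton: the monitor yields the longest safe
# prefix of the target's action sequence and halts the target at the
# first action that would make the execution unsafe.

def truncation_monitor(actions, is_safe):
    """Yield actions while every prefix satisfies the safety predicate."""
    executed = []
    for a in actions:
        if not is_safe(executed + [a]):
            break                # "something bad": halt the target
        executed.append(a)
        yield a

# Hypothetical safety property: never send on the network after
# reading a secret.
def no_send_after_read(trace):
    saw_secret = False
    for a in trace:
        if a == "read_secret":
            saw_secret = True
        elif a == "send" and saw_secret:
            return False
    return True
```

For example, a trace that attempts `send` after `read_secret` is truncated just before the `send`, while a trace that never reads a secret passes through unchanged.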
This article builds on Schneider's definitions and models but views program monitors as execution transformers rather than execution recognizers. This fundamental shift permits modeling the realistic possibility that a monitor might insert actions on behalf of, and suppress actions of, untrusted target applications. In our model, Schneider's monitors are truncation automata, which either accept the actions of untrusted targets or halt the target altogether upon recognizing a safety violation. We define more general monitors modeled by edit automata that can insert and suppress actions (and are therefore operationally similar to deterministic I/O automata [Lynch and Tuttle 1987]), and we prove that edit automata are strictly more powerful than truncation automata (Section 3.2.2).

Computability Constraints on Execution Recognizers. After Schneider showed that the safety properties constitute an upper bound on the set of policies enforceable by simple monitors, Viswanathan, Kim, and others tightened this bound by placing explicit computability constraints on the safety properties being enforced [Viswanathan 2000; Kim et al. 2002]. Their key insight was that because execution recognizers inherently have to decide whether target executions are invalid, these monitors can only enforce decidable safety properties. Introducing computability constraints allowed them to show that monitors based on recognizing invalid executions (i.e., our truncation automata) enforce exactly the set of computable safety properties. Moreover, Viswanathan proved that the set of languages containing strings that satisfy a computable safety property equals the set of coRE languages [Viswanathan 2000].

Shallow-History Execution Recognizers. Continuing the analysis of monitors acting as execution recognizers, Fong defines shallow history automata (SHA) as a specific type of memory-bounded monitor [Fong 2004].
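As a hedged sketch of the extra power that insertion and suppression provide (our illustration, not the article's construction), consider an edit-automaton-style monitor that suppresses the actions of a transaction and inserts them into the output only once the transaction completes. The action names `begin`, `pay`, and `commit` are hypothetical.

```python
# Sketch of an edit automaton: actions inside a transaction are
# suppressed (buffered) and inserted into the output only when the
# transaction completes with "commit". An incomplete transaction is
# suppressed forever rather than halting the target, which a
# truncation automaton cannot do.

def edit_monitor(actions):
    output, buffer, in_txn = [], [], False
    for a in actions:
        if a == "begin" and not in_txn:
            in_txn, buffer = True, ["begin"]
        elif a == "commit" and in_txn:
            buffer.append("commit")
            output.extend(buffer)    # insert the suppressed actions
            in_txn = False
        elif in_txn:
            buffer.append(a)         # suppress until commit
        else:
            output.append(a)         # pass through unchanged
    return output
```

A completed transaction is emitted intact, while an abandoned one (e.g., `["begin", "pay"]`) produces no output at all, so only valid transactions are ever observed.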
SHA decide whether to accept an action by examining a finite and unordered history of previously accepted actions. Although SHA are very limited models of finite-state truncation automata, Fong shows that they can nonetheless enforce a wide range of useful access-control properties, including Chinese Wall policies (where subjects may access at most one element from every set of conflicting data [Brewer and Nash 1989]) and low-water-mark […]

[1] Schneider refers to his models as security automata. In this article, we call them truncation automata and use the term security automata to refer more generally to any dynamic execution transformer. Section 2.3 presents our precise definition of security automata.
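A shallow history automaton of this kind can be sketched as follows (our illustration; the conflict classes are hypothetical): the monitor tracks only the unordered set of accepted accesses and, truncation-style, halts the target on a conflicting access.

```python
# Sketch of a shallow history automaton (SHA) enforcing a Chinese Wall
# policy: only the unordered *set* of previously accepted actions is
# remembered. An access that conflicts with an already-accessed dataset
# halts the target. The conflict classes below are hypothetical.

CONFLICTS = [{"bank_A", "bank_B"}, {"oil_X", "oil_Y"}]

def sha_monitor(actions):
    history = set()
    for a in actions:
        # a conflicts if some class contains both a and a *different*
        # already-accessed element
        conflict = any(a in cls and (history & cls) - {a}
                       for cls in CONFLICTS)
        if conflict:
            break                    # halt the target
        history.add(a)
        yield a
```

Note that because the history is an unordered set, re-accessing an already-chosen dataset is always permitted, exactly as the Chinese Wall policy requires.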
doi:10.1145/1455526.1455532