Position paper: the science of deep specification

Andrew W. Appel, Lennart Beringer, Adam Chlipala, Benjamin C. Pierce, Zhong Shao, Stephanie Weirich, Steve Zdancewic
2017 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences  
One contribution of 8 to a discussion meeting issue 'Verified trustworthy software systems'.

Modern hardware and software are monstrously complex. The best tool for coping with this complexity is abstraction, i.e. breaking up functionality into components or layers, with interfaces that are as narrow and clear as possible. Each interface is accompanied, implicitly or explicitly, by a specification expressing the contract between providers and consumers of that interface.
These specifications come in a multitude of different forms: comments in code and natural-language documentation, unit tests, assertions, contracts, static types, property-based random test suites and formal specifications in various logics.

Sadly, despite widespread agreement on the importance of abstraction, specifications are often seen as an afterthought, or even a hindrance, to system development. Why? Experience has shown that it is extremely challenging to write good ones. Indeed, a maximally useful interface specification must be simultaneously rich (describing complex component behaviours in detail), two-sided (connected to both implementations and clients), formal (written in a mathematical notation with clear semantics, to support tools such as type checkers, analysis and testing tools, automated or machine-assisted provers, and advanced IDEs (integrated development environments)) and live (connected via machine-checkable proofs to the implementation and client code). We call specifications with all of these properties deep specifications.

Most present-day interface specifications fall short in one or more of these dimensions. In many programming environments, the machine-checkable parts of an interface are just type declarations that specify the shapes of the inputs and outputs of a component. Richer aspects of the component's behaviour are either explained in comments (which are unconnected to the code and may or may not be kept up to date) or left entirely implicit. Some environments allow interfaces to be augmented with simple pre- and post-conditions or unit tests; these can help enormously, but they are limited in the properties they can express (a sketch of a richer, machine-checked specification follows below). Other environments offer richer capabilities for writing down specifications, e.g. formal languages like Z [1], Alloy [2] or AADL [3], but these are generally concerned with specifying properties of system models rather than the actual software under development. The paucity of deep specifications in mainstream software engineering imposes a mental tax on programmers that discourages innovation.

Fortunately, in the research community the situation is rapidly changing, spurred by recent progress in the language of specifications (new techniques for specifying semantics, availability of higher-order logics), the methods of connecting them to programs (new proof-search algorithms and decision procedures, expressive logical frameworks) and the adoption of programming models that are easier to reason about (on the one hand, functional programming languages with optimizing compilers and high-performance garbage collectors; on the other hand, streamlined subsets of C with clean proof theories). These advances, driven by the maturation of automatic and interactive theorem-proving technology, suggest that it is now possible to radically alter the state of the art. The result is that technologies for automatic or machine-assisted program verification, once believed to be completely impracticable [4], are now becoming widespread.
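As a minimal sketch of what 'formal' and 'live' mean in practice (our illustration, not an example from the paper), consider specifying list reversal in Coq. The type of rev says only list-in, list-out; the theorem below pins down actual behaviour, and the machine-checked proof keeps the specification connected to the implementation. The lemma name rev_app_spec is ours; Coq's standard library proves an equivalent fact as rev_app_distr.

    (* A specification richer than a type declaration: the behaviour of
       rev over append, stated and machine-checked in Coq. *)
    Require Import Coq.Lists.List.
    Import ListNotations.

    Theorem rev_app_spec : forall (A : Type) (l1 l2 : list A),
      rev (l1 ++ l2) = rev l2 ++ rev l1.
    Proof.
      intros A l1 l2; induction l1 as [| x l1 IH]; simpl.
      - (* l1 = nil  *) rewrite app_nil_r; reflexivity.
      - (* l1 = cons *) rewrite IH, app_assoc; reflexivity.
    Qed.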
A compelling demonstration of the viability of machine-verified development of real systems is Leroy's CompCert project [5], which specified, implemented and proved the correctness of an optimizing C compiler. CompCert exhibits many of the characteristics of deep specifications (toy sketches of points (i)-(iii) follow the list):

(i) The extremal languages (i.e. the source and target languages) and all compiler-intermediate languages are equipped with precise operational semantics, expressed as inductive relations in the Coq proof assistant. These rich internal interfaces structure the development.

(ii) Each transformation is programmed in Coq's functional language Gallina and proved to preserve safety and functional correctness with respect to its two enclosing interfaces. Proofs are constructed interactively as proof scripts in Coq's vernacular, making use of tactics and other automation features.

(iii) A running compiler is obtained by composing the Gallina functions of all transformations, extracting the resulting code into OCaml and using the standard OCaml compilation framework.

(iv) The proof scripts yield formal proof objects in a variant of the calculus of inductive constructions (CiC), for which proof checking amounts to type checking and is fully automatic and independent of the original proof scripts.
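To make point (i) concrete, here is a toy sketch, entirely ours and vastly simpler than anything in CompCert, of an operational semantics expressed as an inductive relation in Coq:

    (* A toy expression language and its big-step evaluation relation,
       written in the inductive-relation style described in (i). *)
    Inductive expr : Type :=
    | Const : nat -> expr
    | Plus  : expr -> expr -> expr.

    Inductive eval : expr -> nat -> Prop :=
    | eval_const : forall n,
        eval (Const n) n
    | eval_plus : forall e1 e2 n1 n2,
        eval e1 n1 ->
        eval e2 n2 ->
        eval (Plus e1 e2) (n1 + n2).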
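Points (ii) and (iii) can be illustrated against the same toy language: a constant-folding pass written as a Gallina function, a semantic-preservation theorem proved by a short tactic script, and extraction of the verified function to OCaml. All names here are hypothetical, and the sketch assumes Coq 8.7 or later for the extraction plugin; real CompCert passes and their proofs are orders of magnitude larger.

    (* (ii) A transformation as a Gallina function over the toy language... *)
    Fixpoint cfold (e : expr) : expr :=
      match e with
      | Const n => Const n
      | Plus e1 e2 =>
          match cfold e1, cfold e2 with
          | Const n1, Const n2 => Const (n1 + n2)
          | e1', e2' => Plus e1' e2'
          end
      end.

    (* ...proved correct with respect to the eval interface on both sides. *)
    Theorem cfold_preserves_eval :
      forall e n, eval e n -> eval (cfold e) n.
    Proof.
      induction 1 as [n | e1 e2 n1 n2 H1 IH1 H2 IH2]; simpl.
      - constructor.
      - destruct (cfold e1) eqn:E1; destruct (cfold e2) eqn:E2;
          rewrite E1 in IH1; rewrite E2 in IH2;
          try (constructor; assumption).
        (* remaining case: both subterms folded to constants *)
        inversion IH1; inversion IH2; subst; constructor.
    Qed.

    (* (iii) Extract the verified function to OCaml (writes cfold.ml). *)
    Require Extraction.
    Extraction "cfold.ml" cfold.

Composing such verified passes and compiling the extracted OCaml with the standard OCaml toolchain yields a running, verified tool; this is exactly the structure that CompCert scales up.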
doi:10.1098/rsta.2016.0331 pmid:28871056