Strongly Unambiguous Büchi Automata Are Polynomially Predictable With Membership Queries

Dana Angluin, Timos Antonopoulos, Dana Fisman
Annual Conference on Computer Science Logic (CSL 2020)
A Büchi automaton is strongly unambiguous if every word w ∈ Σ^ω has at most one final path. Many properties of strongly unambiguous Büchi automata (SUBAs) are known. They are fully expressive: every regular ω-language can be represented by a SUBA. Equivalence and containment of SUBAs can be decided in polynomial time. SUBAs may be exponentially smaller than deterministic Muller automata and may be exponentially larger than deterministic Büchi automata. In this work we show that SUBAs can be learned in polynomial time using membership and certain non-proper equivalence queries, which implies that they are polynomially predictable with membership queries. In contrast, under plausible cryptographic assumptions, non-deterministic Büchi automata are not polynomially predictable with membership queries.

ACM Subject Classification: Theory of computation → Automata over infinite objects.

The L* algorithm [2] infers a regular language using membership queries and equivalence queries. Indeed, L* or its improved descendants (see [22]) have been used for tasks including black-box checking [29], assume-guarantee reasoning [18], specification mining [1], error localization [14], learning interfaces [28], regular model checking [24], finding security bugs [13], code refactoring [27, 31], learning verification fixed-points [33], as well as analyzing botnet protocols [15] and smart card readers [13].

A disadvantage of using L* in applications that model behavior using ω-words is that it limits the learned languages to the class of safety languages, a strict subset of the regular ω-languages, for which the complement can be described by a language of finite words. However, many interesting properties of reactive systems, in particular liveness and fairness, require richer classes of regular ω-languages. For this reason it is desirable to obtain a learning algorithm for the full class of regular ω-languages.

Learnability results for the class of regular ω-languages can be summarized shortly as follows. The full class of regular ω-languages can be learned either using a non-polynomial reduction to finite words, termed L$ [21], or using a representation by families of DFAs (FDFAs), which may be exponentially more succinct than L$, although the running time of the algorithm may be polynomial in L$ in the worst case [5].
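The membership/equivalence query protocol that L* instantiates can be sketched as follows. This is a minimal illustration of the two query types only, with hypothetical names (Teacher, learn), not the L* algorithm itself; the "unknown" language is played by a known predicate, and equivalence is checked over a finite universe of words as a simplification.

```python
from typing import Callable, Optional

class Teacher:
    """A minimal active-learning oracle in the style assumed by L*.
    The target predicate stands in for a black-box language; equivalence
    is checked only on a finite universe of words (a simplification --
    real equivalence queries compare whole languages)."""

    def __init__(self, target: Callable[[str], bool], universe: list):
        self.target = target
        self.universe = universe

    def membership(self, w: str) -> bool:
        # MQ: is the word w in the unknown language?
        return self.target(w)

    def equivalence(self, hyp: Callable[[str], bool]) -> Optional[str]:
        # EQ: return a counterexample word, or None if the hypothesis
        # agrees with the target on every word in the universe.
        for w in self.universe:
            if hyp(w) != self.target(w):
                return w
        return None

def learn(teacher: Teacher) -> Callable[[str], bool]:
    """Toy learner: classify each EQ counterexample with an MQ until no
    counterexample remains.  (L* instead maintains an observation table
    and emits a DFA hypothesis; this only shows the query protocol.)"""
    table: dict = {}
    def hyp(w: str) -> bool:
        return table.get(w, False)
    while (cex := teacher.equivalence(hyp)) is not None:
        table[cex] = teacher.membership(cex)
    return hyp
```

For example, a teacher for "even number of a's" over a small universe drives the learner to a hypothesis that agrees with the target on that universe.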
The maximal sub-class of the regular ω-languages known to be polynomially learnable is the set of languages accepted by deterministic weak parity automata (DwPA) [25].

In this work we show that while, under plausible cryptographic assumptions, the class of ω-regular languages is not polynomially predictable with membership queries when the target language is represented as a non-deterministic Büchi automaton (Theorem 1), it is polynomially predictable with membership queries when the target language is represented as a strongly unambiguous Büchi automaton (Corollary 15). The result on polynomial predictability with membership queries of strongly unambiguous Büchi automata (SUBAs) is a corollary of a result (Theorem 12) on learning SUBAs in polynomial time using membership and non-proper equivalence queries, where hypotheses are represented using mod-2-MAs for a related language of finite words. This contrast in learnability results arises because the running time of a learning algorithm is bounded as a function of the size of the representation of the target language, and NBAs (non-deterministic Büchi automata) may be exponentially more succinct than SUBAs. We therefore also focus on succinctness comparisons between alternative representations.

In §2, we provide the preliminaries regarding Büchi automata and strongly unambiguous Büchi automata. In §3, we discuss the framework of learning with membership queries (MQs) and equivalence queries (EQs), and discuss related learnability results for regular ω-languages. In §4, we discuss the framework of polynomial predictability with MQs, relate it to the framework of learning with MQs and EQs, and provide the negative result regarding learnability using NBAs. The positive result about learnability using SUBAs is proved in §7, after the necessary additional definitions are provided.
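Membership queries for ω-regular languages are posed on ultimately periodic words u·v^ω, which admit the finite description (u, v). As a sketch of how such a query can be answered against a given NBA (the representation and the function name are our own illustration, not taken from the paper): the word is accepted iff some state p, reachable from the initial states by reading u and then some number of copies of v, lies on a v-cycle back to p that visits an accepting state.

```python
def nba_accepts_up_word(states, init, acc, delta, u, v):
    """Decide whether an NBA accepts the ultimately periodic word u·v^ω.
    delta maps (state, symbol) to a set of successor states."""
    def step(srcs, word):
        cur = set(srcs)
        for sym in word:
            cur = {q for p in cur for q in delta.get((p, sym), set())}
        return cur

    # One-v-block relation R: (p, f, q) means reading v from p can reach q,
    # with f recording whether an accepting state was visited on the way.
    R = set()
    for p in states:
        frontier = {(p, p in acc)}
        for sym in v:
            frontier = {(q, f or q in acc)
                        for (s, f) in frontier
                        for q in delta.get((s, sym), set())}
        R |= {(p, f, q) for (q, f) in frontier}

    # Transitive closure of R, OR-ing the acceptance flags.
    closure = set(R)
    changed = True
    while changed:
        changed = False
        for (p, f1, q) in list(closure):
            for (q2, f2, r) in R:
                if q2 == q and (p, f1 or f2, r) not in closure:
                    closure.add((p, f1 or f2, r))
                    changed = True

    after_u = step(init, u)
    reachable = set(after_u) | {q for (p, _, q) in closure if p in after_u}
    return any((p, True, p) in closure for p in reachable)
```

On the deterministic Büchi automaton for "infinitely many a's" over {a, b}, for instance, the query (u, v) = ("", "a") is answered positively and ("", "b") negatively.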
The complexity of learning algorithms is measured with respect to the size of the representation of the unknown target language. We thus provide, in §5, size comparison results between SUBAs and other models of regular ω-languages for which learning algorithms have been obtained. We show that SUBAs may be exponentially more succinct than FDFAs and DwPAs, but the converse also holds: FDFAs and DwPAs can be exponentially more succinct than SUBAs. We further show that SUBAs can be exponentially more succinct than the DFA representation of L$ or its reverse.
doi:10.4230/LIPIcs.CSL.2020.8 dblp:conf/csl/AngluinAF20