Approximate Learning of Limit-Average Automata

Jakub Michaliszyn, Jan Otop
2019 International Conference on Concurrency Theory  
Limit-average automata are weighted automata over infinite words that aggregate the weights along an infinite run by their long-run average. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive-learning case, we show that limit-average automata are not PAC-learnable: samples must be of exponential size to provide (with high probability) enough information to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active-learning case, we show that limit-average automata can be learned almost exactly, i.e., in polynomial time we can learn an automaton that agrees with the target automaton on almost all words. In contrast, the problem of learning an automaton that approximates the target automaton (with possibly fewer states) is NP-complete. The above results are shown for the uniform distribution on words; we briefly discuss learning over other distributions.
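As a quick illustration of the aggregation function (a sketch, not taken from the paper): for an ultimately periodic run whose weight sequence is a finite prefix followed by a cycle repeated forever, the limit average (Cesàro mean) of the weights equals the plain average of the cycle weights, since the prefix's contribution vanishes in the limit. The helper names below are hypothetical.

```python
def limit_average(prefix, cycle):
    """Limit average of the weight sequence prefix followed by cycle repeated
    forever: the prefix contributes nothing in the limit, so the value is
    just the mean of the cycle weights."""
    if not cycle:
        raise ValueError("cycle must be non-empty")
    return sum(cycle) / len(cycle)

def empirical_average(prefix, cycle, n):
    """Average of the first n weights of the run; converges to
    limit_average(prefix, cycle) as n grows."""
    seq = list(prefix)
    while len(seq) < n:
        seq.extend(cycle)
    return sum(seq[:n]) / n
```

For example, with prefix weight `[5]` and cycle `[1, 3]`, the limit average is 2.0, and `empirical_average([5], [1, 3], n)` approaches 2.0 as `n` increases.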
doi:10.4230/lipics.concur.2019.17 dblp:conf/concur/MichaliszynO19