Learning Interpretable Error Functions for Combinatorial Optimization Problem Modeling [article]

Florian Richoux, Jean-François Baffier
2021 arXiv pre-print
In Constraint Programming, constraints are usually represented as predicates allowing or forbidding combinations of values. However, some algorithms exploit a finer representation: error functions. Their usage comes with a price, though: it makes problem modeling significantly harder. Here, we propose a method to automatically learn an error function corresponding to a constraint, given a function deciding whether assignments are valid or not. This is, to the best of our knowledge, the first attempt to automatically learn error functions for hard constraints. Our method uses a variant of neural networks we named Interpretable Compositional Networks, allowing us to get interpretable results, unlike regular artificial neural networks. Experiments on 5 different constraints show that our system can learn functions that scale to high dimensions, and can learn fairly good functions over incomplete spaces.
arXiv:2002.09811v4
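As an illustration of the gap the abstract describes, here is a minimal hand-written sketch (not the paper's learned functions; the function names are ours) contrasting the two representations for the classic AllDifferent constraint. The predicate only says whether an assignment is valid, while the error function additionally quantifies how far an invalid assignment is from satisfaction, which is what local-search-style algorithms can exploit:

```python
def all_different_predicate(assignment):
    """Predicate view: True iff all variables take distinct values."""
    return len(set(assignment)) == len(assignment)


def all_different_error(assignment):
    """Error-function view: 0 iff the constraint is satisfied,
    otherwise a measure of how violated it is -- here, the number
    of variables that would have to change value."""
    return len(assignment) - len(set(assignment))


if __name__ == "__main__":
    sat = [1, 2, 3, 4]
    unsat = [1, 2, 2, 2]
    print(all_different_predicate(sat), all_different_error(sat))      # True 0
    print(all_different_predicate(unsat), all_different_error(unsat))  # False 2
```

Writing such error functions by hand for every constraint is the modeling burden the paper aims to remove by learning them automatically from the predicate alone.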