
Combining Component Caching and Clause Learning for Effective Model Counting

Tian Sang, Fahiem Bacchus, Paul Beame, Henry A. Kautz, Toniann Pitassi
2004 International Conference on Theory and Applications of Satisfiability Testing  
[Figure: Cumulative fraction of cache hits by cache age on 50-variable random 3-CNF formulas, clause/variable ratio = 1.6, 100 instances]  ...  Conclusions and Future Work: We have presented work that makes substantial progress on combining component caching and clause learning to create an effective procedure for #SAT that does extremely well  ... 
dblp:conf/sat/SangBBKP04 fatcat:yuohmmqiyfg7bhci3x5vmlyt4m

sharpSAT – Counting Models with Advanced Component Caching and Implicit BCP [chapter]

Marc Thurley
2006 Lecture Notes in Computer Science  
Most importantly, we introduce an entirely new approach of coding components, which reduces the cache size by at least one order of magnitude, and a new cache management scheme.  ...  Furthermore, we apply a well known look ahead based on BCP in a manner that is well suited for #SAT solving.  ...  This is due to new techniques which comprise a highly optimized way of coding the components for caching and the implicit BCP algorithm that performs well in practice.  ... 
doi:10.1007/11814948_38 fatcat:gzcwhsu62jhd3ilrvflqegsdeq

Heuristics for Fast Exact Model Counting [chapter]

Tian Sang, Paul Beame, Henry Kautz
2005 Lecture Notes in Computer Science  
We recently introduced Cachet, an exact model-counting algorithm that combines formula caching, clause learning, and component analysis.  ...  schemes, and cross-component implications.  ...  Cachet is an exact model-counting algorithm which combines formula caching [7, 2, 5] , clause learning [8, 13, 14] , and dynamic component analysis [4, 2, 3] .  ... 
doi:10.1007/11499107_17 fatcat:v75nbhrwebh5bbgd7bk4iv7fmq

Learning Branching Heuristics for Propositional Model Counting [article]

Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger Grosse, Edward Lee, Sanjit A. Seshia, Fahiem Bacchus
2020 arXiv   pre-print
The gap between the learned and the vanilla solver on larger instances is sometimes so wide that the learned solver can even overcome the run time overhead of querying the model and beat the vanilla in  ...  Propositional model counting or #SAT is the problem of computing the number of satisfying assignments of a Boolean formula and many discrete probabilistic inference problems can be translated into a model  ...  This choice affects the efficiency of clause learning and the effectiveness of component generation and caching lookup success.  ... 
arXiv:2007.03204v1 fatcat:uf562j3kozakrf3oxgvjsbtgqm

A Dynamic Approach for MPE and Weighted MAX-SAT

Tian Sang, Paul Beame, Henry A. Kautz
2007 International Joint Conference on Artificial Intelligence  
We describe reductions between MPE and weighted MAX-SAT, and show that both can be solved by a variant of weighted model counting.  ...  The MPE-SAT algorithm is quite competitive with the state-of-the-art MAX-SAT, WCSP, and MPE solvers on a variety of problems.  ...  This component processing strategy is well suited to dynamic bounding for sibling components, but it is different from that for model counting.  ... 
dblp:conf/ijcai/SangBK07 fatcat:huqbcmrzwncqvnfmdhjjsjqq2e

Centrality Heuristics for Exact Model Counting

Bernhard Bliem, Matti Järvisalo
2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)  
In particular, we show that the VSIDS heuristic, which is an integral search heuristic employed in essentially all state-of-the-art conflict-driven clause learning Boolean satisfiability solvers, appears to be of very limited use in the context of model counting.  ...  Component caching tries to avoid re-computing the model count for already seen components.  ... 
doi:10.1109/ictai.2019.00017 dblp:conf/ictai/BliemJ19 fatcat:ndx6wqlyvbdmdbfiq77br7zeim
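The component-caching idea mentioned in the snippet above can be illustrated with a minimal sketch (all names here are illustrative and not taken from any of the solvers in these results): model counts are memoized per component, keyed by a canonical encoding of the component's clauses, so a component that reappears elsewhere in the search is answered from the cache instead of being recounted.

```python
from itertools import product

def brute_count(clauses):
    """Count satisfying assignments of a small CNF (a list of clauses,
    each a list of signed ints) by exhaustive enumeration."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    count = 0
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

cache = {}

def cached_count(component):
    """Memoized count: a component already seen (up to clause and
    literal order) is looked up rather than recounted."""
    key = frozenset(frozenset(clause) for clause in component)
    if key not in cache:
        cache[key] = brute_count(component)
    return cache[key]
```

For example, `cached_count([[1, 2], [-1, 2]])` counts the two models of (x1 ∨ x2) ∧ (¬x1 ∨ x2); a later call with the same clauses in any order is a cache hit. Real solvers such as sharpSAT use far more compact component encodings than nested frozensets — that compression is precisely the point of the Thurley paper above.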

An Exhaustive DPLL Algorithm for Model Counting

Umut Oztok, Adnan Darwiche
2018 The Journal of Artificial Intelligence Research  
The modular design is based on the separation of the core model counting algorithm from SAT solving techniques.  ...  State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI.  ...  In this case, the algorithm counts the models for each component independently and combines the results.  ... 
doi:10.1613/jair.1.11201 fatcat:wgj7krkkargrxniumyrtobqnzy
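The decompose-count-combine step described in this snippet can be sketched as follows (a toy illustration of the general #SAT technique, not the paper's actual algorithm): clauses are split into variable-disjoint connected components, each component is counted independently, and the per-component counts are multiplied, with an extra factor of 2 for every variable that appears in no clause.

```python
from itertools import product

def split_components(clauses):
    """Group clauses into connected components: two clauses are
    connected when they share a variable (union-find over variables)."""
    parent = {}

    def find(v):
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for clause in clauses:
        first = abs(clause[0])
        for lit in clause[1:]:
            parent[find(abs(lit))] = find(first)

    groups = {}
    for clause in clauses:
        groups.setdefault(find(abs(clause[0])), []).append(clause)
    return list(groups.values())

def brute_count(clauses):
    """Count models of a small CNF by enumeration over its variables."""
    variables = sorted({abs(lit) for c in clauses for lit in c})
    return sum(
        all(any(asn[abs(lit)] == (lit > 0) for lit in c) for c in clauses)
        for bits in product([False, True], repeat=len(variables))
        for asn in [dict(zip(variables, bits))]
    )

def count_models(clauses, num_vars):
    """Model count over variables 1..num_vars: multiply the independent
    per-component counts, then 2**k for the k unconstrained variables."""
    total = 1
    mentioned = set()
    for component in split_components(clauses):
        total *= brute_count(component)
        mentioned |= {abs(lit) for c in component for lit in c}
    return total * 2 ** (num_vars - len(mentioned))
```

For (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ x3 over four variables, the components {x1, x2} and {x3} contribute counts 2 and 1, and the free variable x4 doubles the product, giving `count_models([[1, 2], [-1, 2], [3]], 4) == 4`. Production counters do this decomposition dynamically on the residual formula at every branching step, rather than once up front as here.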

Discrete sequence prediction and its applications

Philip Laird, Ronald Saul
1994 Machine Learning  
We present a simple and practical algorithm (TDAG) for discrete sequence prediction.  ...  Our experiments verify its performance on data compression tasks and show how it applies to two problems: dynamically optimizing Prolog programs for good average-case behavior and maintaining a cache for  ...  Krishnan, Steve Minton, Andy Philips, Armand Prieditis, Jeff Vitter, and Monte Zweben  ... 
doi:10.1007/bf01000408 fatcat:dr6df3jxmndyvlqfnyg5am5da4

Dsharp: Fast d-DNNF Compilation with sharpSAT [chapter]

Christian Muise, Sheila A. McIlraith, J. Christopher Beck, Eric I. Hsu
2012 Lecture Notes in Computer Science  
One particularly effective target language is Deterministic Decomposable Negation Normal Form (d-DNNF).  ...  Knowledge compilation is a compelling technique for dealing with the intractability of propositional reasoning.  ...  Acknowledgements The authors gratefully acknowledge funding from the Ontario Ministry of Innovation and the Natural Sciences and Engineering Research Council of Canada (NSERC).  ... 
doi:10.1007/978-3-642-30353-1_36 fatcat:lzu5cavdbzbgnhtwy2ggl2glge

Optimizing Indirect Memory References with milk

Vladimir Kiriansky, Yunming Zhang, Saman Amarasinghe
2016 Proceedings of the 2016 International Conference on Parallel Architectures and Compilation - PACT '16  
A simple semantic model enhances programmer productivity for efficient parallelization with OpenMP.  ...  Modern applications such as graph and data analytics, when operating on real world data, have working sets much larger than cache capacity and are bottlenecked by DRAM.  ...  Acknowledgments We thank our anonymous reviewers and our shepherd Bronis de Supinski for their helpful probing questions and their specific suggestions for improving our presentation.  ... 
doi:10.1145/2967938.2967948 dblp:conf/IEEEpact/KirianskyZA16 fatcat:qcyof77l4nfmboitbocuxtamfe

DPLL with a Trace: From SAT to Knowledge Compilation

Jinbo Huang, Adnan Darwiche
2005 International Joint Conference on Artificial Intelligence  
These languages are decreasingly succinct, yet increasingly tractable, supporting such polynomial-time queries as model counting and equivalence testing. Our contribution is thus twofold.  ...  As interesting examples, we unveil the "hidden power" of several recent model counters, point to one of their potential limitations, and identify a key limitation of DPLLbased procedures in general.  ...  Acknowledgments We thank the reviewers for commenting on an earlier version of this paper. This work has been partially supported by NSF grant IIS-9988543 and MURI grant N00014-00-1-0617.  ... 
dblp:conf/ijcai/HuangD05 fatcat:qlgvd4xhrjaxjpii2hbbc5xege

A New Approach to Model Counting [chapter]

Wei Wei, Bart Selman
2005 Lecture Notes in Computer Science  
Many AI tasks, such as calculating degree of belief and reasoning in Bayesian networks, are computationally equivalent to model counting.  ...  It has been shown that model counting in even the most restrictive logics, such as Horn logic, monotone CNF and 2CNF, is intractable in the worst-case.  ...  The most recent additions to DPLL model counting are the ideas of component caching and clause learning [12] .  ... 
doi:10.1007/11499107_24 fatcat:kcx3wvphabbnpmjofo732qlmoe

#∃SAT: Projected Model Counting [chapter]

Rehan Abdul Aziz, Geoffrey Chu, Christian Muise, Peter Stuckey
2015 Lecture Notes in Computer Science  
Projected model counting arises when some parts of the model are irrelevant to the counts, in particular when we require additional variables to model the problem we are counting in SAT.  ...  Model counting is an essential tool in probabilistic reasoning.  ...  Model counters use this property and split the residual into disjoint components and count the models of each component and multiply them to get the count of the residual.  ... 
doi:10.1007/978-3-319-24318-4_10 fatcat:5t2w54bkbzbcdfemvdigrgsfee

Projected Model Counting [article]

Rehan Abdul Aziz and Geoffrey Chu and Christian Muise and Peter Stuckey
2015 arXiv   pre-print
We discuss three different approaches to projected model counting (two of which are novel), and compare their performance on different benchmark problems.  ...  Projected model counting arises when some parts of the model are irrelevant to the counts, in particular when we require additional variables to model the problem we are counting in SAT.  ...  Model counters use this property and split the residual into disjoint components and count the models of each component and multiply them to get the count of the residual.  ... 
arXiv:1507.07648v1 fatcat:d5tvldy6yncenpv7xl5nmssxre

Decomposing SAT Problems into Connected Components

Armin Biere, Carsten Sinz, Daniel Le Berre, Laurent Simon
2006 Journal on Satisfiability, Boolean Modeling and Computation  
Many SAT instances can be decomposed into connected components either initially after preprocessing or during the solution phase when new unit conflict clauses are learned.  ...  This observation allows components to be solved individually. We present a technique to handle components within a GRASP-like SAT solver without requiring much change to the solver.  ...  carry on the recursive decomposition approach of Bayardo and Pehoushek, and combine it with clause learning and component caching.  ... 
doi:10.3233/sat190022 fatcat:pjqatho6qneovf3b3ll5zeh5ae