IA Scholar Query: Approximating Boolean functions by OBDDs.
https://scholar.archive.org/
Internet Archive Scholar query results feed (info@archive.org), Sat, 10 Sep 2022 00:00:00 GMT

Precise Quantitative Analysis of Binarized Neural Networks: A BDD-based Approach
https://scholar.archive.org/work/avaxygccq5bubhgl7t6zbrwk3q
As a new programming paradigm, neural network based machine learning has expanded its application to many real-world problems. Due to the black-box nature of neural networks, verifying and explaining their behavior is becoming increasingly important, especially when they are deployed in safety-critical applications. Existing verification work mostly focuses on qualitative verification, which asks whether there exists an input (in a specified region) for a neural network such that a property (e.g., local robustness) is violated. However, in many practical applications, such an (adversarial) input almost surely exists, which makes a qualitative answer less meaningful. In this work, we study a more interesting yet more challenging problem, i.e., quantitative verification of neural networks, which asks how often a property is satisfied or violated. We target binarized neural networks (BNNs), the 1-bit quantization of general neural networks. BNNs have attracted increasing attention in deep learning recently, as they can drastically reduce memory storage and execution time with bit-wise operations, which is crucial in resource-constrained scenarios, e.g., embedded devices for the Internet of Things. Towards quantitative verification of BNNs, we propose a novel algorithmic approach for encoding BNNs as Binary Decision Diagrams (BDDs), a widely studied model in formal verification and knowledge representation. By exploiting the internal structure of the BNNs, our encoding translates the input-output relation of blocks in BNNs to cardinality constraints, which are then encoded by BDDs. Based on the new BDD encoding, we develop a quantitative verification framework for BNNs in which precise and comprehensive analysis of BNNs can be performed. To improve the scalability of BDD encoding, we also investigate parallelization strategies at various levels. We demonstrate applications of our framework by providing quantitative robustness verification and interpretability for BNNs.
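The structural fact behind such cardinality-constraint encodings is that a threshold function has few distinct subfunctions under a fixed variable order, so its reduced OBDD stays small. A minimal sketch of that level-by-level recursion (illustrative only, not the authors' encoder; the function name is ours):

```python
from functools import lru_cache

def threshold_models(n, k):
    """Count assignments of n Boolean inputs with at least k ones,
    via the same level-by-level recursion a BDD package performs
    for the cardinality constraint x1 + ... + xn >= k."""
    @lru_cache(maxsize=None)
    def sub(i, need):
        # i inputs still undecided; 'need' more ones required.
        # Only the clamped value of 'need' matters here, which is
        # why the reduced OBDD has at most O(n * k) nodes.
        if need <= 0:
            return 2 ** i   # residual subfunction is constant 1
        if need > i:
            return 0        # residual subfunction is constant 0
        return sub(i - 1, need - 1) + sub(i - 1, need)
    return sub(n, k)
```

For example, `threshold_models(4, 2)` counts the 11 assignments of four bits with at least two ones; the same memoized table doubles as a model counter, which is what makes BDD encodings attractive for quantitative verification.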
An extensive experimental evaluation confirms the effectiveness and efficiency of our approach.
Yedi Zhang, Zhe Zhao, Guangke Chen, Fu Song, Taolue Chen. Sat, 10 Sep 2022 00:00:00 GMT

A Quantum Algorithm for Computing All Diagnoses of a Switching Circuit
https://scholar.archive.org/work/2qw2rvz4xnb2bhslcuppzfmfke
Faults are stochastic by nature, while most man-made systems, and especially computers, work deterministically. This necessitates linking probability theory with mathematical logic, automata, and switching circuit theory. This paper provides such a connection via quantum information theory, an intuitive approach since quantum physics obeys probability laws. We provide a novel approach for computing the diagnoses of switching circuits with gate-based quantum computers. The approach is based on the idea of putting the qubits representing faults in superposition and computing all, often exponentially many, diagnoses simultaneously. We empirically compare the quantum algorithm for diagnostics to an approach based on SAT and model counting. For a benchmark of combinational circuits we establish an error of less than one percent in estimating the true probability of faults.
Alexander Feldman, Johan de Kleer, Ion Matei. Thu, 08 Sep 2022 00:00:00 GMT

Inapproximability of a Pair of Forms Defining a Partial Boolean Function
https://scholar.archive.org/work/72qsqqt4xrgyhcbhk3ibq355be
We consider the problem of jointly minimizing forms of two Boolean functions f, g: {0,1}^J → {0,1} such that f + g ≤ 1, so as to separate disjoint sets A, B ⊆ {0,1}^J such that f(A) = {1} and g(B) = {1}. We hypothesize that this problem is easier to solve or approximate than the well-understood problem of minimizing the form of one Boolean function h: {0,1}^J → {0,1} such that h(A) = {1} and h(B) = {0}. For a large class of forms, including binary decision trees and ordered binary decision diagrams, we refute this hypothesis. For disjunctive normal forms, we show that the problem is at least as hard as MIN-SET-COVER. For all these forms, we establish that no o(ln(|A| + |B| - 1))-approximation algorithm exists unless P = NP.
David Stein, Bjoern Andres. Thu, 08 Sep 2022 00:00:00 GMT

FPGA Acceleration of Probabilistic Sentential Decision Diagrams with High-Level Synthesis
https://scholar.archive.org/work/m2m5q5ewjff57jlmnxyht6cwkm
Probabilistic Sentential Decision Diagrams (PSDDs) provide efficient methods for modeling and reasoning with probability distributions in the presence of massive logical constraints. PSDDs can also be synthesized from graphical models such as Bayesian networks (BNs), therefore offering a new set of tools for performing inference on these models (in time linear in the PSDD size). Despite these favorable characteristics, we have found multiple challenges in the FPGA acceleration of PSDDs. Problems include limited parallelism, data dependency, and small pipeline iterations. In this paper, we propose several optimization techniques to solve these issues with novel pipeline scheduling and parallelization schemes. We designed the PSDD kernel with a high-level synthesis (HLS) tool for ease of implementation and verified it on a Xilinx Alveo U250 board. Experimental results show that our methods improve the baseline FPGA HLS implementation performance by 2,200X and the multicore CPU implementation by 20X. The proposed design also outperforms state-of-the-art BN and Sum Product Network (SPN) accelerators that store the graph information in memory.
Young-kyu Choi, Carlos Santillana, Yujia Shen, Adnan Darwiche, Jason Cong. Tue, 06 Sep 2022 00:00:00 GMT

Explainability via Short Formulas: the Case of Propositional Logic with Implementation
https://scholar.archive.org/work/y4civzpzmjb2pgsjgw6is3m44u
We conceptualize explainability in terms of logic and formula size, giving a number of related definitions of explainability in a very general setting. Our main interest is the so-called special explanation problem which aims to explain the truth value of an input formula in an input model. The explanation is a formula of minimal size that (1) agrees with the input formula on the input model and (2) transmits the involved truth value to the input formula globally, i.e., on every model. As an important example case, we study propositional logic in this setting and show that the special explainability problem is complete for the second level of the polynomial hierarchy. We also provide an implementation of this problem in answer set programming and investigate its capacity in relation to explaining answers to the n-queens and dominating set problems.
Reijo Jaakkola, Tomi Janhunen, Antti Kuusisto, Masood Feyzbakhsh Rankooh, Miikka Vilander. Sat, 03 Sep 2022 00:00:00 GMT

Tight Bounds for Tseitin Formulas
https://scholar.archive.org/work/rgbe3x7fabbuvofo4j4s3ni57a
We show that for any connected graph G the size of any regular resolution or OBDD(∧, reordering) refutation of a Tseitin formula based on G is at least 2^Ω(tw(G)), where tw(G) is the treewidth of G. These lower bounds improve upon the previously known bounds and, moreover, they are tight. For both of the proof systems, there are constructive upper bounds that almost match the obtained lower bounds, hence the class of Tseitin formulas is almost automatable for regular resolution and for OBDD(∧, reordering).
Dmitry Itsykson, Artur Riazanov, Petr Smirnov, Kuldeep S. Meel, Ofer Strichman. Thu, 28 Jul 2022 00:00:00 GMT

The White-Box Adversarial Data Stream Model
https://scholar.archive.org/work/o6xrp3b75jct5fhclkfxtqothm
We study streaming algorithms in the white-box adversarial model, where the stream is chosen adaptively by an adversary who observes the entire internal state of the algorithm at each time step. We show that nontrivial algorithms are still possible. We first give a randomized algorithm for the L_1-heavy hitters problem that outperforms the optimal deterministic Misra-Gries algorithm on long streams. If the white-box adversary is computationally bounded, we use cryptographic techniques to reduce the memory of our L_1-heavy hitters algorithm even further and to design a number of additional algorithms for graph, string, and linear algebra problems. The existence of such algorithms is surprising, as the streaming algorithm does not even have a secret key in this model, i.e., its state is entirely known to the adversary. One algorithm we design is for estimating the number of distinct elements in a stream with insertions and deletions achieving a multiplicative approximation and sublinear space; such an algorithm is impossible for deterministic algorithms. We also give a general technique that translates any two-player deterministic communication lower bound to a lower bound for randomized algorithms robust to a white-box adversary. In particular, our results show that for all p ≥ 0, there exists a constant C_p > 1 such that any C_p-approximation algorithm for F_p moment estimation in insertion-only streams with a white-box adversary requires Ω(n) space for a universe of size n. Similarly, there is a constant C > 1 such that any C-approximation algorithm in an insertion-only stream for matrix rank requires Ω(n) space with a white-box adversary. Our algorithmic results based on cryptography thus show a separation between computationally bounded and unbounded adversaries. (Abstract shortened to meet arXiv limits.)
Miklos Ajtai, Vladimir Braverman, T.S. Jayram, Sandeep Silwal, Alec Sun, David P. Woodruff, Samson Zhou. Sat, 23 Jul 2022 00:00:00 GMT

Quantum Speedups for Treewidth
https://scholar.archive.org/work/qogq55ptingxfg33akagrkvjkm
In this paper, we study quantum algorithms for computing the exact value of the treewidth of a graph. Our algorithms are based on the classical algorithm by Fomin and Villanger (Combinatorica 32, 2012) that uses O(2.616ⁿ) time and polynomial space. We show three quantum algorithms with the following complexity, using QRAM in both exponential space algorithms: - O(1.618ⁿ) time and polynomial space; - O(1.554ⁿ) time and O(1.452ⁿ) space; - O(1.538ⁿ) time and space. In contrast, the fastest known classical algorithm for treewidth uses O(1.755ⁿ) time and space. The first two speed-ups are obtained in a fairly straightforward way. The first version uses additionally only Grover's search and provides a quadratic speedup. The second speedup is more time-efficient and uses both Grover's search and the quantum exponential dynamic programming by Ambainis et al. (SODA '19). The third version uses the specific properties of the classical algorithm and treewidth, with a modified version of the quantum dynamic programming on the hypercube. As a small side result, we give a new classical time-space tradeoff for computing treewidth in O^*(2ⁿ) time and O^*(√{2ⁿ}) space.
Vladislavs Kļevickis, Krišjānis Prūsis, Jevgēnijs Vihrovs, François Le Gall, Tomoyuki Morimae. Mon, 04 Jul 2022 00:00:00 GMT

Computing expected multiplicities for bag-TIDBs with bounded multiplicities
https://scholar.archive.org/work/p7h62kunmjg7jp6wygmyfzz7be
In this work, we study the problem of computing a tuple's expected multiplicity over probabilistic databases with bag semantics (where each tuple is associated with a multiplicity) exactly and approximately. We consider bag-TIDBs where we have a bound c on the maximum multiplicity of each tuple and tuples are independent probabilistic events (we refer to such databases as c-TIDBs). We are specifically interested in the fine-grained complexity of computing expected multiplicities and how it compares to the complexity of deterministic query evaluation algorithms; if these complexities are comparable, it opens the door to practical deployment of probabilistic databases. Unfortunately, our results imply that computing expected multiplicities for c-TIDBs based on the results produced by such query evaluation algorithms introduces super-linear overhead (under parameterized complexity hardness assumptions/conjectures). We proceed to study approximation of expected result tuple multiplicities for positive relational algebra queries (RA^+) over c-TIDBs and for a non-trivial subclass of block-independent databases (BIDBs). We develop a sampling algorithm that computes a 1±ϵ approximation of the expected multiplicity of an output tuple in time linear in the runtime of the corresponding deterministic query for any RA^+ query.
Su Feng, Boris Glavic, Aaron Huber, Oliver Kennedy, Atri Rudra. Fri, 01 Jul 2022 00:00:00 GMT

Lower Bounds on Intermediate Results in Bottom-Up Knowledge Compilation
https://scholar.archive.org/work/xuagai4hz5dfhkd7ptorm77vxa
Bottom-up knowledge compilation is a paradigm for generating representations of functions by iteratively conjoining constraints using a so-called apply function. When the input is not efficiently compilable into a language - generally a class of circuits - because optimal compiled representations are provably large, the problem is not the compilation algorithm as much as the choice of a language too restrictive for the input. In contrast, in this paper, we look at CNF formulas for which very small circuits exist and look at the efficiency of their bottom-up compilation in one of the most general languages, namely that of structured decomposable negation normal forms (str-DNNF). We prove that, while the inputs have constant size representations as str-DNNF, any bottom-up compilation in the general setting where conjunction and structure modification are allowed takes exponential time and space, since large intermediate results have to be produced. This unconditionally proves that the inefficiency of bottom-up compilation resides in the bottom-up paradigm itself.
Alexis de Colnet, Stefan Mengel. Tue, 28 Jun 2022 00:00:00 GMT

On the Computation of Necessary and Sufficient Explanations
https://scholar.archive.org/work/ipctvlr4ozcdtl2nbbvrgzllce
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically.
Adnan Darwiche, Chunxi Ji. Tue, 28 Jun 2022 00:00:00 GMT

On the Tractability of SHAP Explanations
https://scholar.archive.org/work/ejbkobaitfgrbhrgamafmbw7ue
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable for a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard.
Guy Van den Broeck, Anton Lykov, Maximilian Schleich, Dan Suciu. Thu, 23 Jun 2022 00:00:00 GMT

Flexible FOND Planning with Explicit Fairness Assumptions
https://scholar.archive.org/work/oohi4xqyebfa7lxe5hlojq3dte
We consider the problem of reaching a propositional goal condition in fully-observable nondeterministic (FOND) planning under a general class of fairness assumptions that are given explicitly. The fairness assumptions are of the form A/B and say that state trajectories that contain infinite occurrences of an action a from A in a state s and finite occurrences of actions from B must also contain infinite occurrences of action a in s followed by each one of its possible outcomes. The infinite trajectories that violate this condition are deemed unfair, and the solutions are policies for which all the fair trajectories reach a goal state. We show that strong and strong-cyclic FOND planning, as well as QNP planning, a planning model introduced recently for generalized planning, are all special cases of FOND planning with fairness assumptions of this form, which can also be combined. FOND+ planning, as this form of planning is called, combines the syntax of FOND planning with some of the versatility of LTL for expressing fairness constraints. A sound and complete FOND+ planner is implemented by reducing FOND+ planning to answer set programs, and its performance is evaluated in comparison with FOND and QNP planners, and LTL synthesis tools. Two other FOND+ planners are introduced as well which are more scalable but are not complete.
Ivan D. Rodriguez, Blai Bonet, Sebastian Sardina, Hector Geffner. Thu, 23 Jun 2022 00:00:00 GMT

Semantic Probabilistic Layers for Neuro-Symbolic Learning
https://scholar.archive.org/work/wl4j4rqvivgexgguz3onqivovi
We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints. Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space while remaining amenable to end-to-end learning via maximum likelihood. SPLs combine exact probabilistic inference with logical reasoning in a clean and modular way, learning complex distributions and restricting their support to solutions of the constraint. As such, they can faithfully, and efficiently, model complex SOP tasks beyond the reach of alternative neuro-symbolic approaches. We empirically demonstrate that SPLs outperform these competitors in terms of accuracy on challenging SOP tasks, including hierarchical multi-label classification, pathfinding and preference learning, while retaining perfect constraint satisfaction.
Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van den Broeck, Antonio Vergari. Wed, 01 Jun 2022 00:00:00 GMT

Tractable Boolean and Arithmetic Circuits
https://scholar.archive.org/work/lrm34sqt5fhvnfvnaagkdd2o4m
Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now. These circuits were initially proposed as "compiled objects," meant to facilitate logical and probabilistic reasoning, as they permit various types of inference to be performed in linear-time and a feed-forward fashion like neural networks. In more recent years, the role of tractable circuits has significantly expanded as they became a computational and semantical backbone for some approaches that aim to integrate knowledge, reasoning and learning. In this article, we review the foundations of tractable circuits and some associated milestones, while focusing on their core properties and techniques that make them particularly useful for the broad aims of neuro-symbolic AI.
Adnan Darwiche. Mon, 07 Feb 2022 00:00:00 GMT

Quantum speedups for treewidth
https://scholar.archive.org/work/u342kec5xrc63aq5sxvakay5na
In this paper, we study quantum algorithms for computing the exact value of the treewidth of a graph. Our algorithms are based on the classical algorithm by Fomin and Villanger (Combinatorica 32, 2012) that uses $O(2.616^n)$ time and polynomial space. We show three quantum algorithms with the following complexity, using QRAM in both exponential space algorithms: $\bullet$ $O(1.618^n)$ time and polynomial space; $\bullet$ $O(1.554^n)$ time and $O(1.452^n)$ space; $\bullet$ $O(1.538^n)$ time and space. In contrast, the fastest known classical algorithm for treewidth uses $O(1.755^n)$ time and space. The first two speed-ups are obtained in a fairly straightforward way. The first version uses additionally only Grover's search and provides a quadratic speedup. The second speedup is more time-efficient and uses both Grover's search and the quantum exponential dynamic programming by Ambainis et al. (SODA '19). The third version uses the specific properties of the classical algorithm and treewidth, with a modified version of the quantum dynamic programming on the hypercube. Lastly, as a small side result, we also give a new classical time-space tradeoff for computing treewidth in $O^*(2^n)$ time and $O^*(\sqrt{2^n})$ space.
Vladislavs Kļevickis, Krišjānis Prūsis, Jevgēnijs Vihrovs. Tue, 01 Feb 2022 00:00:00 GMT

Large Neighbourhood Search for Anytime MaxSAT Solving
https://scholar.archive.org/work/fcyj4qjucneqjmj74npbufrkrm
Large Neighbourhood Search (LNS) is an algorithmic framework for optimization problems that can yield good performance in many domains. In this paper, we present a method for applying LNS to improve anytime maximum satisfiability (MaxSAT) solving by introducing a neighbourhood selection policy that shows good empirical performance. We show that our LNS solver can often improve the suboptimal solutions produced by other anytime MaxSAT solvers. When starting with a suboptimal solution of reasonable quality, our approach often finds a better solution than the original anytime solver can achieve. We demonstrate that implementing our LNS solver on top of three different state-of-the-art anytime solvers improves the anytime performance of all three solvers within the standard time limit used in the incomplete tracks of the annual MaxSAT Evaluation.
Randy Hickey, Fahiem Bacchus

Lower Bounds on Intermediate Results in Bottom-Up Knowledge Compilation
https://scholar.archive.org/work/reeiiwg6ybh4diuhlgu7vtilkq
Bottom-up knowledge compilation is a paradigm for generating representations of functions by iteratively conjoining constraints using a so-called apply function. When the input is not efficiently compilable into a language - generally a class of circuits - because optimal compiled representations are provably large, the problem is not the compilation algorithm as much as the choice of a language too restrictive for the input. In contrast, in this paper, we look at CNF formulas for which very small circuits exist and look at the efficiency of their bottom-up compilation in one of the most general languages, namely that of structured decomposable negation normal forms (str-DNNF). We prove that, while the inputs have constant size representations as str-DNNF, any bottom-up compilation in the general setting where conjunction and structure modification are allowed takes exponential time and space, since large intermediate results have to be produced. This unconditionally proves that the inefficiency of bottom-up compilation resides in the bottom-up paradigm itself.
Alexis de Colnet, Stefan Mengel. Thu, 23 Dec 2021 00:00:00 GMT

Aggressive aggregation
https://scholar.archive.org/work/dunzzstazbdm7djen45jxys4de
Among the first steps in a compilation pipeline is the construction of an Intermediate Representation (IR), an in-memory representation of the input program. Any attempt at program optimisation, in terms of both size and running time, has to operate on this structure. There may be one or several such IRs; however, most compilers use some form of a Control Flow Graph (CFG) internally. This representation clearly aims at general-purpose programming languages, for which it is well suited and allows for many classical program optimisations. On the other hand, a growing structural difference between the input program and the chosen IR can lose or obfuscate information that can be crucial for effective optimisation. With today's rise of a multitude of different programming languages, Domain-Specific Languages (DSLs), and computing platforms, the classical machine-oriented IR is reaching its limits and a broader variety of IRs is needed. This realisation yielded, for example, the Multi-Level Intermediate Representation (MLIR), a compiler framework that facilitates the creation of a wide range of IRs and encourages their reuse among different programming languages and the corresponding compilers. In this modern spirit, this dissertation explores the potential of Algebraic Decision Diagrams (ADDs) as an IR for (domain-specific) program optimisation. The data structure has remained the state of the art for Boolean function representation for more than thirty years and is well-known for its optimality in size and depth, i.e. running time. As such, it is ideally suited to represent the corresponding classes of programs in the role of an IR. We will discuss its application in a variety of different program domains, ranging from DSLs to machine-learned programs and even to general-purpose programming languages. Two representatives for DSLs, a graphical and a textual one, prove the adequacy of ADDs for the program optimisation of modelled decision services.
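An ADD generalizes a BDD by letting terminals carry arbitrary values, and the workhorse operation is the apply combinator, which combines two diagrams pointwise while sharing repeated subproblems. A toy sketch of the idea, with tuples as internal nodes and numbers as terminals (all names are ours; real packages such as CUDD add unique tables and hash-consing):

```python
def mk(var, lo, hi):
    """Node constructor with the standard reduction rule:
    skip the test when both branches are identical."""
    return lo if lo == hi else (var, lo, hi)

def apply_op(op, f, g, memo=None):
    """Combine two ADDs pointwise. Internal nodes are
    (var_index, low_child, high_child); terminals are numbers."""
    if memo is None:
        memo = {}
    key = (f, g)
    if key in memo:
        return memo[key]
    if not isinstance(f, tuple) and not isinstance(g, tuple):
        res = op(f, g)                      # two terminals: apply the operator
    else:
        vf = f[0] if isinstance(f, tuple) else float("inf")
        vg = g[0] if isinstance(g, tuple) else float("inf")
        v = min(vf, vg)                     # split on the topmost variable
        f0, f1 = (f[1], f[2]) if vf == v else (f, f)
        g0, g1 = (g[1], g[2]) if vg == v else (g, g)
        res = mk(v, apply_op(op, f0, g0, memo),
                    apply_op(op, f1, g1, memo))
    memo[key] = res
    return res

def evaluate(node, assignment):
    """Follow one root-to-terminal path under a variable assignment."""
    while isinstance(node, tuple):
        v, lo, hi = node
        node = hi if assignment[v] else lo
    return node
```

For instance, adding the diagrams for 2*x0 and 3*x1 yields a diagram that evaluates to 5 when both variables are set; the memo table is what keeps chains of apply calls polynomial when the diagrams share structure.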
The resulting DSLs facilitate experimentation with ADDs and provide valuable insight i [...]
Frederik Jakob Gossen, Technische Universität Dortmund. Mon, 20 Dec 2021 00:00:00 GMT

Testing Probabilistic Circuits
https://scholar.archive.org/work/2mylbvpqmrhdbnuuix2smezbci
Probabilistic circuits (PCs) are a powerful modeling framework for representing tractable probability distributions over combinatorial spaces. In machine learning and probabilistic programming, one is often interested in understanding whether the distributions learned using PCs are close to the desired distribution. Thus, given two probabilistic circuits, a fundamental problem of interest is to determine whether their distributions are close to each other. The primary contribution of this paper is a closeness test for PCs with respect to the total variation distance metric. Our algorithm utilizes two common PC queries, counting and sampling. In particular, we provide a poly-time probabilistic algorithm to check the closeness of two PCs when the PCs support tractable approximate counting and sampling. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation of a prototype implementation against a set of 475 PC benchmarks. We find that our test correctly decides the closeness of all 475 PCs within 3600 seconds.
Yash Pote, Kuldeep S. Meel. Thu, 09 Dec 2021 00:00:00 GMT
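For intuition on why counting and sampling suffice for such a test: total variation distance can be rewritten as an expectation over samples from one distribution, TV(p, q) = E_{x~p}[max(0, 1 - q(x)/p(x))], which needs exactly those two oracles. A minimal illustration over explicit finite distributions (a sketch of the identity, not the paper's algorithm):

```python
import random

def tv_exact(p, q):
    """Exact total variation distance between two finite distributions,
    each given as an {outcome: probability} dictionary."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def tv_estimate(p, q, n_samples=100_000, seed=0):
    """Monte Carlo estimate of TV(p, q) that uses only two operations:
    sampling from p, and probability queries on p and q, via the identity
    TV(p, q) = E_{x ~ p}[ max(0, 1 - q(x) / p(x)) ]."""
    rng = random.Random(seed)
    xs = rng.choices(list(p), weights=list(p.values()), k=n_samples)
    return sum(max(0.0, 1.0 - q.get(x, 0.0) / p[x]) for x in xs) / n_samples
```

With p = {'a': 0.5, 'b': 0.5} and q = {'a': 0.75, 'b': 0.25}, the exact distance is 0.25 and the estimator concentrates around it; for circuits, the dictionaries are replaced by the PC's sampling and (approximate) counting routines.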