IA Scholar Query: Semidefinite Programming and Approximation Algorithms: A Survey.
https://scholar.archive.org/
Internet Archive Scholar query results feed. Contact: info@archive.org. Mon, 03 Oct 2022 00:00:00 GMT.

High-dimensional Censored Regression via the Penalized Tobit Likelihood
https://scholar.archive.org/work/rpfgygmw6jhf3ldvwa6bmppr7y
High-dimensional regression and regression with a left-censored response are each well-studied topics. In spite of this, few methods have been proposed which deal with both of these complications simultaneously. The Tobit model – long the standard method for censored regression in economics – has not been adapted for high-dimensional regression at all. To fill this gap and bring up-to-date techniques from high-dimensional statistics to the field of high-dimensional left-censored regression, we propose several penalized Tobit models. We develop a fast algorithm which combines quadratic minimization with coordinate descent to compute the penalized Tobit solution path. Theoretically, we analyze the Tobit lasso and Tobit with a folded concave penalty, bounding the ℓ_2 estimation loss for the former and proving that a local linear approximation estimator for the latter possesses the strong oracle property. Through an extensive simulation study, we find that our penalized Tobit models provide more accurate predictions and parameter estimates than other methods. We use a penalized Tobit model to analyze high-dimensional left-censored HIV viral load data from the AIDS Clinical Trials Group and identify potential drug resistance mutations in the HIV genome. Appendices contain intermediate theoretical results and technical proofs.
Tate Jacobson, Hui Zou. Mon, 03 Oct 2022 00:00:00 GMT.
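For orientation, the objective the abstract describes can be sketched directly: the Tobit negative log-likelihood mixes a Gaussian density term for uncensored responses with a Gaussian CDF term for censored ones, plus a lasso penalty. This is a minimal stdlib-only illustration of the objective, not the authors' quadratic-minimization-plus-coordinate-descent solver; all variable names and the toy data below are mine.

```python
import math

def gauss_logpdf(z):
    # log density of a standard normal at z
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def gauss_logcdf(z):
    # log Phi(z) via the error function (stdlib only)
    return math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def tobit_lasso_objective(beta, sigma, X, y, lam, c=0.0):
    """Penalized Tobit negative log-likelihood for a response left-censored at c.

    Uncensored observations (y > c) contribute a Gaussian density term;
    censored ones (y == c) contribute the mass Phi((c - x'beta)/sigma).
    """
    nll = 0.0
    for xi, yi in zip(X, y):
        mu = sum(b * x for b, x in zip(beta, xi))
        if yi > c:
            nll -= gauss_logpdf((yi - mu) / sigma) - math.log(sigma)
        else:
            nll -= gauss_logcdf((c - mu) / sigma)
    return nll / len(y) + lam * sum(abs(b) for b in beta)

# Toy data: second response is censored at 0.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [0.5, 0.0, 1.2]
beta = [0.3, -0.2]
obj_unpenalized = tobit_lasso_objective(beta, 1.0, X, y, lam=0.0)
obj_penalized = tobit_lasso_objective(beta, 1.0, X, y, lam=1.0)
```

Any proximal or coordinate-descent routine applied to this objective would be a crude stand-in for the solution-path algorithm the paper develops.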
Quantum Error Mitigation
https://scholar.archive.org/work/wl3a3jdslrfjtp6ln5oxgvmv7m
For quantum computers to successfully solve real-world problems, it is necessary to tackle the challenge of noise: the errors which occur in elementary physical components due to unwanted or imperfect interactions. The theory of quantum fault tolerance can provide an answer in the long term, but in the coming era of 'NISQ' machines we must seek to mitigate errors rather than completely remove them. This review surveys the diverse methods that have been proposed for quantum error mitigation, assesses their in-principle efficacy, and then describes the hardware demonstrations achieved to date. We identify the commonalities and limitations among the methods, noting how mitigation methods can be chosen according to the primary type of noise present, including algorithmic errors. Open problems in the field are identified and we discuss the prospects for realising mitigation-based devices that can deliver quantum advantage with an impact on science and business.
Zhenyu Cai, Ryan Babbush, Simon C. Benjamin, Suguru Endo, William J. Huggins, Ying Li, Jarrod R. McClean, Thomas E. O'Brien. Mon, 03 Oct 2022 00:00:00 GMT.

Polynomials with Lorentzian Signature, and Computing Permanents via Hyperbolic Programming
https://scholar.archive.org/work/u25rnqynrngu7jsyhzvgjavg5i
We study the class of polynomials whose Hessians evaluated at any point of a closed convex cone have Lorentzian signature. This class is a generalization of the remarkable class of Lorentzian polynomials. We prove that hyperbolic polynomials and conic stable polynomials belong to this class, and that the set of polynomials with Lorentzian signature is closed. Finally, we develop a hyperbolic-programming method for computing permanents of matrices in a class that includes nonsingular k-locally singular matrices.
Papri Dey. Sat, 01 Oct 2022 00:00:00 GMT.

Shuffled linear regression through graduated convex relaxation
https://scholar.archive.org/work/uv7yruarrrdb5ielwkklka7s4a
The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown. This problem arises in a wide range of applications including survey data, in which one needs to decide whether the anonymity of the responses can be preserved while uncovering significant statistical connections. In this work, we propose a novel optimization algorithm for shuffled linear regression based on a posterior-maximizing objective function assuming a Gaussian noise prior. We compare and contrast our approach with existing methods on synthetic and real data. We show that our approach performs competitively while achieving empirical running-time improvements. Furthermore, we demonstrate that our algorithm is able to utilize side information in the form of seeds, which recently came to prominence in related problems.
Efe Onaran, Soledad Villar. Fri, 30 Sep 2022 00:00:00 GMT.
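To make the problem statement concrete: under Gaussian noise, maximizing the posterior over the unknown correspondence amounts, for fixed weights, to finding the permutation that best matches predictions to responses. On tiny instances this can be solved by exhaustive search; this toy sketch (mine, not the paper's graduated convex relaxation, which scales far beyond enumeration) shows the objective being minimized.

```python
import itertools

def shuffled_ls_bruteforce(X, y, w_candidates):
    """Pick (w, permutation) minimizing sum_i (y[pi[i]] - X[i].w)^2 by brute force."""
    best = None
    for w in w_candidates:
        preds = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
        for pi in itertools.permutations(range(len(y))):
            loss = sum((y[pi[i]] - preds[i]) ** 2 for i in range(len(y)))
            if best is None or loss < best[0]:
                best = (loss, w, pi)
    return best

# Noiseless toy: true weight 2.0, responses shuffled out of order.
X = [[1.0], [2.0], [3.0]]
y = [6.0, 2.0, 4.0]
loss, w, pi = shuffled_ls_bruteforce(X, y, [(1.0,), (2.0,), (3.0,)])
```

The permutation search is factorial in the number of samples, which is exactly why relaxation-based methods like the paper's are needed in practice.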
On Quantum Speedups for Nonconvex Optimization via Quantum Tunneling Walks
https://scholar.archive.org/work/cpzrkwavdvckvmpkoq46drldom
Classical algorithms are often not effective for solving nonconvex optimization problems where local minima are separated by high barriers. In this paper, we explore possible quantum speedups for nonconvex optimization by leveraging the global effect of quantum tunneling. Specifically, we introduce a quantum algorithm termed the quantum tunneling walk (QTW) and apply it to nonconvex problems where local minima are approximately global minima. We show that QTW achieves quantum speedup over classical stochastic gradient descent (SGD) when the barriers between different local minima are high but thin and the minima are flat. Based on this observation, we construct a specific double-well landscape, where classical algorithms cannot efficiently hit one target well knowing the other well but QTW can when given proper initial states near the known well. Finally, we corroborate our findings with numerical experiments.
Yizhou Liu, Weijie J. Su, Tongyang Li. Thu, 29 Sep 2022 00:00:00 GMT.

Unique Games hardness of Quantum Max-Cut, and a conjectured vector-valued Borell's inequality
https://scholar.archive.org/work/skytuoytojh4tku23vzh2smgum
The Gaussian noise stability of a function f:ℝ^n →{-1, 1} is the expected value of f(x) · f(y) over ρ-correlated Gaussian random variables x and y. Borell's inequality states that for -1 ≤ρ≤ 0, this is minimized by the halfspace f(x) = sign(x_1). In this work, we generalize this result to hold for functions f:ℝ^n → S^{k-1} which output k-dimensional unit vectors. Our main conjecture, which we call the vector-valued Borell's inequality, asserts that the expected value of ⟨ f(x), f(y)⟩ is minimized by the function f(x) = x_≤ k / ‖ x_≤ k‖, where x_≤ k = (x_1, ..., x_k). We give several pieces of evidence in favor of this conjecture, including a proof that it does indeed hold in the special case of n = k. As an application of this conjecture, we show that it implies several hardness of approximation results for a special case of the local Hamiltonian problem related to the anti-ferromagnetic Heisenberg model known as Quantum Max-Cut. This can be viewed as a natural quantum analogue of the classical Max-Cut problem and has been proposed as a useful testbed for developing algorithms. We show the following, assuming our conjecture: (1) The integrality gap of the basic SDP is 0.498, matching an existing rounding algorithm. Combined with existing results, this shows that the basic SDP does not achieve the optimal approximation ratio. (2) It is Unique Games-hard (UG-hard) to compute a (0.956+ε)-approximation to the value of the best product state, matching an existing approximation algorithm. (3) It is UG-hard to compute a (0.956+ε)-approximation to the value of the best (possibly entangled) state.
Yeongwoo Hwang, Joe Neeman, Ojas Parekh, Kevin Thompson, John Wright. Wed, 28 Sep 2022 00:00:00 GMT.
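The scalar case in the abstract can be checked numerically: for the halfspace f(x) = sign(x_1), the noise stability E[f(x) f(y)] has the classical closed form (2/π) arcsin(ρ) (Sheppard's formula, a standard fact not stated in the abstract). A quick Monte Carlo sketch:

```python
import math
import random

def noise_stability_halfspace(rho, n_samples=200_000, seed=0):
    """Estimate E[sign(x1) * sign(y1)] for rho-correlated standard Gaussians."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    total = 0
    for _ in range(n_samples):
        x1 = rng.gauss(0.0, 1.0)
        y1 = rho * x1 + s * rng.gauss(0.0, 1.0)  # y1 is rho-correlated with x1
        total += (1 if x1 >= 0 else -1) * (1 if y1 >= 0 else -1)
    return total / n_samples

estimate = noise_stability_halfspace(-0.5)
exact = (2.0 / math.pi) * math.asin(-0.5)  # = -1/3
```

For ρ = -0.5 the estimate should land near -1/3, the value Borell's inequality says no other function of the same measure can beat.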
Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization
https://scholar.archive.org/work/73uii2nbh5gypmqocm3omqz7re
For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable and the number of cutting planes invoked by traditional remedies is difficult to bound, leading to convergence guarantees that are sublinear relative to the cumulative number of gradient evaluations. We instead propose a multipoint generalization of the gradient descent iteration for local optimization. While designed with general objectives in mind, we are motivated by a "max-of-smooth" model that captures the subdifferential dimension at optimality. We prove linear convergence when the objective is itself max-of-smooth, and experiments suggest a more general phenomenon.
X.Y. Han, Adrian S. Lewis. Tue, 27 Sep 2022 00:00:00 GMT.

On Embeddings and Inverse Embeddings of Input Design for Regularized System Identification
https://scholar.archive.org/work/euw7rxxsrvanff4flkd7i6df4a
Input design is an important problem for system identification and has been well studied for classical system identification, i.e., the maximum likelihood/prediction error method. For the emerging regularized system identification, the study of input design has just started. The problem is often formulated as a non-convex optimization problem that minimizes a scalar measure of the Bayesian mean squared error matrix subject to certain constraints, and the state-of-the-art method is the so-called quadratic mapping and inverse embedding (QMIE) method, in which a time domain inverse embedding (TDIE) is proposed to find the inverse of the quadratic mapping. In this paper, we report some new results on the embeddings/inverse embeddings of the QMIE method. First, we present a general result on the frequency domain inverse embedding (FDIE), which finds the inverse of the quadratic mapping described by the discrete-time Fourier transform. Then we show the relation between the TDIE and the FDIE from a graph signal processing perspective. Finally, motivated by this perspective, we propose a graph-induced embedding and its inverse, which include the previously introduced embeddings as special cases. This deepens the understanding of input design from a new viewpoint beyond the real domain and the frequency domain viewpoints.
Biqiang Mu, Tianshi Chen, He Kong, Bo Jiang, Lei Wang, Junfeng Wu. Tue, 27 Sep 2022 00:00:00 GMT.

Learning Variational Models with Unrolling and Bilevel Optimization
https://scholar.archive.org/work/odozbe4izrg4hl3nup53ash76m
In this paper we consider the problem of learning variational models in the context of supervised learning via risk minimization. Our goal is to provide a deeper understanding of the two approaches to learning variational models: via bilevel optimization and via algorithm unrolling. The former considers the variational model as a lower-level optimization problem below the risk minimization problem, while the latter replaces the lower-level optimization problem by an algorithm that solves said problem approximately. Both approaches are used in practice, but unrolling is much simpler from a computational point of view. To analyze and compare the two approaches, we consider a simple toy model and compute all risks and the respective estimators explicitly. We show that unrolling can be better than the bilevel optimization approach, but also that the performance of unrolling can depend significantly on further parameters, sometimes in unexpected ways: while the stepsize of the unrolled algorithm matters a lot, the number of unrolled iterations matters mainly through its parity, and the even and odd cases behave notably differently.
Christoph Brauer, Niklas Breustedt, Timo de Wolff, Dirk A. Lorenz. Tue, 27 Sep 2022 00:00:00 GMT.
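The bilevel-versus-unrolling distinction can be illustrated on a one-dimensional example (my own, not the paper's toy model): the lower-level problem min_z (z - x)²/2 + λz²/2 has the closed-form solution z* = x/(1+λ) that bilevel optimization works with, while unrolling replaces z* by K gradient steps.

```python
def unrolled_lower_level(x, lam, n_steps, step=0.1, z0=0.0):
    """Approximate argmin_z 0.5*(z - x)**2 + 0.5*lam*z**2 with K gradient steps."""
    z = z0
    for _ in range(n_steps):
        grad = (z - x) + lam * z  # gradient of the lower-level objective
        z -= step * grad
    return z

x, lam = 2.0, 0.5
exact = x / (1.0 + lam)                      # bilevel: exact lower-level solution
approx = unrolled_lower_level(x, lam, 200)   # unrolling: 200 iterations
```

With enough iterations the unrolled iterate converges to the bilevel solution; the paper's point is that with *few* iterations the two learned models can behave quite differently, and in parameter-dependent ways.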
An Overview and Prospective Outlook on Robust Training and Certification of Machine Learning Models
https://scholar.archive.org/work/ngbqmvvyh5hktpa3g2ovnapftm
In this discussion paper, we survey recent research surrounding robustness of machine learning models. As learning algorithms become increasingly popular in data-driven control systems, their robustness to data uncertainty must be ensured in order to maintain reliable safety-critical operations. We begin by reviewing common formalisms for such robustness, and then move on to discuss popular and state-of-the-art techniques for training robust machine learning models as well as methods for provably certifying such robustness. From this unification of robust machine learning, we identify and discuss pressing directions for future research in the area.
Brendon G. Anderson, Tanmay Gautam, Somayeh Sojoudi. Tue, 27 Sep 2022 00:00:00 GMT.

Sequential Convex Programming For Non-Linear Stochastic Optimal Control
https://scholar.archive.org/work/unx2pab3mbgkhff4yq5szq5w7q
This work introduces a sequential convex programming framework for non-linear, finite-dimensional stochastic optimal control, where uncertainties are modeled by a multidimensional Wiener process. We prove that any accumulation point of the sequence of iterates generated by sequential convex programming is a candidate locally-optimal solution for the original problem in the sense of the stochastic Pontryagin Maximum Principle. Moreover, we provide sufficient conditions for the existence of at least one such accumulation point. We then leverage these properties to design a practical numerical method for solving non-linear stochastic optimal control problems based on a deterministic transcription of stochastic sequential convex programming.
Riccardo Bonalli, Thomas Lew, Marco Pavone. Mon, 26 Sep 2022 00:00:00 GMT.

Global Optimization for Cardinality-constrained Minimum Sum-of-Squares Clustering via Semidefinite Programming
https://scholar.archive.org/work/4ndqu6umsjehxbkevjw4ylddoi
The minimum sum-of-squares clustering (MSSC), or k-means type clustering, has been recently extended to exploit prior knowledge on the cardinality of each cluster. Such knowledge is used to increase performance as well as solution quality. In this paper, we propose an exact approach based on the branch-and-cut technique to solve the cardinality-constrained MSSC. For the lower bound routine, we use the semidefinite programming (SDP) relaxation recently proposed by Rujeerapaiboon et al. [SIAM J. Optim. 29(2), 1211-1239, (2019)]. However, this relaxation can be used in a branch-and-cut method only for small-size instances. Therefore, we derive a new SDP relaxation that scales better with the instance size and the number of clusters. In both cases, we strengthen the bound by adding polyhedral cuts. Benefiting from a tailored branching strategy which enforces pairwise constraints, we reduce the complexity of the problems arising in the children nodes. For the upper bound, instead, we present a local search procedure that exploits the solution of the SDP relaxation solved at each node. Computational results show that the proposed algorithm globally solves, for the first time, real-world instances of size 10 times larger than those solved by state-of-the-art exact methods.
Veronica Piccialli, Antonio M. Sudoso. Sun, 25 Sep 2022 00:00:00 GMT.

Verifiability of the Data-Driven Variational Multiscale Reduced Order Model
https://scholar.archive.org/work/zqdym5tjwzbhreuywb2jaafh6u
In this paper, we focus on the mathematical foundations of reduced order model (ROM) closures. First, we extend the verifiability concept from large eddy simulation to the ROM setting. Specifically, we call a ROM closure model verifiable if a small ROM closure model error (i.e., a small difference between the true ROM closure and the modeled ROM closure) implies a small ROM error. Second, we prove that a data-driven ROM closure (i.e., the data-driven variational multiscale ROM) is verifiable. Finally, we investigate the verifiability of the data-driven variational multiscale ROM in the numerical simulation of the one-dimensional Burgers equation and a two-dimensional flow past a circular cylinder at Reynolds numbers Re=100 and Re=1000.
Birgul Koc, Changhong Mou, Honghu Liu, Zhu Wang, Gianluigi Rozza, Traian Iliescu. Sat, 24 Sep 2022 00:00:00 GMT.

Faster Randomized Interior Point Methods for Tall/Wide Linear Programs
https://scholar.archive.org/work/2l2c3uzsdrag3pafgk35lnb4lm
Linear programming (LP) is an extremely useful tool which has been successfully applied to solve various problems in a wide range of areas, including operations research, engineering, economics, and even more abstract mathematical areas such as combinatorics. It is also used in many machine learning applications, such as ℓ_1-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods to solve LPs both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider both feasible and infeasible IPMs for the special case where the number of variables is much larger than the number of constraints. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with iterative solvers such as Conjugate Gradient or Chebyshev Iteration, provably guarantees that IPM algorithms (suitably modified to account for the error incurred by the approximate solver) converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real-world and synthetic data.
Agniva Chowdhury, Gregory Dexter, Palma London, Haim Avron, Petros Drineas. Fri, 23 Sep 2022 00:00:00 GMT.
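The per-iteration bottleneck mentioned in the abstract is a symmetric positive-definite linear solve, which the paper accelerates by preconditioning an iterative method. As background, a bare-bones Conjugate Gradient routine (without the paper's randomized preconditioner) looks like this; the dense list-of-lists representation is purely for illustration.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (dense, list of rows)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)            # residual b - A x, with x = 0 initially
    p = list(r)            # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

CG's iteration count grows with the condition number of A; a good preconditioner (in the paper, a sketching-based one) shrinks that condition number, which is what keeps the modified IPM's iteration complexity unchanged.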
Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
https://scholar.archive.org/work/25vot5vcnfgwnots6s7jzandby
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, yet two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM (SGFM) for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a (δ,ϵ)-Goldstein stationary point of a Lipschitz function f with an expected convergence rate of O(d^{3/2}δ^{-1}ϵ^{-4}), where d is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the MNIST dataset.
Tianyi Lin, Zeyu Zheng, Michael I. Jordan. Fri, 23 Sep 2022 00:00:00 GMT.
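The gradient-free oracle behind such methods is a two-point finite difference along a random unit direction, which gives an unbiased estimate of the gradient of the uniformly smoothed function. This is a minimal sketch of that idea on a nonsmooth convex test function; the step sizes and iteration count are illustrative choices of mine, not the paper's tuned constants, and the paper's analysis covers the much harder nonconvex case.

```python
import math
import random

def gfm_step(f, x, delta, step, rng):
    """One gradient-free step using a two-point estimate of the smoothed gradient."""
    d = len(x)
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm for ui in u]                      # uniform direction on the sphere
    xp = [xi + delta * ui for xi, ui in zip(x, u)]
    xm = [xi - delta * ui for xi, ui in zip(x, u)]
    g = (d / (2.0 * delta)) * (f(xp) - f(xm))        # scalar directional estimate
    return [xi - step * g * ui for xi, ui in zip(x, u)]

f = lambda x: abs(x[0]) + abs(x[1])   # nonsmooth Lipschitz test function
rng = random.Random(0)
x = [3.0, -4.0]
for _ in range(3000):
    x = gfm_step(f, x, delta=0.05, step=0.01, rng=rng)
```

The iterates only ever query function values, never (sub)gradients, which is the point of the "computationally powerful oracles" challenge the abstract raises.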
Noise Stability of Ranked Choice Voting
https://scholar.archive.org/work/tfm7q2o6irbttjuszf6un5pmsa
We conjecture that Borda count is the ranked choice voting method that best preserves the outcome of an election with randomly corrupted votes, among all fair voting methods with small influences satisfying the Condorcet Loser Criterion. This conjecture is an adaptation of the Plurality is Stablest Conjecture to the setting of ranked choice voting. Since the plurality function does not satisfy the Condorcet Loser Criterion, our new conjecture is not directly related to the Plurality is Stablest Conjecture. Nevertheless, we show that the Plurality is Stablest Conjecture implies our new Borda count is Stablest conjecture. We therefore deduce that Borda count is stablest for elections with three candidates when the corrupted votes are nearly uncorrelated with the original votes. We also adapt a dimension reduction argument to this setting, showing that the optimal ranked choice voting method is "low-dimensional." The Condorcet Loser Criterion asserts that a candidate must lose an election if each other candidate is preferred in head-to-head comparisons. Lastly, we discuss a variant of our conjecture with the Condorcet Winner Criterion as a constraint instead of the Condorcet Loser Criterion. In this case, we have no guess for the most stable ranked choice voting method.
Steven Heilman. Thu, 22 Sep 2022 00:00:00 GMT.
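Borda count, the conjectured stablest method, is simple to state in code: with m candidates, each ballot awards m-1 points to its top choice down to 0 for its last, and the highest total wins. A short sketch with made-up ballots:

```python
def borda_count(ballots):
    """ballots: list of rankings, each a tuple of candidates, most to least preferred.

    Returns (winner, score table) under Borda scoring: m-1 points for first
    place, m-2 for second, ..., 0 for last.
    """
    m = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (m - 1 - position)
    return max(scores, key=scores.get), scores

ballots = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C"),
           ("C", "A", "B"), ("B", "A", "C")]
winner, scores = borda_count(ballots)
```

The paper's question is then: if each ballot is independently corrupted with small probability, how often does this winner change, compared with other fair methods satisfying the Condorcet Loser Criterion?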
Hyperstable Sets with Voting and Algorithmic Hardness Applications
https://scholar.archive.org/work/vqsrhcujdbfn7pltt6izwozhre
The noise stability of a Euclidean set A with correlation ρ is the probability that (X,Y)∈ A× A, where X,Y are standard Gaussian random vectors with correlation ρ∈(0,1). It is well-known that a Euclidean set of fixed Gaussian volume that maximizes noise stability must be a half space. For a partition of Euclidean space into m>2 parts each of Gaussian measure 1/m, it is still unknown what sets maximize the sum of their noise stabilities. In this work, we classify partitions maximizing noise stability that are also critical points for the derivative of noise stability with respect to ρ. We call a partition satisfying these conditions hyperstable. Under the assumption that a maximizing partition is hyperstable, we prove:
* a (conditional) version of the Plurality is Stablest Conjecture for 3 or 4 candidates;
* a (conditional) sharp Unique Games hardness result for MAX-m-CUT for m=3 or 4;
* a (conditional) version of the Propeller Conjecture of Khot and Naor for 4 sets.
We also show that a symmetric set that is hyperstable must be star-shaped. For partitions of Euclidean space into m>2 parts of fixed (but perhaps unequal) Gaussian measure, the hyperstable property can only be satisfied when all of the parts have Gaussian measure 1/m. So, as our main contribution, we have identified a possible strategy for proving the full Plurality is Stablest Conjecture and the full sharp hardness for MAX-m-CUT: to prove both statements, it suffices to show that sets maximizing noise stability are hyperstable. This last point is crucial since any proof of the Plurality is Stablest Conjecture must use a property that is special to partitions of sets into equal measures, since the conjecture is false in the unequal measure case.
Steven Heilman. Thu, 22 Sep 2022 00:00:00 GMT.

Learning-Augmented Algorithms for Online Linear and Semidefinite Programming
https://scholar.archive.org/work/spz7ak4xpfgp3cgt53ingvgdj4
Semidefinite programming (SDP) is a unifying framework that generalizes both linear programming and quadratically-constrained quadratic programming, while also yielding efficient solvers, both in theory and in practice. However, there exist known impossibility results for approximating the optimal solution when constraints for covering SDPs arrive in an online fashion. In this paper, we study online covering linear and semidefinite programs in which the algorithm is augmented with advice from a possibly erroneous predictor. We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency. On the other hand, if the predictor is inaccurate, under some technical conditions, we achieve results that match both the classical optimal upper bounds and the tight lower bounds up to constant factors, i.e., robustness. More broadly, we introduce a framework that extends both (1) the online set cover problem augmented with machine-learning predictors, studied by Bamas, Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem, initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general online learning-augmented algorithms for covering linear programs with fractional advice and constraints, and initiate the study of learning-augmented algorithms for covering SDP problems. Our techniques are based on the primal-dual framework of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) and can be further adjusted to handle constraints where the variables lie in a bounded region, i.e., box constraints.
Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson Zhou. Wed, 21 Sep 2022 00:00:00 GMT.

Exploiting ideal-sparsity in the generalized moment problem with application to matrix factorization ranks
https://scholar.archive.org/work/m5gsvmt2ovf4zex6eyfssquczi
We explore a new type of sparsity for the generalized moment problem (GMP) that we call ideal-sparsity. This sparsity exploits the presence of equality constraints requiring the measure to be supported on the variety of an ideal generated by bilinear monomials modeled by an associated graph. We show that this enables an equivalent sparse reformulation of the GMP, where the single (high-dimensional) measure variable is replaced by several (lower-dimensional) measure variables supported on the maximal cliques of the graph. We explore the resulting hierarchies of moment-based relaxations for the original dense formulation of the GMP and this new, equivalent ideal-sparse reformulation, when applied to the problem of bounding nonnegative and completely positive matrix factorization ranks. We show that the ideal-sparse hierarchies provide bounds that are at least as good as (and often tighter than) those obtained from the dense hierarchy. This is in sharp contrast to the situation when exploiting correlative sparsity, as is most common in the literature, where the resulting bounds are weaker than the dense bounds. Moreover, while correlative sparsity requires the underlying graph to be chordal, no such assumption is needed for ideal-sparsity. Numerical results show that the ideal-sparse bounds are often tighter and much faster to compute than their dense analogs.
Milan Korda, Monique Laurent, Victor Magron, Andries Steenkamp. Tue, 20 Sep 2022 00:00:00 GMT.

Encoding inductive invariants as barrier certificates: synthesis via difference-of-convex programming
https://scholar.archive.org/work/chn524ah5verdmpy6ki35rbzxq
A barrier certificate often serves as an inductive invariant that isolates an unsafe region from the reachable set of states, and hence is widely used in proving safety of hybrid systems possibly over an infinite time horizon. We present a novel condition on barrier certificates, termed the invariant barrier-certificate condition, that witnesses unbounded-time safety of differential dynamical systems. The proposed condition is the weakest possible one to attain inductive invariance. We show that discharging the invariant barrier-certificate condition -- thereby synthesizing invariant barrier certificates -- can be encoded as solving an optimization problem subject to bilinear matrix inequalities (BMIs). We further propose a synthesis algorithm based on difference-of-convex programming, which approaches a local optimum of the BMI problem via solving a series of convex optimization problems. This algorithm is incorporated in a branch-and-bound framework that searches for the global optimum in a divide-and-conquer fashion. We present a weak completeness result of our method, namely, a barrier certificate is guaranteed to be found (under some mild assumptions) whenever there exists an inductive invariant (in the form of a given template) that suffices to certify safety of the system. Experimental results on benchmarks demonstrate the effectiveness and efficiency of our approach.
Qiuye Wang, Mingshuai Chen, Bai Xue, Naijun Zhan, Joost-Pieter Katoen. Tue, 20 Sep 2022 00:00:00 GMT.
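For intuition about what a barrier certificate certifies, here is a numeric spot-check on a toy system of my own (not one of the paper's benchmarks): for dynamics ẋ = f(x) and a candidate B, the key quantity is the Lie derivative ∇B·f, which measures how B changes along trajectories; if it is nonpositive, sublevel sets of B are invariant, so trajectories starting where B ≤ 0 never reach the unsafe region where B > 0. Sampling is only a sanity check, not the symbolic/BMI-based synthesis the paper performs.

```python
import random

def lie_derivative(grad_B, f, x):
    """d/dt of B(x(t)) along the flow: inner product of grad B with the field f."""
    g, v = grad_B(x), f(x)
    return sum(gi * vi for gi, vi in zip(g, v))

# Toy system: x1' = -x2 - x1, x2' = x1 - x2 (a stable spiral toward the origin).
f = lambda x: (-x[1] - x[0], x[0] - x[1])
# Candidate barrier B(x) = x1^2 + x2^2 - 1: the unsafe region is outside the unit disc.
grad_B = lambda x: (2.0 * x[0], 2.0 * x[1])

rng = random.Random(0)
samples = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(1000)]
worst = max(lie_derivative(grad_B, f, x) for x in samples)
```

Here the Lie derivative is -2(x1² + x2²), nonpositive everywhere, so the sampled worst case is negative; in general the inductive condition only needs to hold on the zero level set of B, which is what the paper's weakest-possible invariant barrier-certificate condition makes precise.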