IA Scholar Query: Deterministic Selection in O(log log N) Parallel Time
https://scholar.archive.org/
Internet Archive Scholar query results feed (Sat, 31 Dec 2022)

An Algorithmic Study of Fully Dynamic Independent Sets for Map Labeling
https://scholar.archive.org/work/by4kwstrpzgk3fvpxqnu3yoeiq
Map labeling is a classical problem in cartography and geographic information systems that asks to place labels for area, line, and point features, with the goal of selecting and placing the maximum number of independent (i.e., overlap-free) labels. A practically interesting case is point labeling with axis-parallel rectangular labels of common size. In a fully dynamic setting, at each timestep, either a new label appears or an existing label disappears. The challenge is then to maintain a maximum cardinality subset of pairwise independent labels with sublinear update time. Motivated by this, we study the maximal independent set (MIS) and maximum independent set (Max-IS) problems on fully dynamic (insertion/deletion model) sets of axis-parallel rectangles of two types: (i) uniform height and width and (ii) uniform height and arbitrary width; both settings can be modeled as rectangle intersection graphs. We present the first deterministic algorithm for maintaining an MIS (and thus a 4-approximate Max-IS) of a dynamic set of uniform rectangles with polylogarithmic update time. This breaks the natural barrier of \( \Omega(\Delta) \) update time (where \( \Delta \) is the maximum degree in the graph) for vertex updates presented by Assadi et al. (STOC 2018). We continue by investigating Max-IS and provide a series of deterministic dynamic approximation schemes. For uniform rectangles, we first give an algorithm that maintains a 4-approximate Max-IS with \( O(1) \) update time. In a subsequent algorithm, we establish the trade-off between approximation quality \( 2(1+\frac{1}{k}) \) and update time \( O(k^2\log n) \), for \( k\in \mathbb{N} \). We conclude with an algorithm that maintains a 2-approximate Max-IS for dynamic sets of unit-height and arbitrary-width rectangles with \( O(\log^2 n + \omega \log n) \) update time, where \( \omega \) is the maximum size of an independent set of rectangles stabbed by any horizontal line.
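As a point of reference for the MIS result above: a single static greedy pass over the rectangles already yields a maximal independent set (and hence, for congruent rectangles, a 4-approximate Max-IS, as the abstract notes); the paper's contribution is maintaining such a set under insertions and deletions in polylogarithmic time. A minimal static sketch (function names are ours, not the paper's):

```python
def intersects(a, b):
    """Open-overlap test for axis-parallel rectangles given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def greedy_mis(rects):
    """Scan the rectangles once, keeping each one that is independent of all
    rectangles kept so far; the result is a maximal independent set."""
    chosen = []
    for r in rects:
        if all(not intersects(r, c) for c in chosen):
            chosen.append(r)
    return chosen
```

For example, `greedy_mis([(0, 0, 2, 2), (1, 1, 3, 3), (4, 0, 6, 2)])` keeps the first and third rectangles and rejects the overlapping second one.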
We implement our algorithms and report the results of an experimental comparison exploring the trade-off between solution quality and update time for synthetic and real-world map labeling instances. We made several major observations in our empirical study. First, the approximations computed in practice are well above their respective worst-case ratios. Second, in comparison with the static approaches, the dynamic approaches show a significant speedup in practice. Third, the approximation algorithms show their predicted relative behavior: the better the solution quality, the worse the update times. Fourth, a simple greedy augmentation of the algorithms' approximate solutions boosts the solution sizes significantly in practice.
Sujoy Bhore, Guangping Li, Martin Nöllenburg. Sat, 31 Dec 2022.

New Pseudo-Random Key Generator for IoT-security Model Based on a Novel 3D Coupled Map Lattice
https://scholar.archive.org/work/oshl3zk3p5elldchlh7zuhdmia
In the field of cryptography, one concern is the limited capacity of computing resources, especially in environments that run IoT devices. This paper presents a lightweight and comprehensive security system involving a novel Pseudo-Random Number Generator (PRNG) based on a novel 3D Coupled Map Lattice system (3D-CML), a chaotic system well suited to cryptography, together with an encryption algorithm with two security levels: first, the plain image is permuted using a standard 2D Henon chaotic map; then it is encrypted with a One-Time Symmetric Key (OTSK). The bifurcation diagram of the 3D-CML shows that the ranges of the chaotic parameters are extended, which has a positive effect on the key space, whose size equals 2^373. Moreover, with fewer iterations configured, all random sequences produced by the proposed PRNG pass all statistical tests of the NIST suite; in turn, this configuration lowers the run time and hence the computational effort, meeting the requirements of resource-limited environments such as IoT. Several assessment metrics, such as Mutual Information, Gray Difference Degree, Histogram, Chi-square, Correlation Coefficient, and Entropy, confirm that the proposed algorithms are robust in dissolving the internal characteristics of the original image to produce an encrypted image, resulting in strong resistance against cyberattacks.
Mon, 31 Oct 2022.

Performance analysis of compressive sensing recovery algorithms for image processing using block processing
https://scholar.archive.org/work/ebsnncumbjdq7o3kmdgueqe7f4
The modern digital world involves transmitting media files such as images, audio, and video, which demands large memory storage, high data transmission rates, and many sensory devices. Compressive sensing (CS) is a sampling theory that compresses the signal at the time of acquiring it. Compressive sensing samples the signal efficiently below the Nyquist rate to minimize storage, and recovers the signal with a significantly reduced data rate and fewer sensors. The paper proceeds in three phases. The first phase describes various measurement matrices, such as the Gaussian matrix, circulant matrix, and special random matrices, which are the basic foundation of the compressive sensing technique and find application in fields such as wireless sensor networks (WSN), the internet of things (IoT), video processing, and biomedical applications. Finally, the paper analyses the performance of various reconstruction algorithms for compressive sensing, including basis pursuit (BP), compressive sampling matching pursuit (CoSaMP), iteratively reweighted least squares (IRLS), iterative hard thresholding (IHT), and block processing-based basis pursuit (BP-BP), based on mean square error (MSE) and peak signal to noise ratio (PSNR), and concludes with future work.
Mathiyalakendran Aarthi Elaveini, Deepa Thangavel. Sat, 01 Oct 2022.

On Finding Rank Regret Representatives
https://scholar.archive.org/work/r4npneviqrf5nludmsukxruju4
Selecting the best items in a dataset is a common task in data exploration. However, the concept of "best" lies in the eyes of the beholder: different users may consider different attributes more important and, hence, arrive at different rankings. Nevertheless, one can remove "dominated" items and create a "representative" subset of the data, comprising the "best items" in it. A Pareto-optimal representative is guaranteed to contain the best item of each possible ranking, but it can be a large portion of the data. A much smaller representative can be found if we relax the requirement of including the best item for each user and instead just limit the users' "regret." Existing work defines regret as the loss in score from limiting consideration to the representative instead of the full dataset, for any chosen ranking function. However, the score is often not a meaningful number, and users may not understand its absolute value. Sometimes small ranges in score can include large fractions of the dataset. In contrast, users do understand the notion of rank ordering. Therefore, we consider items' positions in the ranked list in defining the regret and propose the rank-regret representative as the minimal subset of the data containing at least one of the top-k of any possible ranking function. This problem is polynomial-time solvable in two-dimensional space but is NP-hard in three or more dimensions. We design a suite of algorithms to fulfill different purposes, such as whether relaxation is permitted on k, the result size, or both, whether a distribution is known, whether theoretical guarantees or practical efficiency is important, and so on. Experiments on real datasets demonstrate that we can efficiently find small subsets with small rank-regrets.
Abolfazl Asudeh, Gautam Das, H. V. Jagadish, Shangqi Lu, Azade Nazi, Yufei Tao, Nan Zhang, Jianwen Zhao. Fri, 30 Sep 2022.

Overcoming Exploration: Deep Reinforcement Learning in Complex Environments from Temporal Logic Specifications
https://scholar.archive.org/work/kkl42dnmqzbp7kefui42tds3oi
Exploration is a fundamental challenge in Deep Reinforcement Learning (DRL) based model-free navigation control, since typical exploration techniques for target-driven navigation tasks rely on noise or greedy policies, which are sensitive to the density of rewards. In practice, robots are often deployed in complex cluttered environments containing dense obstacles and narrow passageways, giving rise to naturally sparse rewards that are hard to explore during training. The problem becomes even more serious when pre-defined tasks are complex and have rich expressivity. In this paper, we focus on these two aspects and present a deep policy gradient algorithm for a task-guided robot with unknown dynamics deployed in a complex cluttered environment. Linear Temporal Logic (LTL) is applied to express rich robotic specifications. To overcome the exploration challenge during training, we propose a novel path planning-guided reward scheme that is dense over the state space and, crucially, robust to the infeasibility of computed geometric paths due to the black-box dynamics. To facilitate LTL satisfaction, our approach decomposes the LTL mission into sub-tasks that are solved using distributed DRL, where the sub-tasks can be trained in parallel using deep policy gradient algorithms. Our framework is shown to significantly improve the performance (effectiveness, efficiency) and exploration of robots tasked with complex missions in large-scale complex environments. A video demo can be found on YouTube: https://youtu.be/YQRQ2-yMtIk.
Mingyu Cai, Erfan Aasi, Calin Belta, Cristian-Ioan Vasile. Fri, 16 Sep 2022.

Missing Data Imputation and Acquisition with Deep Hierarchical Models and Hamiltonian Monte Carlo
https://scholar.archive.org/work/cnpovqj4grcvzehebhsz6ta2ba
Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data. However, within this specific application domain, existing VAE methods are restricted by using only one layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a Hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features. Finally, we also present a sampling-based approach for efficiently computing the information gain when missing features are to be acquired with HH-VAEM. Our experiments show that this sampling-based approach is superior to alternatives based on Gaussian approximations.
Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato. Fri, 16 Sep 2022.

Mandator and Sporades: Robust Wide-Area Consensus with Efficient Request Dissemination
https://scholar.archive.org/work/mujhf735hzghvdpusglr77npti
Consensus algorithms are deployed in the wide area to achieve high availability for geographically replicated applications. Wide-area consensus is challenging for two main reasons: (1) low throughput due to the high latency overhead of client request dissemination and (2) network asynchrony that causes consensus protocols to lose liveness. In this paper, we propose Mandator and Sporades, a modular state machine replication algorithm that enables high performance and resiliency in the wide-area setting. To address the high client request dissemination overhead, we propose Mandator, a novel consensus-agnostic asynchronous dissemination layer. Mandator separates client request dissemination from the critical path of consensus to obtain high performance. Composing Mandator with Multi-Paxos (Mandator-Paxos) delivers high throughput under synchronous networks. However, under asynchronous network conditions, Mandator-Paxos loses liveness, which results in high latency. To achieve low latency and robustness under asynchrony, we propose Sporades, a novel omission fault-tolerant consensus algorithm. Sporades consists of two modes of operation -- synchronous and asynchronous -- that always ensure liveness. The combination of Mandator and Sporades (Mandator-Sporades) provides a robust and high-performing state machine replication system. We implement and evaluate Mandator-Sporades in a wide-area deployment running on Amazon EC2. Our evaluation shows that in the synchronous execution, Mandator-Sporades achieves 300k tx/sec throughput with less than 900 ms latency, outperforming Multi-Paxos, EPaxos and Rabia by 650% in throughput, at a modest expense of latency.
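The core architectural idea above, keeping bulk request dissemination off the consensus critical path, can be caricatured in a few lines: consensus totally orders only small batch digests, while the dissemination layer stores (and, in the real system, replicates) the payloads. This is an illustrative toy, not the paper's protocol; all class and method names here are ours:

```python
import hashlib

class DisseminationLayer:
    """Toy stand-in for a Mandator-like layer: stores request batches and
    hands consensus a small digest instead of the full payload."""
    def __init__(self):
        self.store = {}

    def disseminate(self, batch):
        digest = hashlib.sha256("".join(batch).encode()).hexdigest()
        self.store[digest] = batch  # in the real system: replicated asynchronously
        return digest

class Consensus:
    """Toy total-order layer: sequences digests, never payloads."""
    def __init__(self):
        self.log = []

    def propose(self, digest):
        self.log.append(digest)
        return len(self.log) - 1  # slot assigned to this digest

layer, consensus = DisseminationLayer(), Consensus()
slot = consensus.propose(layer.disseminate(["put x=1", "put y=2"]))
ordered_batch = layer.store[consensus.log[slot]]  # replicas replay the payload
```

The point of the decoupling is that the expensive wide-area transfer of payloads happens outside `propose`, so consensus messages stay small.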
Furthermore, we show that Mandator-Sporades outperforms Mandator-Paxos, Multi-Paxos, and EPaxos in the face of targeted distributed denial-of-service attacks.
Pasindu Tennage, Antoine Desjardins, Eleftherios Kokoris Kogias. Fri, 16 Sep 2022.

(1+ε)-Approximate Shortest Paths in Dynamic Streams
https://scholar.archive.org/work/6b4zcdf6lnga3eh6esvhhknx5u
Computing approximate shortest paths in the dynamic streaming setting is a fundamental challenge that has been intensively studied. Currently existing solutions for this problem either build a sparse multiplicative spanner of the input graph and compute shortest paths in the spanner offline, or compute an exact single source BFS tree. Solutions of the first type are doomed to incur a stretch-space tradeoff of 2κ-1 versus n^{1+1/κ}, for an integer parameter κ. (In fact, existing solutions also incur an extra factor of 1+ε in the stretch for weighted graphs, and an additional factor of log^{O(1)}n in the space.) The only existing solution of the second type uses n^{1/2 - O(1/κ)} passes over the stream (for space O(n^{1+1/κ})), and applies only to unweighted graphs. In this paper we show that (1+ε)-approximate single-source shortest paths can be computed with Õ(n^{1+1/κ}) space using just constantly many passes in unweighted graphs, and polylogarithmically many passes in weighted graphs. Moreover, the same result applies for multi-source shortest paths, as long as the number of sources is O(n^{1/κ}). We achieve these results by devising efficient dynamic streaming constructions of (1 + ε, β)-spanners and hopsets. On our way to these results, we also devise a new dynamic streaming algorithm for the 1-sparse recovery problem. Even though our algorithm for this task is slightly inferior to the existing algorithms of [S. Ganguly, 2007; Graham Cormode and D. Firmani, 2013], we believe that it is of independent interest.
Michael Elkin, Chhaya Trehan, Amit Chakrabarti, Chaitanya Swamy. Thu, 15 Sep 2022.

The Fragility of Optimized Bandit Algorithms
https://scholar.archive.org/work/mpuqxymwezahxfllgl4fqnj46a
Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for p>1, the p-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense, namely when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way that regret becomes larger than expected is when the optimal arm returns below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
Lin Fan, Peter W. Glynn. Thu, 15 Sep 2022.

Systematically and efficiently improving existing k-means initialization algorithms by pairwise-nearest-neighbor smoothing
https://scholar.archive.org/work/oz3mzrlydrh25dptysfwbt2vdq
We present a meta-method for initializing (seeding) the k-means clustering algorithm called PNN-smoothing. It consists of splitting a given dataset into J random subsets, clustering each of them individually, and merging the resulting clusterings with the pairwise-nearest-neighbor (PNN) method. It is a meta-method in the sense that any seeding algorithm can be used when clustering the individual subsets. If the computational complexity of that seeding algorithm is linear in the size of the data N and the number of clusters k, PNN-smoothing is also almost linear with an appropriate choice of J, and quite competitive in practice. We show empirically, using several existing seeding methods and testing on several synthetic and real datasets, that this procedure results in systematically better costs. Our implementation is publicly available at https://github.com/carlobaldassi/KMeansPNNSmoothing.jl.
Carlo Baldassi. Thu, 15 Sep 2022.

Tower: Data Structures in Quantum Superposition
https://scholar.archive.org/work/i7asozqilnh25nqq3tr252tela
Emerging quantum algorithms for problems such as element distinctness, subset sum, and closest pair demonstrate computational advantages by relying on abstract data structures. Practically realizing such an algorithm as a program for a quantum computer requires an efficient implementation of the data structure whose operations correspond to unitary operators that manipulate quantum superpositions of data. To correctly operate in superposition, an implementation must satisfy three properties -- reversibility, history independence, and bounded-time execution. Standard implementations, such as the representation of an abstract set as a hash table, fail these properties, calling for tools to develop specialized implementations. In this work, we present Core Tower, the first language for quantum programming with random-access memory. Core Tower enables the developer to implement data structures as pointer-based, linked data. It features a reversible semantics enabling every valid program to be translated to a unitary quantum circuit. We present Boson, the first memory allocator that supports reversible, history-independent, and constant-time dynamic memory allocation in quantum superposition. We also present Tower, a language for quantum programming with recursively defined data structures. Tower features a type system that bounds all recursion using classical parameters as is necessary for a program to execute on a quantum computer. Using Tower, we implement Ground, the first quantum library of data structures, including lists, stacks, queues, strings, and sets. We provide the first executable implementation of sets that satisfies all three mandated properties of reversibility, history independence, and bounded-time execution.
Charles Yuan, Michael Carbin. Thu, 15 Sep 2022.

Discrepancy-Based Active Learning for Domain Adaptation
https://scholar.archive.org/work/zzntam4zqjg7jdqzprkapxcjhq
The goal of the paper is to design active learning strategies that lead to domain adaptation under an assumption of Lipschitz functions. Building on previous work by Mansour et al. (2009), we adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions which perform accurate labeling on the source domain. We derive generalization error bounds for such active learning strategies in terms of the Rademacher average and localized discrepancy for general loss functions that satisfy a regularity condition. A practical K-medoids algorithm that can address the case of large data sets is inferred from the theoretical bounds. Our numerical experiments show that the proposed algorithm is competitive against other state-of-the-art active learning techniques in the context of domain adaptation, in particular on large data sets of around one hundred thousand images.
Antoine de Mathelin, Francois Deheeger, Mathilde Mougeot, Nicolas Vayatis. Wed, 14 Sep 2022.

Modelling Human Emotion Dynamics from Social Media Footprints with Artificial Intelligence and Natural Language Processing
https://scholar.archive.org/work/srtm2mcjvvga3lrnq6sv7ljis4
A thesis submitted in total fulfilment of the requirements for the degree of Doctor of Philosophy to the Research Centre for Data Analytics and Cognition, La Trobe Business School, La Trobe University, Victoria, Australia.
Achini Adikari. Wed, 14 Sep 2022.

A Review and Roadmap of Deep Learning Causal Discovery in Different Variable Paradigms
https://scholar.archive.org/work/2wzo2fn5zncqncsi2h6cuo2ydq
Understanding causality helps to structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery tasks have transitioned from using traditional methods to infer potential causal structures from observational data to the field of pattern recognition involved in deep learning. The rapid accumulation of massive data promotes the emergence of causal search methods with excellent scalability. Existing surveys of causal discovery methods mainly focus on traditional methods based on constraints, scores, and FCMs; deep learning-based methods still lack a thorough organization and elaboration, as does the exploration of causal discovery from the perspective of variable paradigms. Therefore, we divide the possible causal discovery tasks into three types according to the variable paradigm and give the definition of each task, define and instantiate the relevant datasets and the final causal model constructed for each task, and then review the main existing causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.
Hang Chen, Keqing Du, Xinyu Yang, Chenguang Li. Wed, 14 Sep 2022.

High-Frequency Modelling of Rotating Electrical Machines
https://scholar.archive.org/work/c326t7tbvvdzvoobgwksq3ock4
The objective of the present dissertation is to study the high-frequency response of rotating electrical machines. The study of the physical phenomenon is initially tackled by performing a comprehensive analysis of different modelling methodologies and trends. The main core of the thesis focuses on developing and validating a deterministic modelling method that provides an estimation of the machine impedances in a high-frequency band (i.e., from tens of kHz to tens of MHz, depending on the objective). This prediction aims to enable an EMI-aware design of the rotating machine at early stages, when a physical prototype is not yet available. The developed modelling method is implemented and validated for two different PMSMs utilized in the automotive industry. These two machines are selected taking into account the INTERACT project boundaries and partners. The work presented in this dissertation gave rise to original contributions shared in peer-reviewed international conferences and journals. These research documents are already published or in the process of publication; further information about published material is provided in the section 'List of publications'. The most important contributions developed in this dissertation are:
• The development and validation of a high-frequency impedance model oriented towards early prediction for PMSMs
• The detailed experimental investigation of the effect of PMSM rotor and housing on the high-frequency impedance response
• The detailed experimental investigation of the rotor DC field effect on the high-frequency impedance response in PMSMs
• A thorough numerical investigation of the effect of the machine configuration, geometry and material characteristics on the high-frequency impedance response
Jose Enrique Ruiz Sarrio. Tue, 13 Sep 2022.

Fast Stabiliser Simulation with Quadratic Form Expansions
https://scholar.archive.org/work/biqrwt6s2zf3xome2a2l3diwoq
This paper builds on the idea of simulating stabiliser circuits through transformations of quadratic form expansions. This is a representation of a quantum state which specifies a formula for the expansion in the standard basis, describing real and imaginary relative phases using a degree-2 polynomial over the integers. We show how, with deft management of the quadratic form expansion representation, we may simulate individual stabiliser operations in O(n^2) time matching the overall complexity of other simulation techniques [arXiv:quant-ph/0406196, arXiv:quant-ph/0504117, arXiv:1808.00128]. Our techniques provide economies of scale in the time to simulate simultaneous measurements of all (or nearly all) qubits in the standard basis. Our techniques also allow single-qubit measurements with deterministic outcomes to be simulated in constant time. We also describe throughout how these bounds may be tightened when the expansion of the state in the standard basis has relatively few terms (has low 'rank'), or can be specified by sparse matrices. Specifically, this allows us to simulate a 'local' stabiliser syndrome measurement in time O(n), for a stabiliser code subject to Pauli noise – matching what is possible using techniques developed by Gidney [arXiv:2103.02202] without the need to store which operations have thus far been simulated.
Niel de Beaudrap, Steven Herbert. Tue, 13 Sep 2022.

Ask Before You Act: Generalising to Novel Environments by Asking Questions
https://scholar.archive.org/work/s6w6w6mwpbd23na6cyc7d6qvie
Solving temporally-extended tasks is a challenge for most reinforcement learning (RL) algorithms [arXiv:1906.07343]. We investigate the ability of an RL agent to learn to ask natural language questions as a tool to understand its environment and achieve greater generalisation performance in novel, temporally-extended environments. We do this by endowing this agent with the ability of asking "yes-no" questions to an all-knowing Oracle. This allows the agent to obtain guidance regarding the task at hand, while limiting the access to new information. To study the emergence of such natural language questions in the context of temporally-extended tasks we first train our agent in a Mini-Grid environment. We then transfer the trained agent to a different, harder environment. We observe a significant increase in generalisation performance compared to a baseline agent unable to ask questions. Through grounding its understanding of natural language in its environment, the agent can reason about the dynamics of its environment to the point that it can ask new, relevant questions when deployed in a novel environment.
Ross Murphy, Sergey Mosesov, Javier Leguina Peral, Thymo ter Doest. Tue, 13 Sep 2022.

A quantum parallel Markov chain Monte Carlo
https://scholar.archive.org/work/zuu66sjkjvcpld3ngslz5tf35i
We propose a novel quantum computing strategy for parallel MCMC algorithms that generate multiple proposals at each step. This strategy makes parallel MCMC amenable to quantum parallelization by using the Gumbel-max trick to turn the generalized accept-reject step into a discrete optimization problem. When combined with new insights from the parallel MCMC literature, such an approach allows us to embed target density evaluations within a well-known extension of Grover's quantum search algorithm. Letting P denote the number of proposals in a single MCMC iteration, the combined strategy reduces the number of target evaluations required from 𝒪(P) to 𝒪(P^1/2). In the following, we review the rudiments of quantum computing, quantum search and the Gumbel-max trick in order to elucidate their combination for as wide a readership as possible.
Andrew J. Holbrook. Tue, 13 Sep 2022.

On bounded depth proofs for Tseitin formulas on the grid; revisited
https://scholar.archive.org/work/ccbsdr6d7vditpzhilez6ehhnu
We study Frege proofs using depth-d Boolean formulas for the Tseitin contradiction on n × n grids. We prove that if each line in the proof is of size M then the number of lines is exponential in n/(log M)^O(d). This strengthens a recent result of Pitassi et al. [PRT22]. The key technical step is a multi-switching lemma extending the switching lemma of Håstad [Hås20] for a space of restrictions related to the Tseitin contradiction. The strengthened lemma also allows us to improve the lower bound for standard proof size of bounded depth Frege refutations from exponential in Ω̃(n^1/59d) to exponential in Ω̃(n^1/(2d-1)).
Johan Håstad, Kilian Risse. Tue, 13 Sep 2022.

Universal Online Convex Optimization with Minimax Optimal Second-Order Dynamic Regret
https://scholar.archive.org/work/xbpnupsukfccxla2vqx2zwn5cq
We introduce an online convex optimization algorithm which utilizes projected subgradient descent with optimal adaptive learning rates. Our method provides second-order minimax-optimal dynamic regret guarantee (i.e. dependent on the sum of squared subgradient norms) for a sequence of general convex functions, which may not have strong convexity, smoothness, exp-concavity or even Lipschitz-continuity. The regret guarantee is against any comparator decision sequence with bounded path variation (i.e. sum of the distances between successive decisions). We generate the lower bound of the worst-case second-order dynamic regret by incorporating actual subgradient norms. We show that this lower bound matches with our regret guarantee within a constant factor, which makes our algorithm minimax optimal. We also derive the extension for learning in each decision coordinate individually. We demonstrate how to best preserve our regret guarantee in a truly online manner, when the bound on path variation of the comparator sequence grows in time or the feedback regarding such bound arrives partially as time goes on. We further build on our algorithm to eliminate the need of any knowledge on the comparator path variation, and provide minimax optimal second-order regret guarantees with no a priori information. Our approach can compete against all comparator sequences simultaneously (universally) in a minimax optimal manner, i.e. each regret guarantee depends on the respective comparator path variation. We discuss modifications to our approach which address complexity reductions for time, computation and memory. We further improve our results by making the regret guarantees also dependent on comparator sets' diameters in addition to the respective path variations.
Hakan Gokcesu, Suleyman S. Kozat. Tue, 13 Sep 2022.
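The primitive named in this abstract, projected subgradient descent with a learning rate adapted to the accumulated squared subgradient norms, can be sketched generically as follows. This illustrates the flavor of adaptivity behind second-order (subgradient-norm-dependent) regret bounds only; the paper's actual learning rates and guarantees are more refined, and all function names here are ours:

```python
import numpy as np

def projected_ogd(grads, project, diameter=1.0):
    """Projected online subgradient descent. The step size at each round is
    proportional to 1/sqrt(sum of squared subgradient norms seen so far),
    which is what makes the resulting regret bound 'second-order'.
    `grads` is the sequence of subgradients observed at the played iterates;
    `project` maps a point back onto the feasible set."""
    x = np.zeros_like(grads[0], dtype=float)
    sq_sum = 0.0
    iterates = []
    for g in grads:
        iterates.append(x.copy())          # play the current decision
        sq_sum += float(np.dot(g, g))      # accumulate squared subgradient norm
        eta = diameter / np.sqrt(sq_sum) if sq_sum > 0 else 0.0
        x = project(x - eta * g)           # adaptive projected subgradient step
    return iterates

# Example feasible set: the unit Euclidean ball.
unit_ball = lambda z: z / max(1.0, float(np.linalg.norm(z)))
```

With linear losses, feeding the observed gradient sequence to `projected_ogd` with `unit_ball` keeps every played decision inside the feasible set while the step size shrinks as gradient mass accumulates.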