IA Scholar Query: An Approximation Algorithm for MINIMUM CONVEX COVER with Logarithmic Performance Guarantee.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en)
info@archive.org
Sat, 31 Dec 2022
Generator: fatcat-scholar (https://scholar.archive.org/help)

A Survey on Concept Drift in Process Mining
https://scholar.archive.org/work/hvmkupdorzf5df4tts42gzykjm
Concept drift in process mining (PM) is a challenge, as classical methods assume processes are in a steady state, i.e., that events share the same process version. We conducted a systematic literature review on the intersection of these areas, and thus we review concept drift in PM and bring forward a taxonomy of existing techniques for drift detection and online PM for evolving environments. Existing works show that (i) PM still primarily focuses on offline analysis, and (ii) the assessment of concept drift techniques in processes is cumbersome due to the lack of a common evaluation protocol, datasets, and metrics.
Authors: Denise Maria Vecino Sato, Sheila Cristiana De Freitas, Jean Paul Barddal, Edson Emilio Scalabrin
Date: Sat, 31 Dec 2022

Dealing with Unknown Variances in Best-Arm Identification
https://scholar.archive.org/work/fnrzt5e2lvaflon3szbx7srn5y
The problem of identifying the best arm among a collection of items with Gaussian reward distributions is well understood when the variances are known. Despite its practical relevance for many applications, few works have studied it for unknown variances. In this paper we introduce and analyze two approaches to deal with unknown variances, either by plugging in the empirical variance or by adapting the transportation costs. In order to calibrate our two stopping rules, we derive new time-uniform concentration inequalities, which are of independent interest. Then, we illustrate the theoretical and empirical performance of our two sampling-rule wrappers on Track-and-Stop and on a Top Two algorithm. Moreover, by quantifying the impact on the sample complexity of not knowing the variances, we reveal that it is rather small.
Authors: Marc Jourdan, Rémy Degenne, Emilie Kaufmann
Date: Mon, 03 Oct 2022

On the minimax rate of the Gaussian sequence model under bounded convex constraints
https://scholar.archive.org/work/wg2nlyfyhbdpnhmt3vh27hutyu
We determine the exact minimax rate of a Gaussian sequence model under bounded convex constraints, purely in terms of the local geometry of the given constraint set K. Our main result shows that the minimax risk (up to constant factors) under the squared ℓ_2 loss is given by ϵ^{*2} ∧ diam(K)^2 with ϵ^* = sup{ϵ : ϵ^2/σ^2 ≤ log M^loc(ϵ)}, where log M^loc(ϵ) denotes the local entropy of the set K and σ^2 is the variance of the noise. We utilize our abstract result to re-derive known minimax rates for some special sets K, such as hyperrectangles, ellipses, and more generally quadratically convex orthosymmetric sets. Finally, we extend our results to the unbounded case with known σ^2 to show that the minimax rate in that case is ϵ^{*2}.
Authors: Matey Neykov
Date: Mon, 03 Oct 2022

Dimension Reduction in Contextual Online Learning via Nonparametric Variable Selection
https://scholar.archive.org/work/t3cbqs7eyzgedjxqndtx3f7dsy
We consider a contextual online learning (multi-armed bandit) problem with a high-dimensional covariate 𝐱 and decision 𝐲. The reward function to learn, f(𝐱,𝐲), does not have a particular parametric form. The literature has shown that the optimal regret is Õ(T^{(d_x+d_y+1)/(d_x+d_y+2)}), where d_x and d_y are the dimensions of 𝐱 and 𝐲, and thus it suffers from the curse of dimensionality. In many applications, only a small subset of variables in the covariate affect the value of f, which is referred to as sparsity in statistics. To take advantage of the sparsity structure of the covariate, we propose a variable selection algorithm called BV-LASSO, which incorporates novel ideas such as binning and voting to apply LASSO in nonparametric settings. Our algorithm achieves regret Õ(T^{(d_x^*+d_y+1)/(d_x^*+d_y+2)}), where d_x^* is the effective covariate dimension. The regret matches the optimal regret when the covariate is d_x^*-dimensional and thus cannot be improved. Our algorithm may serve as a general recipe for achieving dimension reduction via variable selection in nonparametric settings.
Authors: Wenhao Li, Ningyuan Chen, L. Jeff Hong
Date: Mon, 03 Oct 2022

Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
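The binning-and-voting idea behind BV-LASSO in the entry above can be sketched as follows. This is a toy illustration, not the authors' algorithm: the per-bin LASSO is replaced here by a simple correlation screen, and the function names, threshold, and voting fraction are all illustrative assumptions.

```python
import random

def select_in_bin(rows, n_vars, threshold=0.3):
    """Stand-in for the per-bin LASSO: flag variables whose absolute
    sample correlation with the reward exceeds a threshold."""
    selected = set()
    ys = [y for _, y in rows]
    my = sum(ys) / len(ys)
    vy = sum((b - my) ** 2 for b in ys)
    for j in range(n_vars):
        xs = [x[j] for x, _ in rows]
        mx = sum(xs) / len(xs)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        if vx > 0 and vy > 0 and abs(cov) / (vx * vy) ** 0.5 > threshold:
            selected.add(j)
    return selected

def binning_and_voting(data, n_vars, n_bins=5, vote_frac=0.5):
    """Split the data into bins, run the selector per bin, and keep
    variables selected in at least a vote_frac fraction of bins."""
    random.shuffle(data)
    size = len(data) // n_bins
    votes = {j: 0 for j in range(n_vars)}
    for b in range(n_bins):
        for j in select_in_bin(data[b * size:(b + 1) * size], n_vars):
            votes[j] += 1
    return {j for j, v in votes.items() if v >= vote_frac * n_bins}

# Synthetic check: the reward depends only on variables 0 and 1 of 10.
random.seed(0)
data = []
for _ in range(500):
    x = [random.gauss(0, 1) for _ in range(10)]
    y = 2 * x[0] - 1.5 * x[1] + random.gauss(0, 0.1)
    data.append((x, y))
selected = binning_and_voting(data, 10)
```

Voting across bins makes the selection robust to a single unlucky bin, which is the point of the construction.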
https://scholar.archive.org/work/j6lwyb7mcbf5rfi45frto54mw4
Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange aims to achieve. Next-generation systems, however, can potentially be enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchange takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, namely semantic-aware and task-oriented communications, and covering the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantics- and task-aware communications.
Authors: Deniz Gunduz, Zhijin Qin, Inaki Estella Aguerri, Harpreet S. Dhillon, Zhaohui Yang, Aylin Yener, Kai Kit Wong, Chan-Byoung Chae
Date: Mon, 03 Oct 2022

Convergence of score-based generative modeling for general data distributions
https://scholar.archive.org/work/4dscc3ehkvfqribm3r4lbud4iq
Score-based generative modeling (SGM) has grown into a hugely successful method for learning to generate samples from complex data distributions, such as those of images and audio. It is based on evolving an SDE that transforms white noise into a sample from the learned distribution, using estimates of the score function, i.e., the gradient of the log-pdf. Previous convergence analyses for these methods have suffered either from strong assumptions on the data distribution or from exponential dependencies, and hence fail to give efficient guarantees for the multimodal and non-smooth distributions that arise in practice and for which good empirical performance is observed. We consider a popular kind of SGM, denoising diffusion models, and give polynomial convergence guarantees for general data distributions, with no assumptions related to functional inequalities or smoothness. Assuming L^2-accurate score estimates, we obtain Wasserstein distance guarantees for any distribution of bounded support or sufficiently decaying tails, as well as TV guarantees for distributions with further smoothness assumptions.
Authors: Holden Lee, Jianfeng Lu, Yixin Tan
Date: Mon, 03 Oct 2022

Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling
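The reverse-SDE sampling scheme analyzed in the SGM entry above can be illustrated on a toy case where the score is known in closed form. Everything below is an assumption for illustration: an Ornstein-Uhlenbeck forward process, N(0, 4) data (so the score is analytic), and Euler-Maruyama discretization; real SGMs use a learned score network in place of `score`.

```python
import math, random

# Forward noising: Ornstein-Uhlenbeck dX = -X dt + sqrt(2) dW, so data
# drawn from N(0, S0^2) has marginal X_t ~ N(0, v(t)) with
# v(t) = S0^2 * exp(-2t) + (1 - exp(-2t)), and exact score -x / v(t).
S0 = 2.0
T, STEPS = 3.0, 100
DT = T / STEPS

def var_t(t):
    e = math.exp(-2.0 * t)
    return S0 * S0 * e + (1.0 - e)

def score(x, t):
    return -x / var_t(t)

def reverse_sample(rng):
    """Euler-Maruyama on the reverse-time SDE
    dY = (Y + 2 * score(Y, T - s)) ds + sqrt(2) dW."""
    y = rng.gauss(0.0, 1.0)          # start from (approximately) the prior
    for k in range(STEPS):
        t = T - k * DT               # forward time being reversed
        drift = y + 2.0 * score(y, t)
        y += drift * DT + math.sqrt(2.0 * DT) * rng.gauss(0.0, 1.0)
    return y

rng = random.Random(0)
samples = [reverse_sample(rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

With the exact score, the reverse chain transports the N(0, 1)-ish prior back to the N(0, 4) data law, up to discretization error.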
https://scholar.archive.org/work/laqfzeola5gevj4f5c4addqwtq
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. We establish its optimal minimax mixing time under a warm start. Our main contribution is two-fold. First, for a d-dimensional log-concave density with condition number κ, we show that MALA with a warm start mixes in Õ(κ√(d)) iterations up to logarithmic factors. This improves upon the previous work on the dependency on either the condition number κ or the dimension d. Our proof relies on comparing the leapfrog integrator with the continuous Hamiltonian dynamics, where we establish a new concentration bound for the acceptance rate. Second, we prove a spectral-gap-based mixing time lower bound for reversible MCMC algorithms on general state spaces. We apply this lower bound result to construct a hard distribution for which MALA requires at least Ω̃(κ√(d)) steps to mix. The lower bound for MALA matches our upper bound in terms of condition number and dimension. Finally, numerical experiments are included to validate our theoretical results.
Authors: Keru Wu, Scott Schmidler, Yuansi Chen
Date: Sun, 02 Oct 2022

Multiple Access Channel in Massive Multi-User MIMO Using Group Testing
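A minimal sketch of the MALA iteration studied in the Wu, Schmidler and Chen entry above, on a toy one-dimensional target. The standard-Gaussian target, step size h, and chain length are illustrative choices, not taken from the paper.

```python
import math, random

def log_pi(x):
    """Log-density of the toy target: a standard Gaussian (illustrative)."""
    return -0.5 * x * x

def grad_log_pi(x):
    return -x

def mala_step(x, h, rng):
    """One Metropolis-adjusted Langevin step: a Langevin proposal,
    then a Metropolis-Hastings accept/reject correction."""
    prop = x + h * grad_log_pi(x) + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)

    def log_q(a, b):
        # log density (up to constants) of proposing b starting from a
        return -((b - a - h * grad_log_pi(a)) ** 2) / (4.0 * h)

    log_alpha = (log_pi(prop) + log_q(prop, x)) - (log_pi(x) + log_q(x, prop))
    if rng.random() < math.exp(min(0.0, log_alpha)):
        return prop, True
    return x, False

rng = random.Random(0)
x, accepts, chain = 3.0, 0, []
for _ in range(5000):
    x, ok = mala_step(x, 0.5, rng)
    accepts += ok
    chain.append(x)
mean = sum(chain) / len(chain)
```

The accept/reject step is what distinguishes MALA from the unadjusted Langevin algorithm: it makes the chain exactly reversible with respect to the target.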
https://scholar.archive.org/work/4y7asf55k5cjfjl5a6s25fjnqe
The number of wireless devices (e.g., cellular phones, IoT devices, laptops) connected to Wireless Local Area Networks (WLANs) grows exponentially each year. Orchestrating the connected devices becomes infeasible, especially when the number of resources available at a single access point (e.g., a base station or wireless access point) is limited. On the other hand, the number of antennas at each device grows too. We leverage the large number of antennas to suggest a massive multiple-user multiple-input multiple-output (MU-MIMO) scheme using sparse coding based on Group Testing (GT) principles, which reduces overhead and complexity. We show that it is possible to jointly identify and decode up to K messages simultaneously out of N·C messages (where N is the number of users and C is the number of messages per user) without any scheduling overhead or prior knowledge of the identity of the transmitting devices. Our scheme is order-optimal in the number of users and messages, utilizing minimal knowledge of the channel state and an efficient (in both run time and space) decoding algorithm requiring O(K log NC) antennas. We derive sufficient conditions for vanishing error probability and bound the minimal number of antennas necessary for our scheme.
Authors: George Vershinin, Asaf Cohen, Omer Gurewitz
Date: Sun, 02 Oct 2022

Contextual Bandits with Knapsacks for a Conversion Model
https://scholar.archive.org/work/uaglzo75wfdnpdjvkwp24b4u4y
We consider contextual bandits with knapsacks, with an underlying structure between the rewards generated and the cost vectors suffered. We do so motivated by sales with commercial discounts. At each round, given the stochastic i.i.d. context 𝐱_t and the arm picked a_t (corresponding, e.g., to a discount level), a customer conversion may be obtained, in which case a reward r(a_t,𝐱_t) is gained and vector costs c(a_t,𝐱_t) are suffered (corresponding, e.g., to losses of earnings). Otherwise, in the absence of a conversion, the reward and costs are null. The reward and costs achieved are thus coupled through the binary variable measuring conversion or the absence thereof. This underlying structure between rewards and costs is different from the linear structures considered by Agrawal and Devanur [2016] (but we show that the techniques introduced in the present article may also be applied to the case of these linear structures). The adaptive policies exhibited solve at each round a linear program based on upper-confidence estimates of the probabilities of conversion given a and 𝐱. This kind of policy is most natural and achieves a regret bound of the typical order (OPT/B) √(T), where B is the total budget allowed, OPT is the optimal expected reward achievable by a static policy, and T is the number of rounds.
Authors: Zhen Li, Gilles Stoltz
Date: Fri, 30 Sep 2022

EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression
https://scholar.archive.org/work/pupoimkfebbbffmldskpp4dy4y
The starting point of this paper is the discovery of a novel and simple error-feedback mechanism, which we call EF21-P, for dealing with the error introduced by a contractive compressor. Unlike all prior works on error feedback, where compression and correction operate in the dual space of gradients, our mechanism operates in the primal space of models. While we believe that EF21-P may be of interest in many situations where it is often advantageous to perform model perturbation prior to the computation of the gradient (e.g., randomized smoothing and generalization), in this work we focus our attention on its use as a key building block in the design of communication-efficient distributed optimization methods supporting bidirectional compression. In particular, we employ EF21-P as the mechanism for compressing and subsequently error-correcting the model broadcast by the server to the workers. By combining EF21-P with suitable methods performing worker-to-server compression, we obtain novel methods supporting bidirectional compression and enjoying new state-of-the-art theoretical communication complexity for convex and nonconvex problems. For example, our bounds are the first that manage to decouple the variance/error coming from the workers-to-server and server-to-workers compression, transforming a multiplicative dependence to an additive one. In the convex regime, we obtain the first bounds that match the theoretical communication complexity of gradient descent. Even in this convex regime, our algorithms work with biased gradient estimators, which is non-standard and requires new proof techniques that may be of independent interest. Finally, our theoretical results are corroborated through suitable experiments.
Authors: Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
Date: Fri, 30 Sep 2022

Growth Optimization in Stochastic Portfolio Theory with Applications to Robust Finance and Open Markets
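The primal-space error-feedback idea of EF21-P from the entry above can be sketched with a top-k contractive compressor: the server repeatedly broadcasts only the compressed difference between its model and the workers' running estimate. The fixed model, dimension, and value of k below are toy assumptions used to show the error-correction mechanism in isolation.

```python
def top_k(v, k):
    """Contractive compressor: keep the k largest-magnitude coordinates."""
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    out = [0.0] * len(v)
    for i in idx:
        out[i] = v[i]
    return out

def ef21p_broadcast(x, w, k):
    """EF21-P-style update of the workers' model estimate w: the server
    sends only the compressed difference C(x - w), which the workers add
    to their current estimate."""
    delta = top_k([xi - wi for xi, wi in zip(x, w)], k)
    return [wi + di for wi, di in zip(w, delta)]

# The server's model x is held fixed here; the workers' estimate w
# converges to x as the compressed corrections accumulate.
x = [5.0, -3.0, 2.0, 0.5, -1.0, 4.0]
w = [0.0] * 6
errors = []
for _ in range(6):
    w = ef21p_broadcast(x, w, k=2)
    errors.append(max(abs(a - b) for a, b in zip(x, w)))
```

Because the compressor is applied to the residual x - w rather than to x itself, the broadcast error shrinks round after round instead of accumulating.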
https://scholar.archive.org/work/nfgwybpxznb7hmyaoscctvtvry
Stochastic portfolio theory (SPT) is a financial framework with a large number d of stocks and the goal of modelling equity markets over long time horizons. This thesis concerns the study of growth optimization problems in the context of SPT in robust and constrained settings.

In Part I of the thesis we consider the problem of maximizing the asymptotic growth rate of an investor under drift uncertainty. As in the work of Kardaras and Robertson [28], we take as inputs (i) a Markovian volatility matrix c(x) and (ii) an invariant density p(x) for the market weights, but we additionally impose long-only constraints on the investor. Our principal contribution is proving a uniqueness and existence result for the class of concave functionally generated portfolios and developing a finite-dimensional approximation, which can be used to numerically find the optimum. In addition to the general results outlined above, we propose the use of a broad class of models for the volatility matrix c(x), which can be calibrated to data and under which we obtain explicit formulas for the optimal unconstrained portfolio for any invariant density.

In Part II we propose a unified approach to several problems in SPT. Our approach combines open markets, where trading is constrained to the top N capitalized stocks as well as the market portfolio consisting of all d assets, with a parametric family of models which we call hybrid Jacobi processes. We provide a detailed analysis of ergodicity, particle collisions, and boundary attainment, and use these results to study the associated financial markets. Their properties include (1) stability of the capital distribution curve and (2) unleveraged and explicit growth optimal strategies. The sub-class of rank Jacobi models are additionally shown to (3) serve as the worst-case model for a robust asymptotic growth problem under model ambiguity and (4) exhibit stability in the large-d limit.

Our definition of an open market is a relaxation of existing definitions, which is essential to make the analysis [...]
Authors: David Itkin
Date: Fri, 30 Sep 2022

Flexible risk design using bi-directional dispersion
https://scholar.archive.org/work/3tfmykzjwvb3vpzs33u4ehhewi
Many novel notions of "risk" (e.g., CVaR, tilted risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. We study a complementary new risk class that penalizes loss deviations in a bi-directional manner, while having more flexibility in terms of tail sensitivity than is offered by mean-variance. This class lets us derive high-probability learning guarantees without explicit gradient clipping, and empirical tests using both simulated and real data illustrate a high degree of control over key properties of the test loss distribution incurred by gradient-based learners.
Authors: Matthew J. Holland
Date: Fri, 30 Sep 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
https://scholar.archive.org/work/chpq7q7jp5bcjk6uqpzaik7qyq
We study the problem of training a two-layer neural network (NN) of arbitrary width using stochastic gradient descent (SGD), where the input x ∈ ℝ^d is Gaussian and the target y ∈ ℝ follows a multiple-index model, i.e., y = g(⟨u_1,x⟩,...,⟨u_k,x⟩) with a noisy link function g. We prove that the first-layer weights of the NN converge to the k-dimensional principal subspace spanned by the vectors u_1,...,u_k of the true model, when online SGD with weight decay is used for training. This phenomenon has several important consequences when k ≪ d. First, by employing uniform convergence on this smaller subspace, we establish a generalization error bound of 𝒪(√(kd/T)) after T iterations of SGD, which is independent of the width of the NN. We further demonstrate that SGD-trained ReLU NNs can learn a single-index target of the form y = f(⟨u,x⟩) + ϵ by recovering the principal direction, with a sample complexity linear in d (up to log factors), where f is a monotonic function with at most polynomial growth and ϵ is the noise. This is in contrast to the known d^Ω(p) sample requirement for learning any degree-p polynomial in the kernel regime, and it shows that NNs trained with SGD can outperform the neural tangent kernel at initialization. Finally, we also provide compressibility guarantees for NNs using the approximate low-rank structure produced by SGD.
Authors: Alireza Mousavi-Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
Date: Thu, 29 Sep 2022

On Quantum Speedups for Nonconvex Optimization via Quantum Tunneling Walks
https://scholar.archive.org/work/cpzrkwavdvckvmpkoq46drldom
Classical algorithms are often not effective for solving nonconvex optimization problems where local minima are separated by high barriers. In this paper, we explore possible quantum speedups for nonconvex optimization by leveraging the global effect of quantum tunneling. Specifically, we introduce a quantum algorithm termed the quantum tunneling walk (QTW) and apply it to nonconvex problems where local minima are approximately global minima. We show that QTW achieves quantum speedup over classical stochastic gradient descent (SGD) when the barriers between different local minima are high but thin and the minima are flat. Based on this observation, we construct a specific double-well landscape where classical algorithms cannot efficiently hit one target well even when the other well is known, but QTW can when given proper initial states near the known well. Finally, we corroborate our findings with numerical experiments.
Authors: Yizhou Liu, Weijie J. Su, Tongyang Li
Date: Thu, 29 Sep 2022

Spatial regionalization based on optimal information compression
https://scholar.archive.org/work/yfadduk7sne7de47pfsskaatiq
Regionalization, i.e., spatially contiguous clustering, provides a means to reduce the effect of noise in sampled data and to identify homogeneous areas for policy development, among many other applications. Existing regionalization methods require user input, such as the number of regions or a similarity measure between regions, which does not allow for the extraction of the natural regions defined solely by the data itself. Here we view the problem of regionalization as one of data compression and develop an efficient, parameter-free regionalization algorithm based on the minimum description length principle. We demonstrate that our method is capable of recovering planted spatial clusters in noisy synthetic data, and that it can meaningfully coarse-grain real demographic data. Using our description length formulation, we find that spatial ethnoracial data in U.S. metropolitan areas has become less compressible over the period from 1980 to 2010, reflecting the rising complexity of urban segregation patterns in these metros.
Authors: Alec Kirkley
Date: Thu, 29 Sep 2022

Optimal transport methods for combinatorial optimization over two random point sets
https://scholar.archive.org/work/lzgi6js5gvfatmtszdhxg2myre
We investigate the minimum cost of a wide class of combinatorial optimization problems over random bipartite geometric graphs in ℝ^d, where the edge cost between two points is given by the p-th power of their Euclidean distance. This includes, e.g., the travelling salesperson problem and the bounded-degree minimum spanning tree. We establish, in particular, almost sure convergence, as n grows, of a suitable renormalization of the random minimum cost when the points are uniformly distributed, d ≥ 3, and 1 ≤ p < d. Previous results were limited to the range p < d/2. Our proofs are based on subadditivity methods and build upon new bounds for random instances of the Euclidean bipartite matching problem, obtained through its optimal transport relaxation and functional analytic techniques.
Authors: Michael Goldman, Dario Trevisan
Date: Thu, 29 Sep 2022

Online Subset Selection using α-Core with no Augmented Regret
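The Euclidean bipartite matching functional studied in the entry above (edge cost equal to the p-th power of the distance) can be computed by brute force for tiny instances. This only illustrates the random quantity being renormalized, not the paper's subadditivity or optimal transport machinery; the point counts and dimension are toy choices.

```python
import itertools, math, random

def pth_power_cost(a, b, p):
    """Edge cost between two points: p-th power of Euclidean distance."""
    return math.dist(a, b) ** p

def min_matching_cost(red, blue, p):
    """Brute-force minimum-cost perfect matching between two equal-size
    point sets (only sensible for small n: it scans all n! permutations)."""
    n = len(red)
    best = math.inf
    for perm in itertools.permutations(range(n)):
        cost = sum(pth_power_cost(red[i], blue[perm[i]], p) for i in range(n))
        best = min(best, cost)
    return best

# Two random point sets of size 6 in the unit cube of R^3 (d = 3, p = 1).
rng = random.Random(42)
red = [(rng.random(), rng.random(), rng.random()) for _ in range(6)]
blue = [(rng.random(), rng.random(), rng.random()) for _ in range(6)]
opt = min_matching_cost(red, blue, p=1)
identity = sum(pth_power_cost(r, b, 1) for r, b in zip(red, blue))
```

For realistic n one would use the Hungarian algorithm or the optimal transport relaxation instead of enumeration.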
https://scholar.archive.org/work/w5rh7qa3i5d2xho6rgsodrfcgi
We consider the problem of sequential sparse subset selections in an online learning setup. Assume that the set [N] consists of N distinct elements. On the t^th round, a monotone reward function f_t: 2^[N] → ℝ_+, which assigns a non-negative reward to each subset of [N], is revealed to a learner. The learner selects (perhaps randomly) a subset S_t ⊆ [N] of k elements before the reward function f_t for that round is revealed (k ≤ N). As a consequence of its choice, the learner receives a reward of f_t(S_t) on the t^th round. The learner's goal is to design an online subset selection policy to maximize its expected cumulative reward accrued over a given time horizon. In this connection, we propose an online learning policy called SCore (Subset Selection with Core) that solves the problem for a large class of reward functions. The proposed SCore policy is based on a new concept of α-Core, which is a generalization of the notion of Core from the cooperative game theory literature. We establish a learning guarantee for the SCore policy in terms of a new performance metric called α-augmented regret. In this new metric, the power of the offline benchmark is suitably augmented compared to the online policy. We give several illustrative examples to show that a broad class of reward functions, including submodular, can be efficiently learned with the SCore policy. We also outline how the SCore policy can be used under a semi-bandit feedback model and conclude the paper with a number of open problems.
Authors: Sourav Sahoo, Samrat Mukhopadhyay, Abhishek Sinha
Date: Thu, 29 Sep 2022

Nonconvex Matrix Factorization is Geodesically Convex: Global Landscape Analysis for Fixed-rank Matrix Optimization From a Riemannian Perspective
https://scholar.archive.org/work/m2tum4ngx5fv3ob2fa4laxzblq
We study a general matrix optimization problem with a fixed-rank positive semidefinite (PSD) constraint. We perform the Burer-Monteiro factorization and consider a particular Riemannian quotient geometry in a search space that has a total space equipped with the Euclidean metric. When the original objective f satisfies standard restricted strong convexity and smoothness properties, we characterize the global landscape of the factorized objective under the Riemannian quotient geometry. We show the entire search space can be divided into three regions: (R1) the region near the target parameter of interest, where the factorized objective is geodesically strongly convex and smooth; (R2) the region containing neighborhoods of all strict saddle points; (R3) the remaining regions, where the factorized objective has a large gradient. To the best of our knowledge, this is the first global landscape analysis of the Burer-Monteiro factorized objective under the Riemannian quotient geometry. Our results provide a fully geometric explanation for the superior performance of vanilla gradient descent under the Burer-Monteiro factorization. When f satisfies a weaker restricted strict convexity property, we show there exists a neighborhood near local minimizers such that the factorized objective is geodesically convex. To prove our results, we provide a comprehensive landscape analysis of a matrix factorization problem with a least squares objective, which serves as a critical bridge. Our conclusions are also based on a result of independent interest stating that the geodesic ball centered at Y with a radius 1/3 of the least singular value of Y is a geodesically convex set under the Riemannian quotient geometry, which, as a corollary, also implies a quantitative bound on the convexity radius in the Bures-Wasserstein space. The convexity radius obtained is sharp up to constants.
Authors: Yuetian Luo, Nicolas Garcia Trillos
Date: Thu, 29 Sep 2022

On the Risk of Cancelable Biometrics
https://scholar.archive.org/work/fy6e35zqonehxhimkd7fmevsbq
Cancelable biometrics (CB) employs an irreversible transformation to convert biometric features into transformed templates while preserving the relative distance between two templates, for security and privacy protection. However, distance preservation invites unexpected security issues such as pre-image attacks, which are often neglected. This paper presents a generalized pre-image attack method and an extended version that operates on practical CB systems. We theoretically reveal that the distance preservation property is a source of vulnerability in CB schemes. We then propose an empirical information leakage estimation algorithm to assess the pre-image attack risk of CB schemes. Experiments conducted with six CB schemes designed for face, iris, and fingerprint demonstrate that the risks originating from the distance computed between two transformed templates significantly compromise the security of CB schemes. Our work reveals the potential risk of existing CB systems both theoretically and experimentally.
Authors: Xingbo Dong, Jaewoo Park, Zhe Jin, Andrew Beng Jin Teoh, Massimo Tistarelli, KokSheik Wong
Date: Thu, 29 Sep 2022

Online Facility Location with Linear Delay
https://scholar.archive.org/work/6t2sbmkn75er7otiteqzvduy3m
We study the problem of online facility location with delay. In this problem, a sequence of n clients appears in the metric space, and each client needs to be eventually connected to some open facility. The clients do not have to be connected immediately, but such a choice comes with a penalty: each client incurs a waiting cost (the difference between its arrival and connection times). At any point in time, an algorithm may decide to open a facility and connect any subset of clients to it. This is a well-studied problem both in its own right and within the general class of network design problems with delays. Our main focus is on a new variant of this problem, where clients may also be connected to an already open facility, but such an action incurs an extra cost: the algorithm pays for the waiting of the facility (a cost incurred separately for each such "late" connection). This is reminiscent of online matching with delays, where both sides of the connection incur a waiting cost. We call this variant two-sided delay to differentiate it from the previously studied one-sided delay. We present an O(1)-competitive deterministic algorithm for the two-sided delay variant. On the technical side, we study a greedy strategy, which grows budgets with increasing waiting delays and opens facilities for subsets of clients once the sums of these budgets reach certain thresholds. Our technique is a substantial extension of the approach used by Jain, Mahdian and Saberi [STOC 2002] for analyzing the performance of offline algorithms for facility location. We then show how to transform our O(1)-competitive algorithm for the two-sided delay variant into an O(log n / log log n)-competitive deterministic algorithm for one-sided delays. We note that all previous online algorithms for problems with delays in general metrics have at least logarithmic competitive ratios.
Authors: Marcin Bienkowski, Martin Böhm, Jarosław Byrka, Jan Marcinkowski
Date: Wed, 28 Sep 2022
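The budget-growing greedy idea described in the entry above can be caricatured in a few lines. This schematic (discrete time, a line metric, a unit grouping radius, and the specific arrival list are all assumptions) only illustrates the threshold mechanism: waiting clients grow budgets, and a facility opens once a nearby group's budgets cover the opening cost. It is not the algorithm analyzed in the paper.

```python
def greedy_with_budgets(arrivals, opening_cost, horizon=100, radius=1.0):
    """Toy budget-threshold rule for facility location with delay on a line:
    each waiting client's budget grows at unit rate; once the budgets of
    clients within `radius` of some waiting client's position sum to the
    opening cost, a facility opens there and connects that group."""
    pending = sorted(enumerate(arrivals), key=lambda kv: kv[1][0])
    waiting = []        # (client index, arrival time, position)
    opened = []         # positions of opened facilities
    connect_time = {}   # client index -> connection time
    for t in range(horizon):
        while pending and pending[0][1][0] <= t:
            i, (at, pos) = pending.pop(0)
            waiting.append((i, at, pos))
        for _, _, center in list(waiting):
            group = [c for c in waiting if abs(c[2] - center) <= radius]
            if sum(t - at for (_, at, _) in group) >= opening_cost:
                opened.append(center)
                for (i, _, _) in group:
                    connect_time[i] = t
                waiting = [c for c in waiting if c not in group]
                break   # at most one opening per time step in this sketch
    return opened, connect_time

# Hypothetical arrival stream: (arrival time, position on the line).
arrivals = [(0, 0.0), (1, 0.4), (2, 5.0), (10, 5.3)]
opened, connect_time = greedy_with_budgets(arrivals, opening_cost=4.0)
```

Nearby clients pool their budgets and are served quickly, while an isolated client must wait until its own budget pays for a facility; the competitive analysis in the paper bounds exactly this trade-off between waiting and opening costs.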