IA Scholar Query: Improved NP-Inapproximability for 2-Variable Linear Equations.
https://scholar.archive.org/
Internet Archive Scholar query results feed. Contact: info@archive.org. Generated: Thu, 15 Sep 2022.

The Biased Homogeneous r-Lin Problem
https://scholar.archive.org/work/wonnqyosi5cjln67qd3ys5q6ji
The p-biased Homogeneous r-Lin problem (Hom-r-Lin_p) is the following: given a homogeneous system of r-variable linear equations over F_2, the goal is to find an assignment of relative weight p that satisfies the maximum number of equations. In a celebrated work, Håstad (JACM 2001) showed that the unconstrained variant of this problem, i.e., Max-3-Lin, is hard to approximate beyond a factor of 1/2. This is tight due to the naive random guessing algorithm, which sets every variable uniformly from {0,1}. Subsequently, Holmerin and Khot (STOC 2004) showed that the same holds for the balanced Hom-r-Lin problem as well. In this work, we explore the approximability of the Hom-r-Lin_p problem beyond the balanced setting (i.e., p ≠ 1/2), and investigate whether the (p-biased) random guessing algorithm is optimal for every p. Our results include the following:
- The Hom-r-Lin_p problem has no efficient (1/2 + (1/2)(1 - 2p)^{r-2} + ε)-approximation algorithm for every p if r is even, and for p ∈ (0,1/2] if r is odd, unless NP ⊆ ∪_{ε>0} DTIME(2^{n^ε}).
- For any r and any p, there exists an efficient (1/2)(1 - e^{-2})-approximation algorithm for Hom-r-Lin_p. We show that this is also tight for odd values of r (up to o_r(1) additive factors) assuming the Unique Games Conjecture.
Our results imply that when r is even, then for large values of r, random guessing is near optimal for every p. On the other hand, when r is odd, our results illustrate an interesting contrast between the regimes p ∈ (0,1/2) (where random guessing is near optimal) and p → 1 (where random guessing is far from optimal). A key technical contribution of our work is a generalization of Håstad's 3-query dictatorship test to the p-biased setting.
Suprovat Ghoshal, Amit Chakrabarti, Chaitanya Swamy. Thu, 15 Sep 2022.

Approximating CSPs with Outliers
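A quick sanity check on the random guessing baseline from "The Biased Homogeneous r-Lin Problem" above: under a p-biased assignment, a fixed homogeneous equation x_1 + ... + x_r = 0 over F_2 is satisfied exactly when an even number of its variables are set to 1, which happens with probability 1/2 + (1/2)(1 - 2p)^r. A minimal Python verification of that closed form (function names are ours, not the paper's):

```python
from math import comb

def prob_even_parity(r: int, p: float) -> float:
    """P(x_1 + ... + x_r = 0 over F_2) when each x_i is i.i.d. Bernoulli(p)."""
    return sum(comb(r, k) * p**k * (1 - p) ** (r - k) for k in range(0, r + 1, 2))

def closed_form(r: int, p: float) -> float:
    """Standard closed form via E[(-1)^{x_i}] = 1 - 2p: 1/2 + (1/2)(1 - 2p)^r."""
    return 0.5 + 0.5 * (1 - 2 * p) ** r

# At p = 1/2 both degenerate to 1/2, the unbiased random guessing bound.
```

The closed form follows because each variable contributes an independent factor of (1 - 2p) to the parity bias.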
https://scholar.archive.org/work/eiki4obhgjd6plfuxtng4asrou
Constraint satisfaction problems (CSPs) are ubiquitous in theoretical computer science. We study the problem of Strong-CSPs, i.e., instances where a large induced sub-instance has a satisfying assignment. More formally, given a CSP instance 𝒢(V, E, [k], {Π_{ij}}_{(i,j) ∈ E}) consisting of a set of vertices V, a set of edges E, an alphabet [k], and a constraint Π_{ij} ⊂ [k] × [k] for each (i,j) ∈ E, the goal of this problem is to compute the largest subset S ⊆ V such that the instance induced on S has an assignment that satisfies all the constraints. In this paper, we study approximation algorithms for UniqueGames and related problems under the Strong-CSP framework when the underlying constraint graph satisfies mild expansion properties. In particular, we show that given a StrongUniqueGames instance whose optimal solution S^* is supported on a regular low threshold rank graph, there exists an algorithm that runs in time exponential in the threshold rank, and recovers a large satisfiable sub-instance whose size is independent of the label set size and maximum degree of the graph. Our algorithm combines the techniques of Barak-Raghavendra-Steurer (FOCS'11) and Guruswami-Sinop (FOCS'11) with several new ideas and runs in time exponential in the threshold rank of the optimal set. A key component of our algorithm is a new threshold rank based spectral decomposition, which is used to compute a "large" induced subgraph of "small" threshold rank; our techniques build on the work of Oveis Gharan and Rezaei (SODA'17), and could be of independent interest.
Suprovat Ghoshal, Anand Louis, Amit Chakrabarti, Chaitanya Swamy. Thu, 15 Sep 2022.

Sequential Decision Making With Information Asymmetry (Invited Talk)
https://scholar.archive.org/work/kaiflhcnrbfinidftym2feaa5i
We survey some recent results in sequential decision making under uncertainty, where there is an information asymmetry among the decision-makers. We consider two versions of the problem: persuasion and mechanism design. In persuasion, a more-informed principal influences the actions of a less-informed agent by signaling information. In mechanism design, a less-informed principal incentivizes a more-informed agent to reveal information by committing to a mechanism, so that the principal can make more informed decisions. We define Markov persuasion processes and Markov mechanism processes, which cast persuasion and mechanism design in dynamic models. We then survey results on optimal persuasion and optimal mechanism design for myopic and far-sighted agents. These problems are solvable in polynomial time for myopic agents but hard for far-sighted agents.
Jiarui Gan, Rupak Majumdar, Goran Radanovic, Adish Singla, Bartek Klin, Sławomir Lasota, Anca Muscholl. Tue, 06 Sep 2022.

Resolving Infeasibility of Linear Systems: A Parameterized Approach
https://scholar.archive.org/work/bjiqf3snbnab5jar2rvgxgtfby
Deciding feasibility of large systems of linear equations and inequalities is one of the most fundamental algorithmic tasks. However, due to data inaccuracies or modeling errors, in practical applications one often faces linear systems that are infeasible. Extensive theoretical and practical methods have been proposed for post-infeasibility analysis of linear systems. This generally amounts to detecting a feasibility blocker of small size k, which is a set of equations and inequalities whose removal or perturbation from the large system of size m yields a feasible system. This motivates a parameterized approach towards post-infeasibility analysis, where we aim to find a feasibility blocker of size at most k (the MinFB problem) in fixed-parameter time f(k) · m^{O(1)}. We establish parameterized intractability (W[1]- and NP-hardness) results already in very restricted settings for different choices of the parameters: maximum size of a deletion set, number of positive/negative right-hand sides, treewidth, pathwidth, and treedepth. Additionally, we rule out a polynomial compression for MinFB parameterized by the size of a deletion set and the number of negative right-hand sides. Furthermore, we develop fixed-parameter algorithms parameterized by various combinations of these parameters when every row of the system corresponds to a difference constraint. Our algorithms capture the case of Directed Feedback Arc Set, a fundamental parameterized problem whose fixed-parameter tractability was shown by Chen et al. (STOC 2008).
Kristóf Bérczi, Alexander Göke, Lydia Mirabel Mendoza-Cadena, Matthias Mnich. Mon, 05 Sep 2022.

Algorithmic Information Design in Multi-Player Games: Possibility and Limits in Singleton Congestion
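To make the difference-constraint setting of "Resolving Infeasibility of Linear Systems" above concrete: a system of rows x_i - x_j ≤ b is feasible exactly when its constraint graph (an edge j → i of weight b per row) has no negative cycle, which Bellman-Ford relaxation detects. This connection to graph problems is what links the setting to Directed Feedback Arc Set. A small feasibility checker (our own illustration, not the paper's algorithm):

```python
def difference_system_feasible(n, constraints):
    """constraints: list of (i, j, b) rows meaning x_i - x_j <= b, variables 0..n-1.
    Feasible iff the constraint graph (edge j -> i, weight b) has no negative cycle."""
    # All-zero start acts like a virtual source with 0-weight edges to every node.
    dist = [0.0] * n
    for _ in range(n + 1):  # Bellman-Ford relaxation rounds
        updated = False
        for i, j, b in constraints:
            if dist[j] + b < dist[i]:
                dist[i] = dist[j] + b
                updated = True
        if not updated:
            return True  # converged: dist itself is a feasible assignment
    return False  # still improving after n+1 rounds: negative cycle, infeasible
```

For example, {x_0 - x_1 ≤ 1, x_1 - x_0 ≤ 0} is feasible (cycle weight 1 ≥ 0), while {x_0 - x_1 ≤ -1, x_1 - x_0 ≤ 0} is not (cycle weight -1).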
https://scholar.archive.org/work/pnukf7wdyrbodmcxk5vstwg4ou
Most algorithmic studies on multi-agent information design so far have focused on the restricted situation with no inter-agent externalities; a few exceptions investigated truly strategic games such as zero-sum games and second-price auctions, but have all focused only on optimal public signaling. This paper initiates the algorithmic information design of both public and private signaling in a fundamental class of games with negative externalities, namely singleton congestion games, which have wide applications in today's digital economy, machine scheduling, routing, etc. For both public and private signaling, we show that the optimal information design can be efficiently computed when the number of resources is a constant. To our knowledge, this is the first set of efficient exact algorithms for information design in succinctly representable many-player games. Our results hinge on novel techniques such as developing certain "reduced forms" to compactly characterize equilibria in public signaling or to represent players' marginal beliefs in private signaling. When there are many resources, we show computational intractability results. To overcome the issue of multiple equilibria, we introduce a new notion of equilibrium-oblivious hardness, which rules out any possibility of computing a good signaling scheme, irrespective of the equilibrium selection rule.
Chenghan Zhou, Thanh H. Nguyen, Haifeng Xu. Mon, 05 Sep 2022.

A Framework for Computing Greedy Clique Cover
https://scholar.archive.org/work/xliexmencrc3hg5puvvlli6tti
Structural parameters of graphs (such as degeneracy and arboricity) have rarely been considered when designing algorithms for (edge) clique cover problems. Taking the degeneracy of a graph into account, we present a greedy framework and two fixed-parameter tractable algorithms for clique cover problems. We introduce a set-theoretic concept and demonstrate its use in the computation of different objectives of clique cover. Furthermore, we show the efficacy of our algorithms in practice.
Ahammed Ullah. Fri, 02 Sep 2022.

Inapproximability of Counting Hypergraph Colourings
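The greedy flavour of edge clique cover algorithms like those in "A Framework for Computing Greedy Clique Cover" can be illustrated with a generic, much simpler baseline: repeatedly take an uncovered edge, grow it into a maximal clique, and mark that clique's edges as covered. This sketch is our own illustration and not the paper's framework (which exploits degeneracy):

```python
from itertools import combinations

def greedy_edge_clique_cover(edges):
    """Cover all edges with cliques: seed with an uncovered edge, grow it maximally."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    uncovered = {frozenset(e) for e in edges}
    cliques = []
    while uncovered:
        u, v = tuple(next(iter(uncovered)))
        clique = {u, v}
        for w in adj[u] & adj[v]:  # candidates adjacent to the seed edge
            if all(w in adj[x] for x in clique):  # keep the set a clique
                clique.add(w)
        cliques.append(clique)
        for a, b in combinations(clique, 2):
            uncovered.discard(frozenset((a, b)))
    return cliques
```

On a triangle this produces a single clique; on a path with two edges it produces two.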
https://scholar.archive.org/work/isjlzjxaznakjgbquvxctzsrom
Recent developments in approximate counting have made startling progress in developing fast algorithmic methods for approximating the number of solutions to constraint satisfaction problems (CSPs) with large arities, using connections to the Lovász Local Lemma. Nevertheless, the boundaries of these methods for CSPs with non-Boolean domain are not well understood. Our goal in this paper is to fill in this gap and obtain strong inapproximability results by studying the prototypical problem in this class of CSPs, hypergraph colourings. More precisely, we focus on the problem of approximately counting q-colourings on K-uniform hypergraphs with bounded degree Δ. An efficient algorithm exists if Δ ≲ q^{K/3-1}/(4^K K^2) (Jain, Pham, and Vuong, 2021; He, Sun, and Wu, 2021). Somewhat surprisingly, however, a hardness bound is not known even for the easier problem of finding colourings. For the counting problem, the situation is even less clear, and there is no evidence of the right constant controlling the growth of the exponent in terms of K. To this end, we first establish that for general q, computational hardness for finding a colouring on simple/linear hypergraphs occurs at Δ ≳ K q^K, almost matching the algorithm from the Lovász Local Lemma. Our second and main contribution is a far more refined bound for the counting problem that goes well beyond the hardness of finding a colouring and which we conjecture is asymptotically tight (up to constant factors). We show in particular that for all even q ≥ 4 it is NP-hard to approximate the number of colourings when Δ ≳ q^{K/2}. Our approach is based on considering an auxiliary weighted binary CSP model on graphs, obtained by "halving" the K-ary hypergraph constraints. This allows us to utilise reduction techniques available for the graph case, which hinge upon understanding the behaviour on random regular bipartite graphs that serve as gadgets in the reduction.
The major challenge in our setting is to analyse the induced matrix norm of the interaction matrix of the new CSP, which captures the most likely solutions of the system. In contrast to previous analyses in the literature, the auxiliary CSP exhibits both symmetry and asymmetry, making the analysis of the optimisation problem severely more complicated and demanding a combination of delicate perturbation arguments and careful asymptotic estimates.
Andreas Galanis, Heng Guo, Jiaheng Wang. Fri, 02 Sep 2022.

(In-)Approximability Results for Interval, Resource Restricted, and Low Rank Scheduling
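For intuition about the quantity studied in "Inapproximability of Counting Hypergraph Colourings" above: a q-colouring of a K-uniform hypergraph is proper when no hyperedge is monochromatic, and the counting problem asks for the number of such colourings. On tiny instances this can be counted by brute force; the results above concern the hardness of approximating it at scale. Names in this sketch are ours:

```python
from itertools import product

def count_hypergraph_colourings(n, edges, q):
    """Count maps c: {0..n-1} -> {0..q-1} with no monochromatic hyperedge."""
    return sum(
        1
        for c in product(range(q), repeat=n)
        if all(len({c[v] for v in e}) > 1 for e in edges)
    )
```

For a single hyperedge on 3 vertices with q = 2, this gives 2^3 - 2 = 6: every assignment except the two monochromatic ones.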
https://scholar.archive.org/work/zl3qazos7rgrnaw67xs76cdfuq
We consider variants of the restricted assignment problem where a set of jobs has to be assigned to a set of machines, for each job a size and a set of eligible machines is given, and the jobs may only be assigned to eligible machines, with the goal of makespan minimization. For the variant with interval restrictions, where the machines can be arranged on a path such that each job is eligible on a subpath, we present the first better-than-2 approximation and an improved inapproximability result. In particular, we give a (2-1/24)-approximation and show that no better than 9/8-approximation is possible, unless P=NP. Furthermore, we consider restricted assignment with R resource restrictions and rank D unrelated scheduling. In the former problem, a machine may process a job if it can meet its resource requirements regarding R (renewable) resources. In the latter, the size of a job is dependent on the machine it is assigned to and the corresponding processing time matrix has rank at most D. The problem with interval restrictions includes the 1 resource variant, is encompassed by the 2 resource variant, and regarding approximation the R resource variant is essentially a special case of the rank R+1 problem. We show that no better than 3/2-, 8/7-, and 3/2-approximation is possible (unless P=NP) for the 3 resource, 2 resource, and rank 3 variant, respectively. Both the approximation result for the interval case and the inapproximability result for the rank 3 variant resolve open challenges stated in previous works. Lastly, we also consider the reverse objective, that is, maximizing the minimal load any machine receives, and achieve similar results.
Marten Maack, Simon Pukrop, Anna Rodriguez Rasmussen, Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, Grzegorz Herman. Thu, 01 Sep 2022.

Optimal Scenario Reduction for One- and Two-Stage Robust Optimization
https://scholar.archive.org/work/a23fh7ntc5g3fopketdj6uliie
Robust optimization typically follows a worst-case perspective, where a single scenario may determine the objective value of a given solution. Accordingly, it is a challenging task to reduce the size of an uncertainty set without changing the resulting objective value too much. On the other hand, robust optimization problems with many scenarios tend to be hard to solve, in particular for two-stage problems. Hence, a reduced uncertainty set may be central to finding solutions in reasonable time. We propose scenario reduction methods that give guarantees on the performance of the resulting robust solution. Scenario reduction problems for one- and two-stage robust optimization are framed as optimization problems that depend only on the uncertainty set and not on the underlying decision-making problem. Experimental results indicate that objective values for the reduced uncertainty sets are closely correlated with the original objective values, resulting in better solutions than when using general-purpose clustering methods such as K-means.
Marc Goerigk, Mohammad Khosravi. Thu, 01 Sep 2022.

Combinatorial Optimization via the Sum of Squares Hierarchy
https://scholar.archive.org/work/ksux7wlwmndldojrnagqqkvcdu
We study the Sum of Squares (SoS) Hierarchy with a view towards combinatorial optimization. We survey the use of the SoS hierarchy to obtain approximation algorithms on graphs using their spectral properties. We present a simplified proof of the result of Feige and Krauthgamer on the performance of the hierarchy for the Maximum Clique problem on random graphs. We also present a result of Guruswami and Sinop that shows how to obtain approximation algorithms for the Minimum Bisection problem on low threshold-rank graphs. We study inapproximability results for the SoS hierarchy for general constraint satisfaction problems and problems involving graph densities such as the Densest k-subgraph problem. We improve the existing inapproximability results for general constraint satisfaction problems in the case of large arity, using stronger probabilistic analyses of expansion of random instances. We examine connections between constraint satisfaction problems and density problems on graphs. Using them, we obtain new inapproximability results for the hierarchy for the Densest k-subhypergraph problem and the Minimum p-Union problem, which are proven via reductions. We also illustrate the relatively new idea of pseudocalibration to construct integrality gaps for the SoS hierarchy for Maximum Clique and Max K-CSP. The application to Max K-CSP that we present is known in the community but has not been presented before in the literature, to the best of our knowledge.
Goutham Rajendran. Thu, 01 Sep 2022.

The Complexity of the Hausdorff Distance
https://scholar.archive.org/work/agchijsjlbflfejxzuspahjccy
We investigate the computational complexity of computing the Hausdorff distance. Specifically, we show that the decision problem of whether the Hausdorff distance of two semi-algebraic sets is bounded by a given threshold is complete for the complexity class ∀∃_<ℝ. This implies that the problem is NP-, co-NP-, ∃ℝ- and ∀ℝ-hard.
Paul Jungeblut, Linda Kleist, Tillmann Miltzow. Thu, 25 Aug 2022.

The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality
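The hardness in "The Complexity of the Hausdorff Distance" concerns semi-algebraic sets; for finite point sets the definition is directly computable, which is a useful contrast. The Hausdorff distance is the larger of the two directed distances max_{a∈A} min_{b∈B} d(a,b) and max_{b∈B} min_{a∈A} d(a,b). A minimal sketch for finite sets (our own, unrelated to the paper's algorithms):

```python
from math import dist  # Euclidean distance between coordinate tuples (Python 3.8+)

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets given as coordinate tuples."""
    directed = lambda X, Y: max(min(dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))
```

For example, the distance between {(0,0)} and {(3,4)} is 5, and the asymmetry of the directed distances is why both directions are taken.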
https://scholar.archive.org/work/yopjyqefsvanxjh2epcq6wwtwa
Understanding the computational complexity of training simple neural networks with rectified linear units (ReLUs) has recently been a subject of intensive research. Closing gaps and complementing results from the literature, we present several results on the parameterized complexity of training two-layer ReLU networks with respect to various loss functions. After a brief discussion of other parameters, we focus on analyzing the influence of the dimension d of the training data on the computational complexity. We provide running time lower bounds in terms of W[1]-hardness for parameter d and prove that known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis). In comparison with previous work, our results hold for a broad(er) range of loss functions, including ℓ_p-loss for all p ∈ [0, ∞]. In particular, we improve a known polynomial-time algorithm for constant d and convex loss functions to a more general class of loss functions, matching our running time lower bounds also in these cases.
Vincent Froese, Christoph Hertrich, Rolf Niedermeier. Mon, 22 Aug 2022.

Inapproximability of counting hypergraph colourings
https://scholar.archive.org/work/nwlm7xpp3vblpotuud4mjnrtke
Recent developments in approximate counting have made startling progress in developing fast algorithmic methods for approximating the number of solutions to constraint satisfaction problems (CSPs) with large arities, using connections to the Lovász Local Lemma. Nevertheless, the boundaries of these methods for CSPs with non-Boolean domain are not well understood. Our goal in this paper is to fill in this gap and obtain strong inapproximability results by studying the prototypical problem in this class of CSPs, hypergraph colourings. More precisely, we focus on the problem of approximately counting q-colourings on K-uniform hypergraphs with bounded degree Δ. An efficient algorithm exists if Δ ≲ q^{K/3-1}/(4^K K^2) (Jain, Pham, and Vuong, 2021; He, Sun, and Wu, 2021). Somewhat surprisingly, however, a hardness bound is not known even for the easier problem of finding colourings. For the counting problem, the situation is even less clear, and there is no evidence of the right constant controlling the growth of the exponent in terms of K. To this end, we first establish that for general q, computational hardness for finding a colouring on simple/linear hypergraphs occurs at Δ ≳ K q^K, almost matching the algorithm from the Lovász Local Lemma. Our second and main contribution is a far more refined bound for the counting problem that goes well beyond the hardness of finding a colouring and which we conjecture is asymptotically tight (up to constant factors). We show in particular that for all even q ≥ 4 it is NP-hard to approximate the number of colourings when Δ ≳ q^{K/2}.
Andreas Galanis, Heng Guo, Jiaheng Wang. Sat, 20 Aug 2022.

The Simultaneous Assignment Problem
https://scholar.archive.org/work/wt2j6qnvijguplwmm66pjagkxa
This paper introduces the Simultaneous Assignment Problem. Here, we are given an assignment problem on some of the subgraphs of a given graph, and we are looking for a heaviest assignment which is feasible when restricted to any of the assignment problems. More precisely, we are given a graph with a weight function and a capacity function on its edges, and a set of its subgraphs H_1,...,H_k along with a degree upper bound function for each of them. In addition, we are also given a laminar system on the node set with an upper bound on the degree-sum of the nodes in each set of the system. We want to assign each edge a non-negative integer below its capacity such that the total weight is maximized, the degrees in each subgraph are below the degree upper bound associated with the subgraph, and the degree-sum bound is respected in each set of the laminar system. The problem is shown to be APX-hard in the unweighted case even if the graph is a forest and k=2. This also implies that the Distance matching problem is APX-hard in the weighted case and that the Cyclic distance matching problem is APX-hard in the unweighted case. We identify multiple special cases when the problem can be solved in strongly polynomial time. One of these cases, the so-called locally laminar case, is a common generalization of the Hierarchical b-matching problem and the Laminar matchoid problem, and it implies that both of these problems can be solved efficiently in the weighted, capacitated case, improving upon the most general polynomial-time algorithms known for these problems. The problem can be constant-approximated when k is a constant, and we show that the approximation factor matches the integrality gap of a strengthened LP-relaxation for small k. We give improved approximation algorithms for special cases, for example, when the degree bounds are uniform or the graph is sparse.
Péter Madarasi. Tue, 09 Aug 2022.

Maximizing Fair Content Spread via Edge Suggestion in Social Networks
https://scholar.archive.org/work/g3q3qqxhdrh7fbnaky7u637oby
Content spread inequity is a potential unfairness issue in online social networks, disparately impacting minority groups. In this paper, we view friendship suggestion, a common feature in social network platforms, as an opportunity to achieve an equitable spread of content. In particular, we propose to suggest a subset of potential edges (currently not existing in the network but likely to be accepted) that maximizes content spread while achieving fairness. Instead of re-engineering the existing systems, our proposal builds a fairness wrapper on top of the existing friendship suggestion components. We prove the problem is NP-hard and inapproximable in polynomial time unless P = NP. Therefore, allowing relaxation of the fairness constraint, we propose an algorithm based on LP-relaxation and randomized rounding with fixed approximation ratios on fairness and content spread. We provide multiple optimizations, further improving the performance of our algorithm in practice. In addition, we propose a scalable algorithm that dynamically adds subsets of nodes, chosen via iterative sampling, and solves smaller problems corresponding to these nodes. Beyond the theoretical analysis, we conduct comprehensive experiments on real and synthetic data sets. Across different settings, our algorithms found solutions with near-zero unfairness while significantly increasing the content spread. Our scalable algorithm could process a graph with half a million nodes on a single machine, reducing the unfairness to around 0.0004 while lifting content spread by 43%.
Ian P. Swift, Sana Ebrahimi, Azade Nova, Abolfazl Asudeh. Sat, 06 Aug 2022.

On Lower Bounds of Approximating Parameterized k-Clique
https://scholar.archive.org/work/zr3wud6tirc3tjm2vcnifyc5ee
Given a simple graph G and an integer k, the goal of the k-Clique problem is to decide if G contains a complete subgraph of size k. We say an algorithm approximates k-Clique within a factor g(k) if it can find a clique of size at least k/g(k) when G is guaranteed to have a k-clique. Recently, it was shown that approximating k-Clique within a constant factor is W[1]-hard [Lin21]. We study the approximation of k-Clique under the Exponential Time Hypothesis (ETH). The reduction of [Lin21] already implies an n^{Ω(√(log k))}-time lower bound under ETH. We improve this lower bound to n^{Ω(log k)}. Using the gap-amplification technique via expander graphs, we also prove that there is no k^{o(1)}-factor FPT-approximation algorithm for k-Clique under ETH. We also suggest a new way to prove the Parameterized Inapproximability Hypothesis (PIH) under ETH. We show that if there is no n^{O(k/log k)}-time algorithm to approximate k-Clique within a constant factor, then PIH is true.
Bingkai Lin, Xuandi Ren, Yican Sun, Xiuhan Wang. Wed, 03 Aug 2022.

Efficiently Computing Nash Equilibria in Adversarial Team Markov Games
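For contrast with the lower bounds in "On Lower Bounds of Approximating Parameterized k-Clique": the trivial exact algorithm checks all size-k vertex subsets in n^{O(k)} time, and the results above say that even approximating the clique size cannot improve much on this kind of exponent. A toy brute-force check (our own sketch):

```python
from itertools import combinations

def has_k_clique(adj, k):
    """adj: dict mapping node -> set of neighbours. Exhaustive n^{O(k)}-time check."""
    return any(
        all(v in adj[u] for u, v in combinations(S, 2))  # every pair adjacent
        for S in combinations(list(adj), k)
    )
```

A triangle contains a 3-clique; a path on three vertices does not.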
https://scholar.archive.org/work/n6lwxsnznfguveunlemleg67gq
Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, provable guarantees have been thus far either limited to fully competitive or cooperative scenarios or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players – in the absence of any explicit coordination or communication – is competing against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step to model more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary ϵ-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as 1/ϵ. The proposed algorithm is particularly natural and practical, and it is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. 
Along the way, we significantly extend an important characterization of optimal policies in adversarial (normal-form) team games due to Von Stengel and Koller (GEB '97).
Fivos Kalogiannis, Ioannis Anagnostides, Ioannis Panageas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Vaggos Chatziafratis, Stelios Stavroulakis. Wed, 03 Aug 2022.

Towards a Theory of Maximal Extractable Value I: Constant Function Market Makers
https://scholar.archive.org/work/xl6kjs3yirfddp37b2g5ncpjxi
Maximal Extractable Value (MEV) represents excess value captured by miners (or validators) from users in a cryptocurrency network. This excess value often comes from reordering users' transactions to maximize fees, or from inserting new transactions that allow a miner to front-run users' transactions. The most common type of MEV involves what is known as a sandwich attack against a user trading on a popular class of automated market makers known as constant function market makers (CFMMs). In this first paper of a series on MEV, we analyze game-theoretic properties of MEV in CFMMs that we call reordering and routing MEV. In the case of reordering, we give conditions under which the maximum price impact caused by the reordering of sandwich attacks in a sequence of trades, relative to the average price impact, is O(log n) in the number of user trades. In the case of routing, we present examples where the existence of MEV both degrades and, counterintuitively, improves the quality of routing. We construct an analogue of the price of anarchy for this setting and demonstrate that if the impact of a sandwich attack is localized in a suitable sense, then the price of anarchy is constant. Combined, our results provide improvements that both MEV searchers and CFMM designers can utilize for estimating costs and profits of MEV.
Kshitij Kulkarni, Theo Diamandis, Tarun Chitra. Sun, 24 Jul 2022.

Maximizing coverage while ensuring fairness: a tale of conflicting objectives
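To ground the sandwich-attack discussion in "Towards a Theory of Maximal Extractable Value I": in a constant-product CFMM (reserves x, y with x·y held invariant, one common instance of a CFMM), a front-running trade moves the price so that the victim's identical trade returns strictly less output. A fee-free numerical illustration (our own toy model, not the paper's formal setup):

```python
def swap_out(x, y, dx):
    """Output of trading dx of asset X into a fee-free constant-product pool (x, y)."""
    return y - (x * y) / (x + dx)

# Victim trades 10 units of X into a (100, 100) pool.
alone = swap_out(100, 100, 10)
# Attacker front-runs with 10 units first, shifting reserves to (110, 10000/110).
x2, y2 = 110, (100 * 100) / 110
sandwiched = swap_out(x2, y2, 10)
# The victim receives strictly less after the front-run; the attacker can then
# sell back into the inflated price (the "back-run" leg of the sandwich).
```

Here `alone` is about 9.09 while `sandwiched` is about 7.58, so the front-run costs the victim roughly 17% of their output in this toy pool.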
https://scholar.archive.org/work/34lzbzaa6natfguedzhslzxb24
Ensuring fairness in computational problems has emerged as a key topic during recent years, buoyed by considerations of equitable resource distribution and social justice. It is possible to incorporate fairness in computational problems from several perspectives, such as using optimization, game-theoretic, or machine learning frameworks. In this paper, we address the incorporation of fairness from a combinatorial optimization perspective. We formulate a combinatorial optimization framework, suitable for analysis by researchers in approximation algorithms and related areas, that incorporates fairness in maximum coverage problems as an interplay between two conflicting objectives. Fairness is imposed in coverage by using coloring constraints that minimize the discrepancies between the numbers of elements of different colors covered by the selected sets; this is in contrast to the usual discrepancy minimization problems studied extensively in the literature, where (usually two) colors are not given a priori but need to be selected to minimize the maximum color discrepancy of each individual set. Our main results are a set of randomized and deterministic approximation algorithms that attempt to simultaneously approximate both fairness and coverage in this framework.
Abolfazl Asudeh, Tanya Berger-Wolf, Bhaskar DasGupta, Anastasios Sidiropoulos. Tue, 19 Jul 2022.

Sequential Competitive Facility Location: Exact and Approximate Algorithms
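The coverage side of "Maximizing coverage while ensuring fairness" builds on classic maximum coverage, where the greedy rule (always pick the set covering the most new elements) achieves the well-known (1 - 1/e) guarantee; the coloring constraints above are what make the combined problem harder. A plain greedy sketch without any fairness constraint (names are ours):

```python
def greedy_max_coverage(sets, k):
    """Pick k sets greedily by marginal coverage; classic (1 - 1/e)-approximation."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

With sets {1,2,3}, {3,4}, {5,6,7,8} and k = 2, greedy first takes the 4-element set, then the 3-element one.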
https://scholar.archive.org/work/3y7vv37scvhzpei4jc5cufjjpu
We study a competitive facility location problem (CFLP), where two firms sequentially open new facilities within their budgets, in order to maximize their market shares of demand that follows a probabilistic choice model. This process is a Stackelberg game and admits a bilevel mixed-integer nonlinear program (MINLP) formulation. We derive an equivalent, single-level MINLP reformulation and exploit the problem structures to derive two valid inequalities, based on submodularity and concave overestimation, respectively. We use the two valid inequalities in a branch-and-cut algorithm to find globally optimal solutions. Then, we propose an approximation algorithm to find good-quality solutions with a constant approximation guarantee. We develop several extensions by considering general facility-opening costs, outside competitors, as well as diverse facility-planning decisions, and discuss solution approaches for each extension. We conduct numerical studies to demonstrate that the exact algorithm significantly accelerates the computation of CFLP on large-sized instances that have not been solved optimally or even heuristically by existing methods, and the approximation algorithm can quickly find high-quality solutions. We derive managerial insights based on sensitivity analysis of different settings that affect customers' probabilistic choices and the ensuing demand.
Mingyao Qi, Ruiwei Jiang, Siqian Shen. Sun, 17 Jul 2022.