A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; the original URL is also available. The file type is `application/pdf`.
### Sampling Arborescences in Parallel
[article] · 2020 · *arXiv* pre-print · arXiv:2012.09502v1 · fatcat:p3e54gol4jbblkhol3bc3vpm4y

More recently, *Anari*, Liu, Oveis Gharan, and Vinzant [Ana+20] improved this to nearly-linear time. Many of these works are based on speeding up the Aldous-Broder algorithm. ...
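The Aldous-Broder algorithm mentioned in the snippet samples a uniformly random spanning tree by recording the first-entrance edge of each vertex along a random walk. A minimal sketch of the undirected variant (the paper concerns directed arborescences; the adjacency-list representation and function name here are my own):

```python
import random

def aldous_broder(adj, seed=0):
    """Sample a uniformly random spanning tree of a connected graph:
    walk at random until every vertex has been visited, keeping the
    edge by which each vertex was first entered."""
    rng = random.Random(seed)
    n = len(adj)
    v = rng.randrange(n)
    seen = {v}
    tree = []
    while len(seen) < n:
        w = rng.choice(adj[v])  # step to a uniformly random neighbor
        if w not in seen:       # first entrance: record the edge
            seen.add(w)
            tree.append((v, w))
        v = w
    return tree

# Triangle graph: any run returns a 2-edge spanning tree.
tree = aldous_broder([[1, 2], [0, 2], [0, 1]], seed=1)
```

The cover time of the walk bounds the running time, which is why the speedups cited above matter.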
### Graph Clustering using Effective Resistance
[article] · 2017 · *arXiv* pre-print · arXiv:1711.06530v1 · fatcat:cqe3mq5dyjf2tm7eugmyepuxqi

We design a polynomial time algorithm that, for any weighted undirected graph G = (V, E, w) and sufficiently large δ > 1, partitions V into subsets V_1, …, V_h for some h ≥ 1, such that: at most a δ^{-1} fraction of the weight is between clusters, i.e. w(E − ∪_{i=1}^h E(V_i)) ≲ w(E)/δ; and the effective resistance diameter of each induced subgraph G[V_i] is at most δ^3 times the inverse of the average weighted degree, i.e. max_{u,v ∈ V_i} Reff_{G[V_i]}(u, v) ≲ δ^3 · |V|/w(E) for all i = 1, …, h. In particular, it is possible to remove one percent of the weight of the edges of any given graph such that each of the resulting connected components has effective resistance diameter at most the inverse of the average weighted degree. Our proof is based on a new connection between effective resistance and low-conductance sets: we show that if the effective resistance between two vertices u and v is large, then there must be a low-conductance cut separating u from v. This implies that very mildly expanding graphs have constant effective resistance diameter. We believe that this connection could be of independent interest in algorithm design.
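Effective resistance, the central quantity in this abstract, can be computed from the pseudoinverse of the graph Laplacian. A small sketch for the unweighted case (function name mine):

```python
import numpy as np

def effective_resistance(adj, u, v):
    """Reff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v),
    where L = D - A is the graph Laplacian and L^+ its pseudoinverse."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    e = np.zeros(len(adj))
    e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

# Path graph 0-1-2: unit-resistance edges in series add, so Reff(0, 2) = 2.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
```

On a path, resistances add in series, matching the electrical-network intuition behind the clustering guarantee.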
### Instance Based Approximations to Profile Maximum Likelihood
[article] · 2020 · *arXiv* pre-print

*Nima Anari*, Moses Charikar, Kirankumar Shiragur, and Aaron Sidford [ACSS20]. ... In Appendix B.4, we provide experiments for other distributions and compare the performance of the PseudoPML approach implemented using our algorithm with a heuristic approximate PML ...
### Budget Feasible Procurement Auctions
2018 · *Operations Research* · doi:10.1287/opre.2017.1693 · fatcat:2jze4wzncnbnbndyssbsl3xkfa

We consider a simple and well-studied model for procurement problems and solve it to optimality. A buyer with a fixed budget wants to procure, from a set of available workers, a budget-feasible subset that maximizes her utility: each worker has a private reservation price and provides a publicly known utility to the buyer in case of being procured. The buyer's utility function is additive over items. The goal is to design a direct revelation mechanism that solicits workers' reservation prices and decides which workers to recruit and how much to pay them. Moreover, the mechanism has to maximize the buyer's utility without violating her budget constraint. We study this problem in the prior-free setting; our main contribution is finding the optimal mechanism in this setting under the "Small Bidders" assumption. This assumption, also known as the "small bid to budget ratio" assumption, states that the bid of each seller is small compared to the buyer's budget. We also study a more general class of utility functions, submodular utility functions; for this class, we improve the existing mechanisms significantly under our assumption.

We require any mechanism M = (A, P) to satisfy the following properties:
1. Budget feasibility: the sum of the payments made to the sellers does not exceed B.
2. Individual rationality: a winner i ∈ S is paid at least c_i.
3. Truthfulness/incentive compatibility: reporting the true cost is a dominant strategy for sellers, i.e. no seller i can gain by a non-truthful report c'_i.

Defining a benchmark. Among all mechanisms that satisfy the above properties, we are interested in the one that maximizes the utility of the buyer with respect to the following benchmark. Let U*(c, u) denote the utility of the omniscient mechanism, i.e. the value of the knapsack optimization problem assuming that the costs of the sellers are known to the buyer. When there is no risk of confusion, we also denote U*(c, u) by U* for brevity.

Definition 1. A mechanism M is α-competitive when α ∈ [0, 1] is the largest scalar for which the mechanism derives utility at least α · U*(c, u) for all c and u.

Our main contribution is finding the mechanism that attains the highest possible competitive ratio in the class of truthful mechanisms.

The Small Bidders Assumption. Assume that c_max ≪ B, where c_max = max_{i∈S} c_i. An alternative way to write the assumption is c_max = o(B); in other words, we define the bid-budget ratio of the market to be θ = c_max/B and analyze our mechanisms as θ → 0. Our mechanisms, however, do not need a very small θ to perform well; this is elaborated in the discussion of our results in Section 3, where we note that even for θ as large as 1/20 our mechanisms perform very close to the optimum.
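The benchmark U*(c, u) and the bid-budget ratio θ defined above can be made concrete in a few lines. A sketch (brute-force knapsack, illustrative only; function names are mine, not the paper's):

```python
from itertools import combinations

def bid_budget_ratio(costs, budget):
    """theta = c_max / B; the Small Bidders assumption is theta -> 0."""
    return max(costs) / budget

def omniscient_utility(costs, utils, budget):
    """The benchmark U*(c, u): the knapsack optimum when the sellers'
    costs are known.  Brute force over subsets -- tiny instances only."""
    n = len(costs)
    best = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if sum(costs[i] for i in S) <= budget:
                best = max(best, sum(utils[i] for i in S))
    return best

# Three sellers, budget 7: the best feasible set is {0, 1} with utility 9.
u_star = omniscient_utility([3, 4, 5], [4, 5, 6], 7)
theta = bid_budget_ratio([3, 4, 5], 100)
```

An α-competitive mechanism must extract at least α · U* without knowing the costs, which is what makes the benchmark demanding.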
### Equilibrium Pricing with Positive Externalities (Extended Abstract)
[chapter] · 2010 · *Lecture Notes in Computer Science* · doi:10.1007/978-3-642-17572-5_35 · fatcat:4aolx3syyrbrdlacqetmq6gbqy

We study the problem of selling an item to strategic buyers in the presence of positive historical externalities, where the value of a product increases as more people buy and use it. This increase in the value of the product is the result of resolving bugs or security holes after more usage. We consider a continuum of buyers that are partitioned into types, where each type has a valuation function based on the actions of other buyers. Given a fixed sequence of prices, or price trajectory, buyers choose a day on which to purchase the product, i.e., they have to decide whether to purchase the product early in the game or later, after more people already own it. We model this strategic setting as a game, study existence and uniqueness of the equilibria, and design an FPTAS to compute an approximately revenue-maximizing price trajectory for the seller in two special cases: the symmetric setting, in which there is just a single buyer type, and the linear setting, which is characterized by an initial type-independent bias and a linear type-dependent influenceability coefficient.
### Learning Multimodal Rewards from Rankings
[article] · 2021 · *arXiv* pre-print · arXiv:2109.12750v2 · fatcat:sjjuye327zbyphtth2hdx6rdoa

Learning from human feedback has been shown to be a useful approach to acquiring robot reward functions. However, expert feedback is often assumed to be drawn from an underlying unimodal reward function. This assumption does not always hold, including in settings where multiple experts provide data or when a single expert provides data for different tasks; we thus go beyond learning a unimodal reward and focus on learning a multimodal reward function. We formulate multimodal reward learning as a mixture learning problem and develop a novel ranking-based learning approach, where the experts are only required to rank a given set of trajectories. Furthermore, as access to interaction data is often expensive in robotics, we develop an active querying approach to accelerate the learning process. We conduct experiments and user studies using a multi-task variant of OpenAI's LunarLander and a real Fetch robot, where we collect data from multiple users with different preferences. The results suggest that our approach can efficiently learn multimodal reward functions and improve data-efficiency over benchmark methods that we adapt to our learning problem.
### Approximating the Largest Root and Applications to Interlacing Families
[article] · 2017 · *arXiv* pre-print · arXiv:1704.03892v1 · fatcat:bsnccg5bcvff7hbiqlbomorlt4

We study the problem of approximating the largest root of a real-rooted polynomial of degree $n$ using its top $k$ coefficients and give nearly matching upper and lower bounds. We present algorithms with running time polynomial in $k$ that use the top $k$ coefficients to approximate the maximum root within a factor of $n^{1/k}$ and $1+O(\tfrac{\log n}{k})^2$ when $k\leq \log n$ and $k>\log n$, respectively. We also prove corresponding information-theoretic lower bounds of $n^{\Omega(1/k)}$ and $1+\Omega\left(\frac{\log \frac{2n}{k}}{k}\right)^2$, and show strong lower bounds for the noisy version of the problem, in which one is given access to approximate coefficients. This problem has applications in the context of the method of interlacing families of polynomials, which was used for proving the existence of Ramanujan graphs of all degrees, the solution of the Kadison-Singer problem, and bounding the integrality gap of the asymmetric traveling salesman problem. All of these involve computing the maximum root of certain real-rooted polynomials for which the top few coefficients are accessible in subexponential time. Our results yield an algorithm with running time $2^{\tilde O(\sqrt[3]{n})}$ for all of them.
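One classical way the top $k$ coefficients constrain the largest root: Newton's identities recover the first $k$ power sums of the roots, and for nonnegative real roots $p_k^{1/k}$ overestimates the largest root by at most a factor $n^{1/k}$, matching the factor quoted above. A sketch of that estimate only (not the paper's algorithm; function name mine):

```python
def power_sums(coeffs, k):
    """First k power sums p_1..p_k of the roots of a monic polynomial
    coeffs = [1, c_1, ..., c_n], via Newton's identities.
    Elementary symmetric polynomials: e_i = (-1)^i * c_i."""
    n = len(coeffs) - 1
    e = [(-1) ** i * coeffs[i] if i <= n else 0.0 for i in range(k + 1)]
    p = [0.0] * (k + 1)
    for m in range(1, k + 1):
        # p_m = sum_{i=1}^{m-1} (-1)^{i-1} e_i p_{m-i} + (-1)^{m-1} m e_m
        p[m] = (-1) ** (m - 1) * m * e[m] \
            + sum((-1) ** (i - 1) * e[i] * p[m - i] for i in range(1, m))
    return p[1:]

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: power sums 6, 14, 36, 98.
ps = power_sums([1, -6, 11, -6], 4)
estimate = ps[3] ** 0.25  # p_4^{1/4}, between 3 and 3^{1/4} * 3
```

Since $\lambda_{\max}^k \le p_k \le n\,\lambda_{\max}^k$ for nonnegative roots, the estimate lands within the stated $n^{1/k}$ window.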
### Euclidean movement minimization
2015 · *Journal of Combinatorial Optimization* · doi:10.1007/s10878-015-9842-5 · fatcat:ecsutts3onanpknsymelo4sr34

We consider a class of optimization problems called movement minimization on the Euclidean plane. Given a set of nodes on the plane, the aim is to achieve some specific property by minimum movement of the nodes. We consider two specific properties, namely connectivity (Con) and realization of a given topology (Topol). By minimum movement, we mean either the sum of all movements (Sum) or the maximum movement (Max). We obtain several approximation algorithms and some hardness results for these four problems. We obtain an O(m)-factor approximation for ConMax and ConSum and an O(√(m/OPT))-factor approximation for ConMax. We also extend some known results on graphical grounds in [1, 2] and obtain inapproximability results on geometrical grounds. For the Topol problem (where the final configuration of the nodes must correspond to a given topology), we find the situation much simpler and provide an FPTAS for both the Max and Sum versions.

Introduction. Consider a number of movable robots distributed over a plane in a far-flung manner. Each robot has an antenna with a limited maximum range, denoted by r_max. Robot s can communicate directly with robot t if and only if their distance is less than r_max. Robot s can also communicate indirectly with t if there is an ordered set of robots s = r_1, r_2, …, r_p = t such that each r_i can directly communicate with r_{i+1}. With this explanation, we can form a dynamic graph whose vertices are the movable robots on the plane and whose edges are formed by connecting each robot to every other robot residing in the disk of radius r_max around it. These geometric graphs are called unit disk graphs (UDGs).
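The UDG model in the introduction is easy to instantiate: connect two robots iff their distance is below r_max and check reachability. A minimal sketch (strict `<` per the definition above; names mine):

```python
import math
from itertools import combinations

def is_connected_udg(points, r_max):
    """Build the unit disk graph (edge iff distance < r_max)
    and check connectivity with a depth-first search."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if math.dist(points[i], points[j]) < r_max:
            adj[i].append(j)
            adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

# Three robots in a row, 0.9 apart: connected with range 1.0, not with 0.5.
pts = [(0.0, 0.0), (0.9, 0.0), (1.8, 0.0)]
```

The movement problems then ask how far the points must move before this check succeeds (Con) or before a prescribed topology is realized (Topol).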
### A Generalization of Permanent Inequalities and Applications in Counting and Optimization
[article] · 2017 · *arXiv* pre-print · arXiv:1702.02937v1 · fatcat:yos2mgknknejnnq5fj3hnd4wiu

A polynomial p ∈ R[z_1, …, z_n] is real stable if it has no roots in the upper-half complex plane. Gurvits's permanent inequality gives a lower bound on the coefficient of the z_1 z_2 ⋯ z_n monomial of a real stable polynomial p with nonnegative coefficients. This fundamental inequality has been used to attack several counting and optimization problems. Here, we study a more general question: given a stable multilinear polynomial p with nonnegative coefficients and a set of monomials S, we show that if the polynomial obtained by summing up all monomials in S is real stable, then we can lower-bound the sum of coefficients of monomials of p that are in S. We also prove generalizations of this theorem to (real stable) polynomials that are not multilinear. We use our theorem to give a new proof of Schrijver's inequality on the number of perfect matchings of a regular bipartite graph, generalize a recent result of Nikolov and Singh, and give deterministic polynomial time approximation algorithms for several counting problems.
### A Tight Analysis of Bethe Approximation for Permanent
[article] · 2019 · *arXiv* pre-print · arXiv:1811.02933v2 · fatcat:7hxms2beffdynjxhgl5pygrthi

For example, ... *Anari* et al. ... For two recent alternative proofs see Csikvári [Csi14] and *Anari* and Oveis Gharan [AO17]. The matrix P maximizing β(A, P) is not necessarily obtained from A by matrix scaling. ...
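The snippet contrasts the Bethe-optimal P with matrix scaling; doubly stochastic scaling of a matrix is just the Sinkhorn iteration. A sketch, assuming a strictly positive input (function name mine):

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Alternately normalize rows and columns of a positive matrix;
    the iterates converge to a doubly stochastic scaling of A."""
    P = np.array(A, dtype=float)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)  # row normalization
        P /= P.sum(axis=0, keepdims=True)  # column normalization
    return P

P = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
```

The paper's point is that the P maximizing the Bethe objective β(A, P) need not coincide with this scaled matrix, so scaling alone does not certify the Bethe bound.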
### Nearly Optimal Pricing Algorithms for Production Constrained and Laminar Bayesian Selection
[article] · 2018 · *arXiv* pre-print · arXiv:1807.05477v1 · fatcat:iedibio4bbca5iqiadxrkhggni

We study online pricing algorithms for the Bayesian selection problem with production constraints and its generalization to the laminar matroid Bayesian online selection problem. Consider a firm producing (or receiving) multiple copies of different product types over time. The firm can offer the products to arriving buyers, where each buyer is interested in one product type and has a private valuation drawn independently from a possibly different but known distribution. Our goal is to find an adaptive pricing for serving the buyers that maximizes the expected social welfare (or revenue) subject to two constraints. First, at any time the total number of sold items of each type is no more than the number of produced items. Second, the total number of sold items does not exceed the total shipping capacity. This problem is a special case of the well-known matroid Bayesian online selection problem studied in [Kleinberg and Weinberg, 2012], when the underlying matroid is laminar. We give the first Polynomial-Time Approximation Scheme (PTAS) for the above problem, as well as for its generalization to the laminar matroid Bayesian online selection problem when the depth of the laminar family is bounded by a constant. Our approach is based on rounding the solution of a hierarchy of linear programming relaxations that systematically strengthen the commonly used ex-ante linear programming formulation of these problems and approximate the optimum online solution to any degree of accuracy. Our rounding algorithm respects the relaxed constraints of higher levels of the laminar tree only in expectation, and exploits the negative dependence of the selection rule at lower levels to achieve the concentration required to guarantee feasibility with high probability.
### Simply Exponential Approximation of the Permanent of Positive Semidefinite Matrices
[article] · 2017 · *arXiv* pre-print · arXiv:1704.03486v1 · fatcat:bspbnw7jlrdoznuqyiqnoasi7u

We design a deterministic polynomial time c^n approximation algorithm for the permanent of positive semidefinite matrices, where c = e^{γ+1} ≈ 4.84. We write a natural convex relaxation and show that its optimum solution gives a c^n approximation of the permanent. We further show that this factor is asymptotically tight by constructing a family of positive semidefinite matrices.
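For context, the quantity being approximated is the permutation sum; computing it exactly takes time exponential in n even by the best exact methods, which is why a deterministic c^n-approximation is meaningful. A brute-force sketch of the definition only (function name mine):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """per(A) = sum over permutations s of prod_i A[i, s(i)].
    Brute force over all n! permutations -- tiny matrices only."""
    n = len(A)
    return float(sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in permutations(range(n))
    ))
```

The all-ones n x n matrix has permanent n!, and the identity has permanent 1, which makes quick sanity checks easy.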
### Isotropy and Log-Concave Polynomials: Accelerated Sampling and High-Precision Counting of Matroid Bases
[article] · 2020 · *arXiv* pre-print

*Anari*, Liu, Oveis Gharan, and Vinzant [Ana+19] used natural random walks studied in the context of high-dimensional expanders [KM16; DK17; KO20] to show that distributions µ with a log-concave generating ...
### Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons
[article] · 2018 · *arXiv* pre-print · arXiv:1810.11896v1 · fatcat:m6kpnq25tzgjtinm3cykwgzv2i

We analyze linear independence of rank-one tensors produced by tensor powers of randomly perturbed vectors. This enables efficient decomposition of sums of high-order tensors. Our analysis builds upon [BCMV14] but allows for a wider range of perturbation models, including discrete ones. We give an application to recovering assemblies of neurons. Assemblies are large sets of neurons representing specific memories or concepts. The size of the intersection of two assemblies has been shown in experiments to represent the extent to which these memories co-occur or these concepts are related; the phenomenon is called association of assemblies. This suggests that an animal's memory is a complex web of associations, and poses the problem of recovering this representation from cognitive data. Motivated by this problem, we study the following more general question: can we reconstruct the Venn diagram of a family of sets, given the sizes of their ℓ-wise intersections? We show that as long as the family of sets is randomly perturbed, it is enough for the number of measurements to be polynomially larger than the number of nonempty regions of the Venn diagram to fully reconstruct the diagram.
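The linear-independence phenomenon in the first sentence can be checked numerically: second tensor powers v ⊗ v of a few randomly perturbed vectors in R^3 are linearly independent with probability 1. An illustrative sketch (dimensions and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))                        # 4 random vectors in R^3
# Flatten each rank-one tensor v ⊗ v into a row of a 4 x 9 matrix.
T = np.stack([np.outer(v, v).ravel() for v in V])
rank = np.linalg.matrix_rank(T)                    # full row rank => independent
```

The symmetric tensors v ⊗ v live in a 6-dimensional subspace of R^9, so up to 6 generic vectors can remain independent; the paper's contribution is quantitative bounds of this kind under much weaker (e.g. discrete) perturbations.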
### Nash Social Welfare, Matrix Permanent, and Stable Polynomials
[article] · 2016 · *arXiv* pre-print · arXiv:1609.07056v2 · fatcat:gg5uimranzeytkdlni66qzjcfm

We study the problem of allocating m items to n agents subject to maximizing the Nash social welfare (NSW) objective. We write a novel convex programming relaxation for this problem, and we show that a simple randomized rounding algorithm gives a 1/e approximation factor of the objective. Our main technical contribution is an extension of Gurvits's lower bound on the coefficient of the square-free monomial of a degree-m homogeneous stable polynomial on m variables to all homogeneous polynomials. We use this extension to analyze the expected welfare of the allocation returned by our randomized rounding algorithm.
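The NSW objective itself is simply the geometric mean of the agents' utilities; the paper's contribution is the relaxation and rounding, not this formula. A one-function sketch (name mine, log-domain for numerical stability):

```python
import numpy as np

def nash_social_welfare(utilities):
    """Geometric mean of the agents' utilities:
    (prod_i u_i)^(1/n), computed as exp(mean(log u_i))."""
    u = np.asarray(utilities, dtype=float)
    return float(np.exp(np.mean(np.log(u))))
```

Unlike utilitarian welfare (the sum), the geometric mean is driven down by any agent left near zero, which is why NSW is viewed as a fairness-efficiency compromise.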
*Showing results 1 — 15 out of 68 results*