### Revealed Preference Dimension via Matrix Sign Rank [article]

2018 · *arXiv* · pre-print

Given a data-set of consumer behaviour, the Revealed Preference Graph succinctly encodes inferred relative preferences between observed outcomes as a directed graph. Not all graphs can be constructed as revealed preference graphs when the market dimension is fixed. This paper solves the open problem of determining exactly which graphs are attainable as revealed preference graphs in d-dimensional markets. This is achieved via an exact characterization which closely ties the feasibility of the graph to the Matrix Sign Rank of its signed adjacency matrix. The paper also shows that when the preference relations form a partially ordered set with order-dimension k, the graph is attainable as a revealed preference graph in a k-dimensional market.

arXiv:1807.10878v1 · fatcat:wnlucwpvy5dwre2rcddy6aobda
### Smoothed Complexity of 2-player Nash Equilibria [article]

2020 · *arXiv* · pre-print

We prove that computing a Nash equilibrium of a two-player (n × n) game with payoffs in [-1,1] is PPAD-hard (under randomized reductions) even in the smoothed analysis setting, smoothing with noise of constant magnitude. This gives a strong negative answer to conjectures of Spielman and Teng [ST06] and Cheng, Deng, and Teng [CDT09]. In contrast to prior work proving PPAD-hardness after smoothing by noise of magnitude 1/poly(n) [CDT09], our smoothed complexity result is not proved via hardness of approximation for Nash equilibria. This is by necessity, since Nash equilibria can be approximated to constant error in quasi-polynomial time [LMM03]. Our results therefore separate smoothed complexity and hardness of approximation for Nash equilibria in two-player games. The key ingredient in our reduction is the use of a random zero-sum game as a gadget to produce two-player games which remain hard even after smoothing. Our analysis crucially shows that all Nash equilibria of random zero-sum games are far from pure (with high probability), and that this remains true even after smoothing.

arXiv:2007.10857v1 · fatcat:i7tgbussgrcwxk3rva6vzwhrwm
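The smoothed-analysis model in the abstract can be made concrete with a small sketch (illustrative only, not the paper's reduction): an adversary fixes a worst-case bimatrix game with payoffs in [-1,1], and each payoff is then perturbed by independent noise of constant magnitude σ. The noise magnitude, the clipping, and the example game below are assumptions for illustration.

```python
import numpy as np

def smoothed_instance(A, B, sigma, rng):
    """Smoothed-analysis model: perturb an adversarial bimatrix game
    (A, B) with independent uniform noise of constant magnitude sigma,
    clipping payoffs back into [-1, 1]."""
    A_s = np.clip(A + rng.uniform(-sigma, sigma, A.shape), -1, 1)
    B_s = np.clip(B + rng.uniform(-sigma, sigma, B.shape), -1, 1)
    return A_s, B_s

def is_eps_nash(A, B, x, y, eps):
    """(x, y) is an eps-approximate Nash equilibrium if no unilateral
    deviation to a pure strategy gains more than eps."""
    return ((A @ y).max() <= x @ A @ y + eps
            and (x @ B).max() <= x @ B @ y + eps)

# Matching pennies (a zero-sum game); uniform mixing is its exact NE,
# and it remains a 0.2-approximate NE after noise of magnitude 0.1.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x = y = np.array([0.5, 0.5])
A_s, B_s = smoothed_instance(A, -A, sigma=0.1, rng=np.random.default_rng(0))
print(is_eps_nash(A, -A, x, y, eps=1e-9))    # → True
print(is_eps_nash(A_s, B_s, x, y, eps=0.2))  # → True
```

The hardness result says that even on such perturbed instances, no polynomial-time algorithm finds a Nash equilibrium unless PPAD has randomized polynomial-time algorithms.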
### Pandora's Box Problem with Order Constraints [article]

2020 · *arXiv* · pre-print

The Pandora's Box Problem, originally formalized by Weitzman in 1979, models selection from a set of random, alternative options, when evaluation is costly. This includes, for example, the problem of hiring a skilled worker, where only one hire can be made, but the evaluation of each candidate is an expensive procedure. Weitzman showed that the Pandora's Box Problem admits an elegant, simple solution, where the options are considered in decreasing order of reservation value, i.e., the value that reduces to zero the expected marginal gain for opening the box. We study for the first time this problem when order (or precedence) constraints are imposed between the boxes. We show that, despite the difficulty of defining reservation values for the boxes which take into account both in-depth and in-breadth exploration of the various options, greedy optimal strategies exist and can be efficiently computed for tree-like order constraints. We also prove that finding approximately optimal adaptive search strategies is NP-hard when certain matroid constraints are used to further restrict the set of boxes which may be opened, or when the order constraints are given as reachability constraints on a DAG. We complement the above result by giving approximate adaptive search strategies based on a connection between optimal adaptive strategies and non-adaptive strategies with bounded adaptivity gap for a carefully relaxed version of the problem.

arXiv:2002.06968v2 · fatcat:erj35z7gljad3ojpvraapwxoem
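Weitzman's reservation value has a direct computational reading: it is the threshold σ at which the expected marginal gain of opening a box, E[max(v − σ, 0)], equals the opening cost c. A minimal sketch for a finite-support value distribution follows; the example box values, probabilities, and cost are made up for illustration.

```python
def reservation_value(values, probs, cost, lo=-1e6, hi=1e6, tol=1e-9):
    """Weitzman reservation value: the sigma solving
    cost = E[max(v - sigma, 0)], found by bisection (the right-hand
    side is continuous and non-increasing in sigma)."""
    def expected_gain(sigma):
        return sum(p * max(v - sigma, 0.0) for v, p in zip(values, probs))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_gain(mid) > cost:
            lo = mid  # sigma too small: gain still exceeds the cost
        else:
            hi = mid
    return (lo + hi) / 2

# Box pays 10 with probability 1/2 and 0 otherwise; opening costs 1.
# Then cost = 0.5 * (10 - sigma) gives sigma = 8.
print(round(reservation_value([10.0, 0.0], [0.5, 0.5], 1.0), 6))  # → 8.0
```

Weitzman's index policy opens boxes in decreasing order of this value; the paper studies what survives of this rule once precedence constraints restrict the opening order.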
### Performance Metric Elicitation from Pairwise Classifier Comparisons [article]

2019 · *arXiv* · pre-print

Given a binary prediction problem, which performance metric should the classifier optimize? We address this question by formalizing the problem of Metric Elicitation. The goal of metric elicitation is to discover the performance metric of a practitioner, which reflects her innate rewards (costs) for correct (incorrect) classification. In particular, we focus on eliciting binary classification performance metrics from pairwise feedback, where a practitioner is queried to provide relative preference between two classifiers. By exploiting key geometric properties of the space of confusion matrices, we obtain provably query efficient algorithms for eliciting linear and linear-fractional performance metrics. We further show that our method is robust to feedback and finite sample noise.

arXiv:1806.01827v2 · fatcat:d2jguj5qhndrjkmjinge2vnnuq
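To make the setup concrete, here is a sketch of a linear performance metric over confusion-matrix rates, together with the pairwise oracle that elicitation algorithms query. The weight vector stands in for the practitioner's hidden metric and is a made-up example (accuracy), not anything specified by the paper.

```python
import numpy as np

def confusion_rates(y_true, y_pred):
    """Binary confusion matrix as rates: (TP, FP, FN, TN) fractions."""
    t, p = np.asarray(y_true), np.asarray(y_pred)
    return np.array([np.mean((t == 1) & (p == 1)),
                     np.mean((t == 0) & (p == 1)),
                     np.mean((t == 1) & (p == 0)),
                     np.mean((t == 0) & (p == 0))])

def pairwise_oracle(rates_a, rates_b, hidden_weights):
    """Practitioner feedback: True iff classifier A scores higher than
    B under her hidden linear metric, a weighted sum of the rates."""
    return np.dot(hidden_weights, rates_a) > np.dot(hidden_weights, rates_b)

y_true  = [1, 1, 0, 0, 0]
perfect = [1, 1, 0, 0, 0]   # predicts every label correctly
all_pos = [1, 1, 1, 1, 1]   # trivially predicts the positive class
w_acc = np.array([1.0, 0.0, 0.0, 1.0])  # hidden metric: accuracy = TP + TN
print(pairwise_oracle(confusion_rates(y_true, perfect),
                      confusion_rates(y_true, all_pos), w_acc))  # → True
```

The elicitation algorithms in the paper recover the hidden weights using only a small number of such comparisons, by exploiting the geometry of the set of achievable confusion matrices.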
### Testing Consumer Rationality using Perfect Graphs and Oriented Discs [article]

2015 · *arXiv* · pre-print

Given a consumer data-set, the axioms of revealed preference proffer a binary test for rational behaviour. A natural (non-binary) measure of the degree of rationality exhibited by the consumer is the minimum number of data points whose removal induces a rationalisable data-set. We study the computational complexity of the resultant consumer rationality problem in this paper. This problem is, in the worst case, equivalent (in terms of approximation) to the directed feedback vertex set problem. Our main result is to obtain an exact threshold on the number of commodities that separates easy cases and hard cases. Specifically, for two-commodity markets the consumer rationality problem is polynomial time solvable; we prove this via a reduction to the vertex cover problem on perfect graphs. For three-commodity markets, however, the problem is NP-complete; we prove this using a reduction from planar 3-SAT that is based upon oriented-disc drawings.

arXiv:1507.07581v2 · fatcat:kv3zr4cbrnayll6vd5hoq6hfju
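The revealed preference graph underlying the problem can be built directly from price/bundle data; the sketch below uses the standard construction (not code from the paper): observation t reveals its bundle weakly preferred to any bundle that was affordable at t's prices. The consumer rationality problem then asks for the fewest observations to delete so that the resulting graph becomes acyclic, which is exactly a directed feedback vertex set instance.

```python
import numpy as np

def revealed_preference_graph(prices, bundles):
    """Edge t -> s when bundle s was affordable at observation t's
    prices (p_t . x_s <= p_t . x_t), i.e. x_t is directly revealed
    (weakly) preferred to x_s."""
    T = len(prices)
    edges = set()
    for t in range(T):
        budget = float(np.dot(prices[t], bundles[t]))
        for s in range(T):
            if s != t and float(np.dot(prices[t], bundles[s])) <= budget:
                edges.add((t, s))
    return edges

# Each bundle was affordable when the other was chosen, so the data
# reveal a 2-cycle of weak preference between the two observations.
prices  = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
bundles = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(sorted(revealed_preference_graph(prices, bundles)))  # → [(0, 1), (1, 0)]
```

The paper's two-/three-commodity threshold concerns the structure this graph can take when the price vectors live in two versus three dimensions.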
### Testing Consumer Rationality Using Perfect Graphs and Oriented Discs [chapter]

2015 · *Lecture Notes in Computer Science*

Given a consumer data-set, the axioms of revealed preference proffer a binary test for rational behaviour. A natural (non-binary) measure of the degree of rationality exhibited by the consumer is the minimum number of data points whose removal induces a rationalisable data-set. We study the computational complexity of the resultant consumer rationality problem in this paper. We explain how to formulate this problem in terms of a directed revealed preference graph and show, for markets with a large number of commodities, that it is equivalent (in terms of approximation) to the directed feedback vertex set problem. Our main result is to obtain an exact threshold on the number of commodities that separates easy cases and hard cases. Specifically, for two-commodity markets the consumer rationality problem is polynomial time solvable; we prove this via a reduction to the vertex cover problem on perfect graphs. For three-commodity markets, however, the problem is NP-complete; we prove this using a reduction from planar 3-SAT that is based upon oriented-disc drawings.

doi:10.1007/978-3-662-48995-6_14 · fatcat:2lw4qsdltvenrj63n35jlr7omi
### Online Revenue Maximization for Server Pricing [article]

2019 · *arXiv* · pre-print

Efficient and truthful mechanisms to price resources on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent/job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.

arXiv:1906.09880v3 · fatcat:wj4rmjco2neqfjlto4lc45n37e
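The sample-based step can be illustrated with a generic single-price sketch (this is plain empirical revenue maximization for one item, not the paper's interval-dependent pricing scheme): among the sampled agent values, pick the posted price maximizing price times the empirical probability of sale.

```python
def empirical_posted_price(samples):
    """Empirical revenue maximization: among the sampled values, pick
    the posted price p maximizing p * (fraction of samples >= p).
    A generic single-item illustration on made-up sample values."""
    best_p, best_rev = 0.0, 0.0
    for p in sorted(set(samples)):
        rev = p * sum(v >= p for v in samples) / len(samples)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

# Price 10 sells to 1 of 4 sampled agents for expected revenue 2.5,
# beating price 1, which sells to everyone for revenue 1.
print(empirical_posted_price([1, 1, 1, 10]))  # → 10
```

Sample-complexity results of the kind cited in the abstract bound how many such samples are needed before the empirically optimal price is near-optimal against the true distribution.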
### Polynomial Time Algorithms to Find an Approximate Competitive Equilibrium for Chores [article]

2021 · *arXiv* · pre-print

Competitive equilibrium with equal income (CEEI) is considered one of the best mechanisms to allocate a set of items among agents fairly and efficiently. In this paper, we study the computation of CEEI when items are chores that are disliked (negatively valued) by agents, under 1-homogeneous and concave utility functions, which include linear functions as a subcase. It is well-known that, even with linear utilities, the set of CEEI may be non-convex and disconnected, and the problem is PPAD-hard in the more general exchange model. In contrast to these negative results, we design an FPTAS: a polynomial-time algorithm to compute an ϵ-approximate CEEI whose running time depends polynomially on 1/ϵ. Our algorithm relies on the recent characterization due to Bogomolnaia et al. (2017) of the CEEI set as exactly the KKT points of a non-convex minimization problem that have all coordinates non-zero. Due to this non-zero constraint, naive gradient-based methods fail to find the desired local minima, as they are attracted towards zero. We develop an exterior-point method that alternates between guessing non-zero KKT points and maximizing the objective along supporting hyperplanes at these points. We show that this procedure must converge quickly to an approximate KKT point, which can then be mapped to an approximate CEEI; this exterior-point method may be of independent interest. When utility functions are linear, we give explicit procedures for finding the exact iterates, and as a result show that a stronger form of approximate CEEI can be found in polynomial time. Finally, we note that our algorithm extends to the setting of unequal incomes (CE), and to mixed manna with linear utilities, where each agent may like (positively value) some items and dislike (negatively value) others.

arXiv:2107.06649v2 · fatcat:hjcrdvbopfhcbk34b6grdol2iq
### Online Revenue Maximization for Server Pricing

2020 · *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence*

Efficient and truthful mechanisms to price time on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers online revenue maximization for a unit capacity server, when jobs are non-preemptive, in the Bayesian setting: at each time step, one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted price mechanism, which maximizes revenue in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near optimal prices.

doi:10.24963/ijcai.2020/564 · dblp:conf/ijcai/HollerBB20 · fatcat:b67uqdmi4rh3jhmwcunmex4g3e
### Smoothed Efficient Algorithms and Reductions for Network Coordination Games

2020 · *Innovations in Theoretical Computer Science*

We study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case, even when each player has two strategies. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (respectively, quasi-polynomial) smoothed complexity when the underlying game graph is complete (resp. arbitrary), and every player has constantly many strategies. The complete graph assumption is reminiscent of perturbing all parameters, a common assumption in most known polynomial smoothed complexity results. We develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements to the potential. Our approach combines and generalizes the local-max-cut approaches of Etscheid and Röglin (SODA '14; ACM TALG, '17) and Angel, Bubeck, Peres, and Wei (STOC '17) to handle the multi-strategy case. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential games. Further, we define a notion of a smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (k arbitrary) to local-max-bisection. The former, together with the recent result of Bibak, Chandrasekaran, and Carlson (SODA '18), gives an alternate O(n^8)-time smoothed algorithm when k = 2. These reductions extend smoothed efficient algorithms from one problem to another.

doi:10.4230/lipics.itcs.2020.73 · dblp:conf/innovations/BoodaghiansKM20 · fatcat:ku274ytagreeff77sy7eg5lnua
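The sequential-better-response algorithm the smoothed analysis studies can be sketched as follows (an illustrative implementation, not code from the paper). In a network coordination game both endpoints of an edge receive the same edge payoff, so the sum of edge payoffs is an exact potential, and every strict better-response step increases it; the triangle example below is made up.

```python
import numpy as np

def player_payoff(weights, s, u, a):
    """Payoff to player u for playing strategy a (others fixed): the sum
    of incident edge payoffs. weights maps an edge (x, y) to a k x k
    payoff matrix indexed by the strategies of x and y."""
    total = 0.0
    for (x, y), w in weights.items():
        if x == u:
            total += w[a, s[y]]
        elif y == u:
            total += w[s[x], a]
    return total

def better_response_dynamics(weights, n, k, s, max_steps=10_000):
    """Repeatedly let some player with a strictly improving strategy
    switch to a best one; terminates at a pure Nash equilibrium (a
    local maximum of the potential), possibly after many steps."""
    for _ in range(max_steps):
        for u in range(n):
            best = max(range(k),
                       key=lambda a: player_payoff(weights, s, u, a))
            if (player_payoff(weights, s, u, best)
                    > player_payoff(weights, s, u, s[u]) + 1e-12):
                s[u] = best
                break
        else:
            return s  # no player can improve: pure Nash equilibrium
    return s

# Triangle of identity-payoff coordination edges: players are rewarded
# for matching their neighbours, so the dynamics settle on a common strategy.
I = np.eye(2)
weights = {(0, 1): I, (1, 2): I, (0, 2): I}
print(better_response_dynamics(weights, n=3, k=2, s=[0, 1, 0]))  # → [0, 0, 0]
```

Smoothed analysis perturbs the entries of the edge payoff matrices and bounds the probability that any adversarial better-response sequence improves the potential only slowly.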
### The Combinatorial World (of Auctions) According to GARP [chapter]

2015 · *Lecture Notes in Computer Science*

Revealed preference techniques are used to test whether a data set is compatible with rational behaviour. They are also incorporated as constraints in mechanism design to encourage truthful behaviour in applications such as combinatorial auctions. In the auction setting, we present an efficient combinatorial algorithm to find a virtual valuation function with the optimal (additive) rationality guarantee. Moreover, we show that there exists such a valuation function that both is individually rational and is minimum (that is, it is component-wise dominated by any other individually rational, virtual valuation function that approximately fits the data). Similarly, given upper bound constraints on the valuation function, we show how to fit the maximum virtual valuation function with the optimal additive rationality guarantee. In practice, revealed preference bidding constraints are very demanding. We explain how approximate rationality can be used to create relaxed revealed preference constraints in an auction. We then show how combinatorial methods can be used to implement these relaxed constraints. Worst/best-case welfare guarantees that result from the use of such mechanisms can be quantified via the minimum/maximum virtual valuation function.

doi:10.1007/978-3-662-48433-3_10 · fatcat:vik3mz7zhndfhoszt3z7rfzpry
### Smoothed Efficient Algorithms and Reductions for Network Coordination Games [article]

2019 · *arXiv* · pre-print

Worst-case hardness results for most equilibrium computation problems have raised the need for beyond-worst-case analysis. To this end, we study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial) smoothed complexity when the underlying game graph is a complete (resp. arbitrary) graph, and every player has constantly many strategies. We note that the complete graph case is reminiscent of perturbing all parameters, a common assumption in most known smoothed analysis results. Second, we define a notion of smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (with arbitrary k) to local-max-cut up to two flips. The former together with the recent result of [BCC18] gives an alternate O(n^8)-time smoothed algorithm for the 2-strategy case. This notion of reduction allows for the extension of smoothed efficient algorithms from one problem to another. For the first set of results, we develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements on the potential. Our approach combines and generalizes the local-max-cut approaches of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful definition of the matrix which captures the increase in potential, a tighter union bound on adversarial sequences, and balancing it with good enough rank bounds. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential and/or congestion games.

arXiv:1809.02280v4 · fatcat:u5iezhj4j5b4vjuf3fzyq4qgji
### Sparse Tree Search Optimality Guarantees in POMDPs with Continuous Observation Spaces

2020 · *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence*

doi:10.24963/ijcai.2020/568 · dblp:conf/ijcai/BoodaghiansFLMM20 · fatcat:r2kdxpogo5ew5m7dxswdzwctke

Acknowledgements: Shant Boodaghians and Ruta Mehta were partially supported by NSF grant CCF-1750436. ...
### Optimizing Black-box Metrics with Iterative Example Weighting [article]

2021 · *arXiv* · pre-print

arXiv:2102.09492v2 · fatcat:pji2kmya5fejflcaaaayeeladm

Gaurush Hiranandani, Shant Boodaghians, Ruta Mehta, and Oluwasanmi O Koyejo. Multiclass performance metric elicitation. ... Gaurush Hiranandani, Shant Boodaghians, Ruta Mehta, and Oluwasanmi Koyejo. Performance metric elicitation from pairwise classifier comparisons. ...