
Dynamic optimization with side information [article]

Dimitris Bertsimas, Christopher McCord, Bradley Sturt
2020 arXiv   pre-print
The proposed framework uses predictive machine learning methods (such as k-nearest neighbors, kernel regression, and random forests) to weight the relative importance of various data-driven uncertainty sets in a robust optimization formulation.  ...  Examples of viable predictive models include k-nearest neighbors (kNN), kernel regression, classification and regression trees (CART), and random forests (RF).  ...
arXiv:1907.07307v2 fatcat:i6qvkzkpv5bvpa4prmifqyg6qe
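
To make the weighting idea in this entry concrete, the following is a minimal sketch of k-nearest-neighbor weights computed from side information and used to reweight historical samples inside a simple newsvendor decision. It is only an illustrative sketch, not the authors' implementation; the newsvendor setting, the synthetic data, and the helper names (knn_weights, weighted_newsvendor) are assumptions.

```python
import numpy as np

def knn_weights(X, x0, k):
    """Uniform weights on the k training covariates nearest to the query x0."""
    d = np.linalg.norm(X - x0, axis=1)
    nearest = np.argsort(d)[:k]
    w = np.zeros(len(X))
    w[nearest] = 1.0 / k
    return w

def weighted_newsvendor(demands, weights, price=5.0, cost=3.0):
    """Order quantity minimizing the weighted expected newsvendor cost:
    the critical-fractile quantile of the weighted empirical demand distribution."""
    q = (price - cost) / price                 # critical fractile
    order = np.argsort(demands)
    cum = np.cumsum(weights[order])
    return demands[order][np.searchsorted(cum, q)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # side information (covariates)
demand = 10 + 2 * X[:, 0] + rng.normal(size=200)   # demand depends on the covariates
x0 = np.array([1.0, 0.0, 0.0])                     # new observation of side information
w = knn_weights(X, x0, k=20)
print("kNN-weighted order quantity:", weighted_newsvendor(demand, w))
```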

A Predictive Prescription Using Minimum Volume k-Nearest Neighbor Enclosing Ellipsoid and Robust Optimization

Shunichi Ohmori
2021 Mathematics  
The enclosing minimum volume ellipsoid that contains the k-nearest neighbors is used to form the uncertainty set for the robust optimization formulation.  ...  We propose a modeling framework that integrates machine learning and robust optimization.  ...  The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.  ...
doi:10.3390/math9020119 fatcat:uuraqhudc5gjtpbtanqhbf2pau
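
As a rough illustration of the construction named in the title, the sketch below computes the minimum volume enclosing ellipsoid of the k nearest neighbors with Khachiyan's algorithm and reports it in the form (x - c)^T A (x - c) <= 1, which could then serve as an ellipsoidal uncertainty set. It is a generic sketch under assumed data and tolerances, not the paper's implementation.

```python
import numpy as np

def min_volume_enclosing_ellipsoid(P, tol=1e-6, max_iter=1000):
    """Khachiyan's algorithm: returns (A, c) with (x - c)^T A (x - c) <= 1
    for every row x of P (up to the tolerance)."""
    n, d = P.shape
    Q = np.column_stack([P, np.ones(n)]).T           # (d+1) x n lifted points
    u = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        V = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(V), Q)   # leverage scores
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = P.T @ u                                      # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c

rng = np.random.default_rng(1)
data = rng.normal(size=(100, 2))                     # historical samples of the uncertainty
x0 = np.zeros(2)                                     # hypothetical query point
k = 15
neighbors = data[np.argsort(np.linalg.norm(data - x0, axis=1))[:k]]
A, c = min_volume_enclosing_ellipsoid(neighbors)
# {xi : (xi - c)^T A (xi - c) <= 1} can then serve as an ellipsoidal
# uncertainty set in a robust counterpart of the decision problem
print("ellipsoid center:", c)
```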

Sinkhorn Distributionally Robust Optimization [article]

Jie Wang, Rui Gao, Yao Xie
2021 arXiv   pre-print
We study distributionally robust optimization with Sinkhorn distance -- a variant of Wasserstein distance based on entropic regularization.  ...  Journal of Machine Learning Research. Chen R, Paschalidis IC. Selecting optimal decisions via distributionally robust nearest-neighbor regression.  ...  Statistica Neerlandica. Chen R, Paschalidis IC. A robust learning approach for regression models based on distributionally robust optimization.  ...
arXiv:2109.11926v1 fatcat:lcwrltfisjbyznk3743vxhcsme
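
Since this entry defines the Sinkhorn distance as entropically regularized optimal transport, a short sketch of the underlying Sinkhorn-Knopp iteration may be useful. The regularization strength, iteration count, and toy histograms below are arbitrary choices, not values from the paper.

```python
import numpy as np

def sinkhorn_distance(a, b, C, reg=0.1, n_iter=500):
    """Entropic-regularized OT between histograms a and b with cost matrix C,
    solved by Sinkhorn-Knopp matrix scaling."""
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = np.diag(u) @ K @ np.diag(v)    # entropic optimal transport plan
    return np.sum(plan * C)

# two empirical distributions on the real line
x = np.linspace(-2, 2, 50)
a = np.exp(-x**2);        a /= a.sum()
b = np.exp(-(x - 1)**2);  b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost
print("Sinkhorn distance:", sinkhorn_distance(a, b, C))
```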

Distributionally Robust Learning [article]

Ruidi Chen, Ioannis Ch. Paschalidis
2021 arXiv   pre-print
robust multi-output regression and multiclass classification, (iv) optimal decision making that combines distributionally robust regression with nearest-neighbor estimation; (v) distributionally robust  ...  We consider a series of learning problems, including (i) distributionally robust linear regression; (ii) distributionally robust regression with group structure in the predictors; (iii) distributionally  ...  We are thankful to the Network Optimization and Control Lab at Boston University for providing computational resources and expertise for some of the case studies.  ... 
arXiv:2108.08993v1 fatcat:6tsadkhvnrgwtk3etkvjumillq

Distributionally Robust Optimization: A Review [article]

Hamed Rahimian, Sanjay Mehrotra
2019 arXiv   pre-print
A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities.  ...  The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade.  ...  Deng and Sen [85] use regression models such as k-nearest-neighbors regression to learn the conditional distribution of ξ given u.  ...
arXiv:1908.05659v1 fatcat:cliwiafz4vffvj2j3b67uix5nm
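
The review excerpt mentions learning the conditional distribution of ξ given u with k-nearest-neighbors regression; a minimal sketch of that step is shown below, where the conditional distribution is approximated by the empirical distribution of the ξ values attached to the k training points whose u is closest to the query. The variable names and synthetic data are assumptions.

```python
import numpy as np

def knn_conditional_distribution(U, xi, u0, k=25):
    """Approximate the conditional distribution of xi given u = u0 by the
    empirical distribution of xi over the k nearest training covariates."""
    idx = np.argsort(np.linalg.norm(U - u0, axis=1))[:k]
    support = xi[idx]                        # conditional scenarios
    probs = np.full(k, 1.0 / k)              # uniform conditional weights
    return support, probs

rng = np.random.default_rng(2)
U = rng.uniform(-1, 1, size=(500, 2))                   # observed side information
xi = np.sin(3 * U[:, 0]) + 0.1 * rng.normal(size=500)   # uncertain parameter
support, probs = knn_conditional_distribution(U, xi, u0=np.array([0.2, -0.4]))
print("conditional mean estimate:", support @ probs)
```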

Doubly Robust Data-Driven Distributionally Robust Optimization [article]

Jose Blanchet, Yang Kang, Fan Zhang, Fei He, Zhangyi Hu
2017 arXiv   pre-print
Data-driven Distributionally Robust Optimization (DD-DRO) via optimal transport has been shown to encompass a wide range of popular machine learning algorithms.  ...  We show empirically that this additional layer of robustification, which produces a method we call doubly robust data-driven distributionally robust optimization (DD-R-DRO), allows us to enhance the generalization  ...  While it is typically assumed that M and N are given, one may always resort to the k-Nearest-Neighbor (k-NN) method for the generation of these sets.  ...
arXiv:1705.07168v1 fatcat:ijt22l5ljbdtlen77fgyld2jfa

Data-driven Optimal Cost Selection for Distributionally Robust Optimization [article]

Jose Blanchet, Yang Kang, Fan Zhang, Karthyek Murthy
2019 arXiv   pre-print
, among many others, can be represented exactly as distributionally robust optimization (DRO) problems.  ...  Recently, (Blanchet, Kang, and Murthy 2016, and Blanchet and Kang 2017) showed that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression  ...  Introduction A Distributionally Robust Optimization (DRO) problem takes the general form (1) $\min_{\beta} \max_{P \in \mathcal{U}_{\delta}} \mathbb{E}_{P}[l(X, Y, \beta)]$, where β is a decision variable, (X, Y) is a random element, and l(x,  ...
arXiv:1705.07152v3 fatcat:kz6lqvk67bdwjbbsmpanvemsgi
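
A well-known instance of the equivalence cited in this entry is that Wasserstein DRO of a linear regression loss reduces to a norm-regularized (square-root Lasso type) problem. The sketch below solves such a penalized objective numerically; it is a schematic under an assumed radius delta and synthetic data, not the paper's formulation or code.

```python
import numpy as np
from scipy.optimize import minimize

def sqrt_lasso_objective(beta, X, y, delta):
    """Square-root Lasso objective: root mean squared residual plus an l1
    penalty scaled by the Wasserstein radius delta (schematic form of the
    DRO-regularization equivalence)."""
    resid = y - X @ beta
    return np.sqrt(np.mean(resid ** 2)) + delta * np.linalg.norm(beta, 1)

rng = np.random.default_rng(3)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_beta = np.zeros(d)
true_beta[:3] = [1.5, -2.0, 0.7]
y = X @ true_beta + 0.3 * rng.normal(size=n)

res = minimize(sqrt_lasso_objective, np.zeros(d), args=(X, y, 0.05),
               method="Powell")   # derivative-free, since the l1 term is nonsmooth
print("estimated beta:", np.round(res.x, 2))
```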

Careful! Training Relevance is Real [article]

Chenbo Shi, Mohsen Emadikhiav, Leonardo Lozano, David Bergman
2022 arXiv   pre-print
of the predictive models become decision variables in the optimization problem.  ...  Despite a recent surge in publications in this area, one aspect of this decision-making pipeline that has been largely overlooked is training relevance, i.e., ensuring that solutions to the optimization  ...  K-Nearest Neighbors Constraints Our second class of constraints considers the distance between the solution of the optimization model and the K closest points from the input data, denoted as its K-nearest  ... 
arXiv:2201.04429v1 fatcat:cavlflbdcrc4tlblpc2to7b5xa
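
The k-nearest-neighbor idea in this entry can be illustrated by a small feasibility check that flags candidate solutions lying far from their K closest training points. This is only a sketch of the general idea under an assumed threshold; the paper itself embeds such constraints inside the optimization model.

```python
import numpy as np

def knn_relevance_gap(candidate, X_train, K=10):
    """Average distance from a candidate decision to its K nearest training
    points; a small value suggests the learned predictor is queried in a
    region supported by data."""
    d = np.sort(np.linalg.norm(X_train - candidate, axis=1))[:K]
    return float(d.mean())

rng = np.random.default_rng(4)
X_train = rng.normal(size=(300, 2))
inside = np.array([0.1, -0.2])     # candidate near the training data
outside = np.array([6.0, 6.0])     # candidate far from the training data
threshold = 1.0                    # assumed relevance threshold
for cand in (inside, outside):
    gap = knn_relevance_gap(cand, X_train)
    print(cand, "gap =", round(gap, 2), "relevant =", gap <= threshold)
```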

Distributionally robust risk map for learning-based motion planning and control: A semidefinite programming approach [article]

Astghik Hakobyan, Insoon Yang
2021 arXiv   pre-print
robust optimization.  ...  This paper proposes a novel safety specification tool, called the distributionally robust risk map (DR-risk map), for a mobile robot operating in a learning-enabled environment.  ...  It is observed that the length to $q_{new}$ and the risk are bigger via other neighbors than via $q_{nearest}$.  ...
arXiv:2105.00657v1 fatcat:nx65h2hpojh6ldrnblqcr6czxq

Adversarial Regression with Doubly Non-negative Weighting Matrices [article]

Tam Le and Truyen Nguyen and Makoto Yamada and Jose Blanchet and Viet Anh Nguyen
2021 arXiv   pre-print
Many machine learning tasks that involve predicting an output response can be solved by training a weighted regression model.  ...  In this paper, we propose a novel and coherent scheme for kernel-reweighted regression by reparametrizing the sample weights using a doubly non-negative matrix.  ...  Coping with label shift via distributionally robust optimisation. arXiv preprint arXiv:2010.12230, 2020. [42] Geoffrey S Watson. Smooth regression analysis.  ... 
arXiv:2109.14875v1 fatcat:fqitdxmt2rdwdkdludfuskm4xu
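
Since the abstract describes training a weighted regression model from reparametrized sample weights, a minimal sketch of the downstream step, weighted least squares with given non-negative sample weights, is shown below. How the weights themselves are produced (the doubly non-negative matrix construction) is the paper's contribution and is not reproduced here; the Gaussian-kernel weights in the sketch are a placeholder assumption.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Closed-form solution of min_beta sum_i w_i (y_i - x_i' beta)^2."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 3))
y = X @ np.array([1.0, -1.0, 0.5]) + 0.2 * rng.normal(size=150)
x0 = np.zeros(3)                                         # query point
w = np.exp(-np.linalg.norm(X - x0, axis=1) ** 2 / 2.0)   # placeholder kernel weights
print("locally weighted coefficients:", weighted_least_squares(X, y, w))
```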

Distributionally Robust Local Non-parametric Conditional Estimation [article]

Viet Anh Nguyen and Fan Zhang and Jose Blanchet and Erick Delage and Yinyu Ye
2020 arXiv   pre-print
We show that despite being generally intractable, the local estimator can be efficiently found via convex optimization under broadly applicable settings, and it is robust to the corruption and heterogeneity  ...  To alleviate these issues, we propose a new distributionally robust estimator that generates non-parametric local estimates by minimizing the worst-case conditional expected loss over all adversarial distributions  ...  Let $\gamma$ be the $k_N$-th smallest value of $D_X(x_0, x_i)$, then $\beta_N$ that solves (2) recovers the $k_N$-nearest neighbor regression estimator.  ...
arXiv:2010.05373v1 fatcat:kc2cszu7ojf3zmwqbc5lwhlkvq

RIFLE: Robust Inference from Low Order Marginals [article]

Sina Baharlouei, Kelechi Ogudu, Sze-chuan Suen, Meisam Razaviyayn
2021 arXiv   pre-print
Our framework, RIFLE (Robust InFerence via Low-order moment Estimations), estimates low-order moments with corresponding confidence intervals to learn a distributionally robust model.  ...  We specialize our framework to linear regression and normal discriminant analysis, and we provide convergence and performance guarantees. This framework can also be adapted to impute missing data.  ...  Robust Inference via Estimating Low-order Moments RIFLE is a distributionally robust optimization (DRO) framework based on estimated low-order marginals.  ... 
arXiv:2109.00644v2 fatcat:hfupzkrlfrh37ene4tkhupedwa
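
To make the phrase "inference from low-order marginals" concrete, the sketch below estimates pairwise second moments from data with missing entries (available-case averaging) and plugs them into the normal equations of linear regression. It omits the confidence intervals and the distributionally robust formulation that are the actual contributions of RIFLE; the variable names and missingness pattern are assumptions.

```python
import numpy as np

def pairwise_second_moments(Z):
    """E[z_i z_j] estimated from rows where both coordinates are observed (NaN = missing)."""
    d = Z.shape[1]
    M = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            mask = ~np.isnan(Z[:, i]) & ~np.isnan(Z[:, j])
            M[i, j] = np.mean(Z[mask, i] * Z[mask, j])
    return M

rng = np.random.default_rng(6)
n, d = 1000, 4
X = rng.normal(size=(n, d))
y = X @ np.array([0.5, -1.0, 0.0, 2.0]) + 0.1 * rng.normal(size=n)
Z = np.column_stack([X, y])
Z[rng.random(Z.shape) < 0.3] = np.nan        # 30% of entries missing at random

M = pairwise_second_moments(Z)
beta = np.linalg.solve(M[:d, :d], M[:d, d])  # normal equations from estimated moments
print("moment-based regression coefficients:", np.round(beta, 2))
```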

On Data-Driven Prescriptive Analytics with Side Information: A Regularized Nadaraya-Watson Approach [article]

Prateek R. Srivastava, Yijie Wang, Grani A. Hanasusanto, Chin Pang Ho
2021 arXiv   pre-print
We adopt ideas from distributionally robust optimization to obtain tractable formulations.  ...  We consider generic stochastic optimization problems in the presence of side information which enables a more insightful decision.  ...  The resulting data-driven decision is shown to be consistent and asymptotically optimal, and finite-sample guarantees are developed for k-nearest neighbors (KNN)-based approaches.  ... 
arXiv:2110.04855v2 fatcat:darlvzhiqrafhmcafz3e4rsgnm
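
The Nadaraya-Watson weights at the heart of this entry can be written down in a few lines; the sketch below computes the classical (unregularized) estimator with a Gaussian kernel so that the quantity being regularized in the paper is visible. Bandwidth, kernel choice, and data are assumptions.

```python
import numpy as np

def nadaraya_watson(X, y, x0, bandwidth=0.3):
    """Classical Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    sq_dist = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    w /= w.sum()                         # kernel weights summing to one
    return w @ y

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.cos(2 * X[:, 0]) + 0.1 * rng.normal(size=400)
x0 = np.array([0.5])
print("NW estimate at x0:", nadaraya_watson(X, y, x0), "truth:", np.cos(1.0))
```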

The Selective Labels Problem

Himabindu Lakkaraju, Jon Kleinberg, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan
2017 Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '17  
However, there are many domains where the data is selectively labeled in the sense that the observed outcomes are themselves a consequence of the existing choices of the human decision-makers.  ...  Here we propose a novel framework for evaluating the performance of predictive models on selectively labeled data.  ...  such as gradient boosted trees, logistic regression, nearest neighbor matching based on feature similarity, propensity score matching, and doubly robust estimation to impute all the missing outcomes in  ... 
doi:10.1145/3097983.3098066 pmid:29780658 pmcid:PMC5958915 dblp:conf/kdd/LakkarajuKLLM17 fatcat:5vvf2utfijho3psczg4v7ateoi

Distributional Generalization: A New Kind of Generalization [article]

Preetum Nakkiran, Yamini Bansal
2020 arXiv   pre-print
We give empirical evidence for these conjectures across a variety of domains in machine learning, including neural networks, kernel machines, and decision trees.  ...  Decision trees. Thus, it may not be surprising that decision trees behave similarly to 1-Nearest-Neighbors.  ...  Random forests (i.e., ensembles of interpolating decision trees) [Breiman, 2001]. 3. k-nearest neighbors (roughly "ensembles" of 1-Nearest-Neighbors) [Fix and Hodges, 1951].  ...
arXiv:2009.08092v2 fatcat:k2addt5uwncg5l5w5clz7ecdhq
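
The claim that interpolating decision trees behave like 1-nearest-neighbor classifiers is easy to probe on toy data; the sketch below compares the two on a noisy two-moons problem. This is an informal illustration, not one of the paper's experiments, and the dataset, noise level, and agreement metric are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# noisy labels force both models to interpolate idiosyncratic training points
X_train, y_train = make_moons(n_samples=500, noise=0.3, random_state=0)
X_test, _ = make_moons(n_samples=500, noise=0.3, random_state=1)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # grown to zero training error
knn1 = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

agreement = np.mean(tree.predict(X_test) == knn1.predict(X_test))
print("train accuracy (tree, 1-NN):", tree.score(X_train, y_train), knn1.score(X_train, y_train))
print("test-time agreement between tree and 1-NN:", agreement)
```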
Showing results 1 — 15 out of 105 results