
From Predictions to Decisions: Using Lookahead Regularization [article]

Nir Rosenfeld, Sophie Hilgard, Sai Srivatsa Ravindranath, David C. Parkes
2020 arXiv   pre-print
Machine learning is a powerful tool for predicting human-related outcomes, from credit scores to heart attack risks.  ...  For this, we introduce lookahead regularization which, by anticipating user actions, encourages predictive models to also induce actions that improve outcomes.  ...  To solve this, lookahead regularization makes use of an uncertainty model that provides confidence intervals around decision outcomes.  ... 
arXiv:2006.11638v2 fatcat:gcnmhr5r4nf63pgjzu2fdg4o3u

Optimal Sparse Decision Trees [article]

Xiyang Hu, Cynthia Rudin, Margo Seltzer
2020 arXiv   pre-print
The problem that has plagued decision tree algorithms since their inception is their lack of optimality, or lack of guarantees of closeness to optimality: decision tree algorithms are often greedy or myopic  ...  Hardness of decision tree optimization is both a theoretical and practical obstacle, and even careful mathematical programming approaches have not been able to solve these problems efficiently.  ...  We use the same notation as in the original BinOCT formulation [20]. Figure 7 shows the trees generated by regularized BinOCT and OSDT when using the same regularization parameter λ = 0.007.  ... 
arXiv:1904.12847v5 fatcat:a2dz5i2szffgrd5jbirf3qaaom

Scaling up Heuristic Planning with Relational Decision Trees

T. De la Rosa, S. Jimenez, R. Fuentetaja, D. Borrajo
2011 The Journal of Artificial Intelligence Research  
The first consists of using the resulting classifier as an action policy; the second consists of applying the classifier to generate lookahead states within a Best First Search algorithm.  ...  In particular, we define the task of learning search control for heuristic planning as a relational classification task, and we use an off-the-shelf relational classification tool to address this learning  ...  lookahead states from decision trees.  ... 
doi:10.1613/jair.3231 fatcat:mhei6rvzebgnjnfazakhwf3vfi

Online convex optimization with ramp constraints

Masoud Badiei, Na Li, Adam Wierman
2015 54th IEEE Conference on Decision and Control (CDC)  
We study a novel variation of online convex optimization where the algorithm is subject to ramp constraints limiting the distance between consecutive actions.  ...  Our contribution is a set of asymptotically tight bounds on the worst-case performance, as measured by the competitive difference, of a variant of Model Predictive Control termed Averaging Fixed  ...  a regularizer to the objective.  ... 
doi:10.1109/cdc.2015.7403279 dblp:conf/cdc/BadieiLW15 fatcat:oyip6twq5rhk5gwmgqedheqsjy

Real time optimization of systems with fast and slow dynamics using a lookahead strategy

Joakim Rostrup Andersen, Thiago Lima Silva, Lars Imsland, Alexey Pavlov
2020 59th IEEE Conference on Decision and Control (CDC)  
In this paper, we propose to extend RTO with a lookahead strategy by introducing a predictor to capture the effect of changing the current controls on the long-term objective.  ...  Existing dynamic optimal control methods might become computationally infeasible due to the fine discretization required to capture the fast dynamics.  ...  The computational efficiency of the lookahead method allows its use in an RTO fashion in daily operations, while, in a similar manner, using the multiple shooting approach in a Model Predictive Control  ... 
doi:10.1109/cdc42340.2020.9304460 fatcat:hwz3u4qzlrc3zag77vvxxcimvi

Generalized and Scalable Optimal Sparse Decision Trees [article]

Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer
2020 arXiv   pre-print
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning.  ...  These new techniques have the potential to trigger a paradigm shift where it is possible to construct sparse decision trees to efficiently optimize a variety of objective functions without relying on greedy  ...  lookahead.  ... 
arXiv:2006.08690v3 fatcat:56636lgkevgllnqax4nashpmz4

On neuro-wavelet modeling

F. Murtagh, J.L. Starck, O. Renaud
2004 Decision Support Systems  
Experimentally, we show that multiresolution approaches can outperform the traditional single-resolution approach to modeling and prediction.  ...  We survey a number of applications of the wavelet transform in time series prediction.  ...  Acknowledgements: We are grateful to M. Savage for the futures data.  ... 
doi:10.1016/s0167-9236(03)00092-7 fatcat:biuu5nabingvndr3qwnxf6q2sy

Markov Decision Process for Video Generation [article]

Vladyslav Yushchenko, Nikita Araslanov, Stefan Roth
2019 arXiv   pre-print
To address this, we reformulate the problem of video generation as a Markov Decision Process (MDP).  ...  The underlying idea is to represent motion as a stochastic process with an infinite forecast horizon to overcome the fixed length limitation and to mitigate the presence of temporal artifacts.  ...  As regularization, we only use weight decay of 10^-5.  ... 
arXiv:1909.12400v1 fatcat:iqf5jzevj5bbjoulgx2oxsgxry

LazyBum: Decision tree learning using lazy propositionalization [article]

Jonas Schouterden, Jesse Davis, Hendrik Blockeel
2019 arXiv   pre-print
LazyBum interleaves OneBM's feature construction method with a decision tree learner. This learner both uses and guides the propositionalization process.  ...  The resulting table can next be used by any propositional learner. This approach makes it possible to apply a wide variety of learning methods to relational data.  ...  To compare predictive accuracy for these methods with LazyBum, we learn a single decision tree on their output tables. For OneBM and MODL, we used WEKA's C4.5 decision tree implementation.  ... 
arXiv:1909.05044v1 fatcat:cshmqw6wmfefxhpjfmrccd4kua

Thinking fast and slow: Optimization decomposition across timescales

Gautam Goel, Niangjun Chen, Adam Wierman
2017 IEEE 56th Annual Conference on Decision and Control (CDC)  
The framework is analogous to how the network utility maximization framework uses optimization decomposition to distribute a global control problem across independent controllers, each of which solves  ...  react slowly using a more global view.  ...  The term (B^f)^{-1}(y_t − A y_{t-1}) acts as a regularizer, penalizing choices that differ from the previous choice y_{t-1} under the dynamics of A.  ... 
doi:10.1109/cdc.2017.8263834 dblp:conf/cdc/GoelCW17 fatcat:54unxwqucfhhxbjcweiwsvvbna

LL(*)

Terence Parr, Kathleen Fisher
2011 Proceedings of the 32nd ACM SIGPLAN conference on Programming language design and implementation - PLDI '11  
At parse-time, decisions gracefully throttle up from conventional fixed k ≥ 1 lookahead to arbitrary lookahead and, finally, fail over to backtracking depending on the complexity of the parsing decision.  ...  This paper introduces the LL(*) parsing strategy and an associated grammar analysis algorithm that constructs LL(*) parsing decisions from ANTLR grammars.  ...  The key idea behind LL(*) parsers is to use regular expressions rather than a fixed constant or backtracking with a full parser to do lookahead.  ... 
doi:10.1145/1993498.1993548 dblp:conf/pldi/ParrF11 fatcat:uqitvlxdfrhs3j2sd4grb4p47u

Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs [article]

Jianzhun Du, Joseph Futoma, Finale Doshi-Velez
2020 arXiv   pre-print
Our models accurately characterize continuous-time dynamics and enable us to develop high-performing policies using a small amount of data.  ...  We present two elegant solutions for modeling continuous-time dynamics, in a novel model-based reinforcement learning (RL) framework for semi-Markov decision processes (SMDPs), using neural ordinary differential  ...  Acknowledgement: We thank Andrew Ross, Weiwei Pan, Melanie Pradier and other members of the Harvard Data to Actionable Knowledge lab for helpful discussion and feedback.  ... 
arXiv:2006.16210v2 fatcat:ap5ed27aqra37g62ghebwudy6q

Information Theory of Decisions and Actions [chapter]

Naftali Tishby, Daniel Polani
2010 Perception-Action Cycle  
Using a graphical model, we derive a recursive Bellman optimality equation for information measures, in analogy to Reinforcement Learning; from this, we obtain new algorithms for calculating the optimal  ...  In particular, decision and action sequences turn out to be directly analogous to codes in communication, and their complexity, the minimal number of (binary) decisions required for reaching a goal, directly  ...  Acknowledgement: The authors would like to thank Jonathan Rubin for carrying out the simulations and the preparation of the corresponding diagrams.  ... 
doi:10.1007/978-1-4419-1452-1_19 fatcat:erwda7i7gneuzoa4vfp2jj4dhu

The Decision to Lever

Robert M. Anderson, Stephen W. Bianchi, Lisa R. Goldberg
2013 Social Science Research Network  
We present a simple model that completely describes the performance of a levered strategy and facilitates the decision to lever.  ...  Empirically, this covariance tends to be large in magnitude despite the fact that its underlying correlation is close to zero.  ...  This decomposition can be used to support decisions about when and how to lever.  ... 
doi:10.2139/ssrn.2292557 fatcat:s4lms4wpzve2ldovv46vzunvjq

Optimizing Biomanufacturing Harvesting Decisions under Limited Historical Data [article]

Bo Wang, Wei Xie, Tugce Martagan, Alp Akcay, Bram van Ravenstein
2021 arXiv   pre-print
A fermentation process uses living cells with complex biological mechanisms, and this leads to high variability in the process outputs.  ...  Our case studies at MSD Animal Health demonstrate that the proposed model and solution approach improve the harvesting decisions in real life by achieving substantially higher average output from a fermentation  ...  We also thank Oscar Repping from MSD Animal Health for his continuous support during the collaboration.  ... 
arXiv:2101.03735v3 fatcat:4ayztzm6iza4zatvvo7ckgq6mi