7 Hits in 3.1 sec

HyperPower: Power- and Memory-Constrained Hyper-Parameter Optimization for Neural Networks [article]

Dimitrios Stamoulis, Ermao Cai, Da-Cheng Juan, Diana Marculescu
2017 arXiv   pre-print
In this work, we propose HyperPower, a framework that enables efficient Bayesian optimization and random search in the context of power- and memory-constrained hyper-parameter optimization for NNs running  ...  While selecting the hyper-parameters of Neural Networks (NNs) has so far been treated as an art, the emergence of more complex, deeper architectures poses increasing challenges to designers and  ...  We employ hyper-parameter optimization on variants of the AlexNet network for MNIST and CIFAR-10, with six and thirteen hyper-parameters, respectively.  ... 
arXiv:1712.02446v1 fatcat:5yxj3wlokjeabiju2t4fpqvxma
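
The snippet above describes HyperPower's key trick: predicted power and memory act as a priori feasibility filters, so no training time is spent on configurations that would violate the budget. Below is a minimal sketch of that idea; the cost models, budgets, and configuration space are invented for illustration and are not the paper's actual code.

```python
import random

POWER_BUDGET_W = 10.0     # assumed device power budget
MEMORY_BUDGET_MB = 512.0  # assumed device memory budget

def predict_power(config):
    # Stand-in for a learned power model, evaluated before any training.
    return 0.002 * config["filters"] * config["layers"]

def predict_memory(config):
    # Stand-in for a learned memory model.
    return 0.05 * config["filters"] * config["layers"]

def train_and_eval(config):
    # Placeholder for the expensive objective (e.g., validation accuracy).
    return random.random()

def constrained_random_search(n_trials=100):
    best_config, best_acc = None, -1.0
    for _ in range(n_trials):
        config = {"layers": random.randint(2, 8),
                  "filters": random.choice([32, 64, 128, 256])}
        # A priori constraint check: infeasible points are skipped
        # without ever paying for a training run.
        if (predict_power(config) > POWER_BUDGET_W or
                predict_memory(config) > MEMORY_BUDGET_MB):
            continue
        acc = train_and_eval(config)
        if acc > best_acc:
            best_config, best_acc = config, acc
    return best_config, best_acc
```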

Hardware-Aware Machine Learning: Modeling and Optimization [article]

Diana Marculescu, Dimitrios Stamoulis, Ermao Cai
2018 arXiv   pre-print
Furthermore, DL practitioners are challenged with the task of designing the DNN model, i.e., of tuning the hyper-parameters of the DNN architecture, while optimizing for both accuracy of the DL model and  ...  Therefore, state-of-the-art methodologies have proposed hardware-aware hyper-parameter optimization techniques.  ...  To enable a priori power and memory constraint evaluations that are decoupled from the expensive objective evaluation, HyperPower models power and memory consumption of a network as a function of the  ... 
arXiv:1809.05476v1 fatcat:gmsfdh6rijhklipkzlcj23wl4i
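
The snippet mentions modeling power and memory consumption as functions of the network's hyper-parameters. A common, simple instantiation is a least-squares linear model over hyper-parameter-derived features; the sketch below uses fabricated measurements and assumed features (layer count, filter count, MACs), not the paper's data.

```python
import numpy as np

# Rows = profiled networks; columns = features derived from hyper-parameters
# (e.g., #layers, #filters, giga-MACs). All numbers are fabricated.
X = np.array([[2, 32, 1.0],
              [4, 64, 3.5],
              [6, 128, 9.0],
              [8, 256, 20.0]])
p = np.array([1.1, 2.9, 6.8, 14.5])  # measured power in watts (fabricated)

# Least-squares fit of the linear model p ~ X @ w + b.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, p, rcond=None)
w, b = coef[:-1], coef[-1]

def predict_power(features):
    # Cheap a priori estimate, decoupled from any training run.
    return float(np.dot(features, w) + b)
```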

Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models [article]

Lasse F. Wolff Anthony, Benjamin Kanding, Raghavendra Selvan
2020 arXiv   pre-print
We hope this will promote responsible computing in ML and encourage research into energy-efficient deep neural networks.  ...  In this work, we present Carbontracker, a tool for tracking and predicting the energy and carbon footprint of training DL models.  ...  The authors also thank the anonymous reviewers and early users of carbontracker for their insightful feedback.  ... 
arXiv:2007.03051v1 fatcat:wdgszvdg4nbwpmjwtbeku7kq3q
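
For orientation, Carbontracker is distributed as a Python package that wraps a training loop at epoch granularity and extrapolates the total footprint from the first monitored epochs. The sketch below follows our reading of the tool's documented interface; treat the exact signatures as an assumption and check the project's README.

```python
from carbontracker.tracker import CarbonTracker

max_epochs = 10
# Monitors early epochs, then predicts total energy/carbon for the run.
tracker = CarbonTracker(epochs=max_epochs)

for epoch in range(max_epochs):
    tracker.epoch_start()
    # ... one epoch of training goes here ...
    tracker.epoch_end()

tracker.stop()  # finalize logging of the measured footprint
```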

On-Device Machine Learning: An Algorithms and Learning Theory Perspective [article]

Sauptik Dhar, Junyao Guo, Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah
2020 arXiv   pre-print
This survey finds a middle ground by reformulating the problem of on-device learning as resource constrained learning where the resources are compute and memory.  ...  Given this surge in interest, a comprehensive survey of the field from a device-agnostic perspective sets the stage for both understanding the state-of-the-art and for identifying open challenges and future  ...  Tegra TX1, Caffe inference: power and memory predicted from layer-configuration hyper-parameters by a linear model, RMSPE < 7% (Yang et al.)  ... 
arXiv:1911.00623v2 fatcat:fokmxmy3x5g7ne7yggm4zpyqta
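
The table row quoted in the snippet reports an RMSPE (root mean square percentage error) below 7% for a linear predictor. As a quick reference, this is how that metric is typically computed; the sample values here are made up.

```python
import numpy as np

def rmspe(y_true, y_pred):
    # Root Mean Square Percentage Error, in percent.
    return 100.0 * np.sqrt(np.mean(((y_true - y_pred) / y_true) ** 2))

measured = np.array([1.1, 2.9, 6.8, 14.5])   # fabricated measurements
predicted = np.array([1.0, 3.0, 7.1, 14.0])  # fabricated predictions
print(rmspe(measured, predicted))  # prints the error in %
```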

A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [article]

Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, Sebastian Rojas Gonzalez
2021 arXiv   pre-print
Several methods have been developed to perform HPO; most of these are focused on optimizing one performance measure (usually an error-based measure), and the literature on such single-objective HPO problems  ...  , and approaches using a mixture of both.  ...  HyperPower: Power- and memory-constrained hyper-parameter optimization for neural networks. 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE) (pp. 19–24).  ... 
arXiv:2111.13755v2 fatcat:q2qtofihtzev5mose5aj7odfzm
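
Multi-objective HPO returns a Pareto front rather than a single best configuration. The helper below is a generic non-dominated filter (minimizing every objective), included only to make that notion concrete; it is not taken from the survey.

```python
def pareto_front(points):
    """Return the non-dominated subset of (config, objectives) pairs.

    `objectives` is a tuple to minimize, e.g. (validation_error, energy_J).
    """
    front = []
    for i, (cfg_a, obj_a) in enumerate(points):
        dominated = False
        for j, (_, obj_b) in enumerate(points):
            if i == j:
                continue
            # b dominates a: no worse everywhere, strictly better somewhere.
            if (all(b <= a for a, b in zip(obj_a, obj_b)) and
                    any(b < a for a, b in zip(obj_a, obj_b))):
                dominated = True
                break
        if not dominated:
            front.append((cfg_a, obj_a))
    return front

# Example: trade-off between error and energy (fabricated numbers).
print(pareto_front([("a", (0.10, 5.0)), ("b", (0.08, 9.0)), ("c", (0.12, 6.0))]))
```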

Hardware-Aware AutoML for Efficient Deep Learning Applications

Dimitrios Stamoulis
2020
Deep Neural Networks (DNNs) have been traditionally designed by human experts in a painstaking and expensive process, dubbed by many researchers to be more of an art than a science.  ...  Moreover, we formulate the design of adaptive DNNs as an AutoML task and we jointly solve for the DNN architectures and the adaptive execution scheme, reducing energy consumption by up to 6×.  ...  Hardware-constrained Bayesian optimization: The power P(·) and the memory M(·) consumption of a DNN (during inference) have been traditionally seen as key impediments to deploying DNNs to low-power devices  ... 
doi:10.1184/r1/12026319 fatcat:gnu6k3gxwfgdhpfrjsvmxcpul4
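
The snippet introduces power P(·) and memory M(·) as constraints on the optimization. A standard way to fold such constraints into Bayesian optimization is to weight expected improvement by the posterior probability of feasibility; whether the thesis uses exactly this weighting is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # Standard EI for maximization under a Gaussian posterior N(mu, sigma^2).
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def constrained_acquisition(mu, sigma, best,
                            p_mu, p_sigma, p_max,
                            m_mu, m_sigma, m_max):
    # EI weighted by Pr[P(x) <= p_max] and Pr[M(x) <= m_max], each taken
    # under a Gaussian posterior for the respective constraint model.
    prob_power_ok = norm.cdf((p_max - p_mu) / p_sigma)
    prob_memory_ok = norm.cdf((m_max - m_mu) / m_sigma)
    return expected_improvement(mu, sigma, best) * prob_power_ok * prob_memory_ok
```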

Handling uncertainty in hydrologic analysis and drought risk assessment using Dempster-Shafer theory

Amin H. Zargar Yaghoobi
2012
Four DST combination rules are used for conflict resolution, and the results unanimously indicate a high possibility of drought.  ...  In order to handle uncertainty in DRA, this thesis uses the Dempster-Shafer theory (DST) which provides a unified platform for modeling and propagating uncertainty in the forms of variability, conflict  ...  In DSmT the power set is called the hyper-power set, denoted by D^Θ. The number of outcomes from a hyper-power set is considerably higher than that of a power set with the same cardinality.  ... 
doi:10.14288/1.0073510 fatcat:r672v5ytmvcvpfcnyoysy7vfhi
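
To make the cardinality remark concrete: for a frame Θ = {θ1, θ2}, the power set 2^Θ has 2² = 4 elements, while the hyper-power set D^Θ (closed under both union and intersection) has 5; for |Θ| = 3 the counts are 8 versus 19, following the Dedekind-number sequence. A tiny hand-listed enumeration for the two-element case, for illustration only:

```python
# Frame of discernment with two outcomes, θ1 and θ2.
power_set = ["∅", "{θ1}", "{θ2}", "{θ1, θ2}"]          # 2^Θ: 4 elements
hyper_power_set = ["∅", "θ1∩θ2", "θ1", "θ2", "θ1∪θ2"]  # D^Θ: 5 elements

print(len(power_set), len(hyper_power_set))  # -> 4 5
```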