Sparsifying to optimize over multiple information sources: an augmented Gaussian process based algorithm

Antonio Candelieri, Francesco Archetti
2021 Structural and Multidisciplinary Optimization
While most of the current approaches fuse the Gaussian processes (GPs) modelling each source, we propose to use GP sparsification to select only "reliable" function evaluations performed over all the sources  ...  a different cost and the level of approximation (aka fidelity) of each source can change over the search space.  ...  Acknowledgements We greatly acknowledge the DEMS Data Science Lab, Department of Economics Management and Statistics (DEMS), University of Milano-Bicocca, for supporting this work by providing computational  ... 
doi:10.1007/s00158-021-02882-7 fatcat:mr2cfcz4h5cgthohk4o6hwojsm

Green Machine Learning via Augmented Gaussian Processes and Multi-Information Source Optimization [article]

Antonio Candelieri, Riccardo Perego, Francesco Archetti
2020 arXiv pre-print
costs and different "fidelity", typically smaller portions of a large dataset.  ...  The Augmented Gaussian Process is trained using only "reliable" information among available sources. A novel acquisition function is defined according to the Augmented Gaussian Process.  ...  Acknowledgements We greatly acknowledge the DEMS Data Science Lab, Department of Economics Management and Statistics (DEMS), for supporting this work by providing computational resources.  ... 
arXiv:2006.14233v1 fatcat:5ivwye2n3nftdp44qj2kgzpila

Cardinality Minimization, Constraints, and Regularization: A Survey [article]

Andreas M. Tillmann, Daniel Bienstock, Andrea Lodi, Alexandra Schwartz
2021 arXiv pre-print
We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function.  ...  We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, or machine  ...  are provided for an application of variable selection in logistic regression.  ... 
arXiv:2106.09606v1 fatcat:jmsvxuqycvfz3gha4jv4f5seze

QDataset: Quantum Datasets for Machine Learning [article]

Elija Perrier, Akram Youssry, Chris Ferrie
2021 arXiv pre-print
spectroscopy and tomography.  ...  in applied and theoretical quantum settings.  ...  Acknowledgements Research and development of the QDataSet was supported by the Centre for Quantum Software and Information at the University of Technology, Sydney.  ... 
arXiv:2108.06661v1 fatcat:ig6eijrobrgqjgsqcn7bcqpgua

Learning physics-based models from data: perspectives from inverse problems and model reduction

Omar Ghattas, Karen Willcox
2021 Acta Numerica
In both cases, the result is a predictive model that reflects data-driven learning yet deeply embeds the underlying physics, and thus can be used for design, control and decision-making, often with quantified  ...  These fields develop formulations that integrate data into physics-based models while exploiting the fact that many mathematical models of natural and engineered systems exhibit an intrinsically low-dimensional  ...  We also gratefully acknowledge US Department of Energy grants SC0019303 and DE-SC0021239, US Air Force Office of Scientific Research grants FA9550-21-1-0084 and FA9550-17-1-0195, US Advanced Research Projects  ... 
doi:10.1017/s0962492921000064 fatcat:olerbfrqqvfi5m33txfwblzltu

Deep Neural Mobile Networking [article]

Chaoyun Zhang
2020 arXiv pre-print
This makes monitoring and managing the multitude of network elements intractable with existing tools and impractical for traditional machine learning algorithms that rely on hand-crafted feature engineering  ...  increasingly complex, as these struggle to accommodate tremendous data traffic demands generated by ever-more connected devices that have diverse performance requirements in terms of throughput, latency, and  ...  A closer look at Fig. 6.10 reveals that training with the CE loss function on traffic demand ratios leads to better estimates than using Regression or MSE.  ... 
arXiv:2011.05267v1 fatcat:yz2zp5hplzfy7h5kptmho7mbhe

Training dynamics of neural language models [article]

Naomi Saphra, University Of Edinburgh, Adam Lopez, Timothy Hospedales
Using synthetic data and measuring feature interactions, we also discover that hierarchical repres [...]  ...  This framing shows us how structural patterns and linguistic properties are gradually built up, revealing more about why LSTM models learn so effectively on language data.  ...  at low values (e.g., ReLUs).  ... 
doi:10.7488/era/1421 fatcat:adqneqxil5eijesi4mkkk3uh4y