
Active Bayesian Optimization: Minimizing Minimizer Entropy [article]

Il Memming Park, Marcel Nassar, Mijung Park
2012 arXiv   pre-print
The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer.  ...  We implement a tractable approximation of the criterion and demonstrate that it obtains the global minimizer accurately compared to conventional Bayesian optimization criteria.  ...  Under the active Bayesian optimization framework, the goal is to query the (potentially noisy) oracle as few times as possible while quickly gaining knowledge of the minimizer x* = argmin_x f(x).  ... 
arXiv:1202.2143v1 fatcat:2i35qakzcvanvh7le4lp7fj2ti
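
The snippet above describes querying a noisy oracle while tracking uncertainty about the minimizer x* itself. Below is a minimal sketch of that idea, not the paper's criterion or its tractable approximation: place a Gaussian-process posterior over a candidate grid, draw posterior samples, and measure the Shannon entropy of the induced distribution over argmin locations. The RBF kernel, grid, toy function, and all names are my own assumptions.

    import numpy as np

    def rbf_kernel(a, b, length=0.2, var=1.0):
        # Squared-exponential covariance between two sets of 1-D inputs.
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)

    def minimizer_entropy(x_obs, y_obs, grid, noise=1e-3, n_samples=2000, rng=None):
        """Entropy of the (discretized) distribution of x* = argmin f(x),
        estimated from samples of the GP posterior over a candidate grid."""
        rng = np.random.default_rng(rng)
        K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
        Ks = rbf_kernel(grid, x_obs)
        Kss = rbf_kernel(grid, grid)
        Kinv = np.linalg.inv(K)
        mean = Ks @ Kinv @ y_obs
        cov = Kss - Ks @ Kinv @ Ks.T + 1e-8 * np.eye(len(grid))
        samples = rng.multivariate_normal(mean, cov, size=n_samples)
        counts = np.bincount(samples.argmin(axis=1), minlength=len(grid))
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    # Toy usage: the entropy shrinks as observations accumulate near the minimum.
    f = lambda x: np.sin(3 * x) + 0.5 * x
    grid = np.linspace(0, 2, 101)
    x_obs = np.array([0.1, 1.9])
    print(minimizer_entropy(x_obs, f(x_obs), grid))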

Adaptive Decision Making via Entropy Minimization [article]

Armen E. Allahverdyan, Aram Galstyan, Ali E. Abbas, Zbigniew R. Struzik
2018 arXiv   pre-print
We explore consequences of making this choice via entropy minimization, which is argued to be a specific example of risk-aversion.  ...  How can/ought our agent choose an optimal (in a technical sense) mixed action?  ...  This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2017-17071900005.  ... 
arXiv:1803.06638v2 fatcat:tvybni3bmrctvp4ui5hnhqs5b4
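
As a loose illustration of entropy minimization acting as a risk-averse tie-breaker when choosing a mixed action (this is not the paper's formal derivation; the utility matrix, belief, simplex grid, and tolerance are assumptions), the sketch below keeps only mixed actions whose expected utility is (near-)maximal and returns the one with the smallest Shannon entropy.

    import numpy as np
    from itertools import product

    def min_entropy_mixed_action(utility, prior, n_grid=21, tol=1e-6):
        """utility[a, s]: utility of action a in state s; prior[s]: belief over states.
        Among mixed actions with (near-)maximal expected utility, return the
        minimum-entropy one."""
        n_actions = utility.shape[0]
        eu = utility @ prior                       # expected utility of each pure action
        best = None
        grids = [np.linspace(0.0, 1.0, n_grid)] * (n_actions - 1)
        for partial in product(*grids):            # crude grid over the simplex
            last = 1.0 - sum(partial)
            if last < -1e-9:
                continue
            p = np.array(list(partial) + [max(last, 0.0)])
            if abs(p @ eu - eu.max()) > tol:
                continue                           # keep only (near-)optimal mixtures
            q = p[p > 0]
            h = -(q * np.log(q)).sum()
            if best is None or h < best[0]:
                best = (h, p)
        return best

    utility = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])  # two equivalent actions + one worse
    prior = np.array([0.6, 0.4])
    print(min_entropy_mixed_action(utility, prior))            # picks a pure (zero-entropy) action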

Drift correction in localization microscopy using entropy minimization [article]

Jelmer Cnossen, Tao Ju Cui, Chirlmin Joo, Carlas S Smith
2021 bioRxiv   pre-print
Here, we minimize a bound on the entropy of the obtained localizations to efficiently compute a precise drift estimate.  ...  We show that RCC has sub-optimal precision and bias, which leaves room for improvement.  ...  To our knowledge, further improvements of post-process drift correction over RCC have been limited: [9] demonstrated Bayesian sample drift inference (BaSDI), an optimal approach using Bayesian statistics  ... 
doi:10.1101/2021.03.30.437682 fatcat:etcds5w7nngwng5ieiepfifrvi
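
A minimal 1-D sketch of drift estimation by entropy minimization, assuming the entropy of the corrected localizations can be approximated by the negative mean log-density of a Gaussian KDE. The paper minimizes a bound on the entropy for 2-D/3-D localization data, so the cost, optimizer, block structure, and names here are illustrative only.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gaussian_kde

    def kde_entropy(points):
        # Monte Carlo estimate of differential entropy: -E[log p(x)] under a KDE.
        kde = gaussian_kde(points)
        return -np.mean(np.log(kde(points) + 1e-12))

    def estimate_drift(locs, frame_block, n_blocks):
        """locs: 1-D localization coordinates; frame_block: block index per localization.
        Returns per-block drift (block 0 fixed at zero) minimizing the KDE entropy
        of the drift-corrected localizations."""
        def cost(drift_tail):
            drift = np.concatenate(([0.0], drift_tail))
            return kde_entropy(locs - drift[frame_block])
        res = minimize(cost, np.zeros(n_blocks - 1), method="Nelder-Mead")
        return np.concatenate(([0.0], res.x))

    # Toy data: the same three 'binding sites', shifted by a known drift per block.
    rng = np.random.default_rng(0)
    sites = rng.choice([1.0, 2.0, 3.0], size=600)
    true_drift = np.array([0.0, 0.15, 0.3])
    frame_block = rng.integers(0, 3, size=600)
    locs = sites + true_drift[frame_block] + 0.02 * rng.normal(size=600)
    print(estimate_drift(locs, frame_block, 3))   # roughly recovers [0, 0.15, 0.3]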

Task-Oriented Active Sensing via Action Entropy Minimization

Tipakorn Greigarn, Michael S. Branicky, M. Cenk Cavusoglu
2019 IEEE Access  
In active sensing, sensing actions are typically chosen to minimize the uncertainty of the state according to some information-theoretic measure such as entropy, conditional entropy, mutual information  ...  This paper presents a new task-oriented active sensing scheme, where the task is taken into account in sensing action selection by choosing sensing actions that minimize the uncertainty in future task-related  ...  Typically an active sensing algorithm chooses a sensing action to optimize some information measure.  ... 
doi:10.1109/access.2019.2941706 pmid:31737464 pmcid:PMC6857841 fatcat:za2762u2mrelffqlo7thu5igqq
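
For reference, here is a small discrete example of the standard criterion the snippet contrasts itself with: choosing the sensing action that minimizes the expected posterior entropy of the state. It is not the paper's task-oriented action-entropy objective; the sensors, prior, and observation models below are made up.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def best_sensing_action(prior, likelihoods):
        """likelihoods[a][o, s] = P(observation o | state s, sensing action a).
        Returns the action minimizing the expected posterior entropy of the state."""
        scores = []
        for lik in likelihoods:
            p_obs = lik @ prior                      # P(o | a)
            expected_h = 0.0
            for o, po in enumerate(p_obs):
                if po == 0:
                    continue
                posterior = lik[o] * prior / po      # Bayes update for outcome o
                expected_h += po * entropy(posterior)
            scores.append(expected_h)
        return int(np.argmin(scores)), scores

    # Three states; sensor A separates state 0 from {1,2}, sensor B separates 2 from {0,1}.
    prior = np.array([0.5, 0.3, 0.2])
    sensor_a = np.array([[0.9, 0.1, 0.1], [0.1, 0.9, 0.9]])
    sensor_b = np.array([[0.1, 0.1, 0.9], [0.9, 0.9, 0.1]])
    print(best_sensing_action(prior, [sensor_a, sensor_b]))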

Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes [article]

Mathieu N. Galtier and Camille Marini and Gilles Wainrib and Herbert Jaeger
2014 arXiv   pre-print
The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization.  ...  Recall we intend to tune the noise to minimize the relative entropy.  ...  Training to minimize relative entropy: This section explains the training of the connection matrices W_00 and W_01 so that the distance between the neural network and the target time series is minimized  ... 
arXiv:1402.1613v2 fatcat:mve3kcgo6ndibbvpjaidx5ts2m
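
For the linear-Gaussian special case, fitting a model by relative-entropy (KL) minimization reduces to a least-squares propagator plus a residual noise covariance, which is the Linear Inverse Modeling ingredient named in the abstract. The generic LIM fit below is only a sketch of that component; the paper's connection matrices W_00/W_01 and its nonlinear network are not reproduced.

    import numpy as np

    def fit_lim(x):
        """Linear Inverse Model: x[t+1] ~ A x[t] + noise.
        A is the least-squares propagator; Q the residual (noise) covariance."""
        x0, x1 = x[:-1], x[1:]
        # Lag-1 and lag-0 covariances (time along axis 0, variables along axis 1).
        c1 = x1.T @ x0 / len(x0)
        c0 = x0.T @ x0 / len(x0)
        A = c1 @ np.linalg.inv(c0)
        resid = x1 - x0 @ A.T
        Q = resid.T @ resid / len(resid)
        return A, Q

    # Toy 2-D AR(1) process, then recover its propagator and noise covariance.
    rng = np.random.default_rng(1)
    A_true = np.array([[0.9, 0.1], [-0.2, 0.8]])
    x = np.zeros((5000, 2))
    for t in range(4999):
        x[t + 1] = A_true @ x[t] + 0.1 * rng.normal(size=2)
    A_hat, Q_hat = fit_lim(x)
    print(np.round(A_hat, 2))   # close to A_true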

Bipartite pattern discovery by entropy minimization-based multiple local alignment

C. Bi
2004 Nucleic Acids Research  
We developed a novel bipartite algorithm, bipartite pattern discovery (Bipad), which produces a mathematical model based on information maximization or Shannon's entropy minimization principle, for discovery  ...  The entropy of individual half-site motifs is minimized, rather than the combined entropy of both half-site motifs in a single integrated model.  ...  First, we minimize the total entropy present in both half sites as an integral model (rather than as independent half sites).  ... 
doi:10.1093/nar/gkh825 pmid:15388800 pmcid:PMC521645 fatcat:7se7xrwiyzhqppgrzhlustgbkq
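
The helper below shows only the objective named in the abstract: the total Shannon entropy of a bipartite (two half-site) alignment, summed column by column over the two half-site position counts, with lower total entropy preferred. It is not the Bipad multiple-local-alignment search itself, and the sequences, window positions, and pseudocount are illustrative.

    import numpy as np

    BASES = "ACGT"

    def column_entropy(column, pseudocount=0.5):
        # Shannon entropy (bits) of one alignment column over A/C/G/T.
        counts = np.array([column.count(b) for b in BASES], dtype=float) + pseudocount
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def half_site_entropy(seqs, start, width):
        """Total entropy of a half-site motif: sum over the columns of the
        window [start, start + width) taken from every sequence."""
        cols = [[s[start + i] for s in seqs] for i in range(width)]
        return sum(column_entropy(c) for c in cols)

    def bipartite_entropy(seqs, left_start, left_w, right_start, right_w):
        # Objective minimized during bipartite pattern discovery:
        # entropy of the left half-site plus entropy of the right half-site.
        return (half_site_entropy(seqs, left_start, left_w)
                + half_site_entropy(seqs, right_start, right_w))

    seqs = ["TGACCTAAGGTCA", "TGACCTCAGGTCA", "TGACCAGAGGTCA"]
    print(bipartite_entropy(seqs, 0, 6, 7, 6))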

Near-Optimal Bayesian Active Learning with Noisy Observations [article]

Daniel Golovin and Andreas Krause and Debajyoti Ray
2013 arXiv   pre-print
We develop EC2, a novel greedy active learning algorithm, and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with  ...  We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known  ...  Bayesian Active Learning in the Noiseless Case: In the Bayesian active learning problem, we would like to distinguish among a given set of hypotheses H = {h_1, ..., h_n} by performing tests from a  ... 
arXiv:1010.3091v2 fatcat:pa5eblqijffr5fny5p3mhzcm4a
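
A compact, noiseless rendering of the equivalence-class edge-cutting idea as I read it from the abstract: hypotheses in different decision classes are joined by edges weighted by the product of their priors, and the greedy rule runs the test with the largest expected weight of edges cut. The noisy version analyzed in the paper is not reproduced, and the priors, classes, and tests are toy values.

    import numpy as np
    from itertools import combinations

    def expected_edges_cut(prior, classes, outcomes):
        """prior[h]: prior of hypothesis h; classes[h]: its equivalence class;
        outcomes[h]: deterministic outcome of this test under h.
        Returns the expected weight of between-class edges cut by the test."""
        all_edges = sum(prior[a] * prior[b] for a, b in combinations(range(len(prior)), 2)
                        if classes[a] != classes[b])
        total = 0.0
        for o in set(outcomes):
            consistent = [h for h, out in enumerate(outcomes) if out == o]
            p_o = sum(prior[h] for h in consistent)
            # Edges that survive: between-class pairs with both endpoints still consistent.
            remaining = sum(prior[a] * prior[b] for a, b in combinations(consistent, 2)
                            if classes[a] != classes[b])
            total += p_o * (all_edges - remaining)
        return total

    def greedy_test(prior, classes, tests):
        # tests[t][h]: outcome of test t under hypothesis h.
        scores = [expected_edges_cut(prior, classes, t) for t in tests]
        return int(np.argmax(scores)), scores

    prior = [0.4, 0.3, 0.2, 0.1]
    classes = [0, 0, 1, 1]                      # two decision regions
    tests = [[0, 0, 1, 1],                      # perfectly separates the classes
             [0, 1, 0, 1]]                      # uninformative about the class
    print(greedy_test(prior, classes, tests))   # prefers the separating test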

Active inference, Bayesian optimal design, and expected utility [article]

Noor Sajid, Lancelot Da Costa, Thomas Parr, Karl Friston
2021 arXiv   pre-print
In this chapter, we describe how active inference combines Bayesian decision theory and optimal Bayesian design principles under a single imperative to minimize expected free energy.  ...  When removing prior outcome preferences from expected free energy, active inference reduces to optimal Bayesian design, i.e., information gain maximization.  ...  We review these aspects of active inference and show that the minimization of expected free energy subsumes Bayesian decision theory and optimal Bayesian design principles as special cases.  ... 
arXiv:2110.04074v1 fatcat:6mzmokuquvco3g2bngd2kwor3u
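
A hedged discrete sketch of the decomposition described above: the expected free energy of a policy is -(expected information gain about hidden states) - (expected log preference over outcomes), so dropping the preference term leaves pure information gain (optimal Bayesian design) and dropping the epistemic term leaves an expected-utility rule. The likelihood, preferences, and candidate policies are illustrative, not from the chapter.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def expected_free_energy(q_states, likelihood, log_pref):
        """q_states[s]: predicted state distribution under a policy.
        likelihood[o, s] = P(o | s).  log_pref[o]: log prior preference over outcomes.
        G = -(expected information gain about s) - (expected log preference)."""
        q_obs = likelihood @ q_states
        info_gain = 0.0
        for o, po in enumerate(q_obs):
            if po == 0:
                continue
            posterior = likelihood[o] * q_states / po
            info_gain += po * (entropy(q_states) - entropy(posterior))
        pragmatic = q_obs @ log_pref
        return -info_gain - pragmatic

    likelihood = np.array([[0.9, 0.2], [0.1, 0.8]])   # P(o | s)
    log_pref = np.log(np.array([0.7, 0.3]))           # outcome 0 is preferred
    policy_a = np.array([0.5, 0.5])                   # uncertain, hence informative to probe
    policy_b = np.array([0.95, 0.05])                 # belief already resolved
    print(expected_free_energy(policy_a, likelihood, log_pref),
          expected_free_energy(policy_b, likelihood, log_pref))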

Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes

Mathieu N. Galtier, Camille Marini, Gilles Wainrib, Herbert Jaeger
2014 Neural Networks  
The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization.  ...  Recall we intend to tune the noise to minimize the relative entropy.  ...  Training to minimize relative entropy: This section explains the training of the connection matrices W_00 and W_01 so that the distance between the neural network and the target time series is minimized  ... 
doi:10.1016/j.neunet.2014.04.002 pmid:24815743 fatcat:swg3g2u26ve3dgosrdnqguxye4

Autonomous Searching for a Diffusive Source Based on Minimizing the Combination of Entropy and Potential Energy

Cheng Song, Yuyao He, Xiaokang Lei
2019 Sensors  
It results in a faster effective search strategy by which the sensor determines its actions by minimizing the free energy rather than only the entropy in traditional infotaxis.  ...  In this paper, from the context of exploration-exploitation balance, a novel search scheme based on minimizing free energy that combines the entropy and the potential energy is proposed.  ...  The minimization of entropy drove the sensor to gather information and actively update the probability distribution of the source.  ... 
doi:10.3390/s19112465 fatcat:ptzyds2hlzeudoab5qjw2pwbmy
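
A toy grid example of the combined criterion described above: each candidate sensor position is scored by the expected posterior entropy of the source location plus a weighted potential term (here, the expected distance to the source), and the lowest combined score wins. The detection model, weighting lam, and grid are my assumptions rather than the paper's.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def detect_prob(sensor, source, strength=0.8, scale=2.0):
        # Hypothetical detection model: hit probability decays with distance.
        d = np.linalg.norm(np.asarray(sensor) - np.asarray(source))
        return strength * np.exp(-d / scale)

    def move_score(posterior, cells, sensor, lam=0.3):
        """Free-energy-style score: expected posterior entropy after one reading
        at `sensor`, plus lam * expected distance from `sensor` to the source."""
        p_hit = sum(posterior[i] * detect_prob(sensor, c) for i, c in enumerate(cells))
        exp_h = 0.0
        for outcome, p_out in ((1, p_hit), (0, 1.0 - p_hit)):
            if p_out <= 0:
                continue
            lik = np.array([detect_prob(sensor, c) if outcome else 1 - detect_prob(sensor, c)
                            for c in cells])
            post = lik * posterior / p_out
            exp_h += p_out * entropy(post)
        exp_dist = sum(posterior[i] * np.linalg.norm(np.asarray(sensor) - np.asarray(c))
                       for i, c in enumerate(cells))
        return exp_h + lam * exp_dist

    cells = [(x, y) for x in range(5) for y in range(5)]
    posterior = np.full(len(cells), 1.0 / len(cells))   # uniform belief over the source
    moves = [(0, 0), (2, 2), (4, 4)]
    print(min(moves, key=lambda m: move_score(posterior, cells, m)))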

Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System

Rifai Chai, Ganesh R. Naik, Tuan Nghia Nguyen, Sai Ho Ling, Yvonne Tran, Ashley Craig, Hung T. Nguyen
2017 IEEE journal of biomedical and health informatics  
Index Terms: electroencephalography (EEG), driver fatigue, autoregressive (AR) model, independent component analysis, entropy rate bound minimization, Bayesian neural network.  ...  The system uses independent component by entropy rate bound minimization analysis (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network  ...  Later, the algorithm is optimized to obtain a new W, which minimizes the mutual information rate.  ... 
doi:10.1109/jbhi.2016.2532354 pmid:26915141 fatcat:fb4vsneoovepxk6rljgdcburhi

Near-optimal Bayesian Active Learning with Correlated and Noisy Tests [article]

Yuxin Chen, S. Hamed Hassani, Andreas Krause
2016 arXiv   pre-print
We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests.  ...  , and an active preference learning task via pairwise comparisons.  ...  The problem of optimal information gathering has been studied in the context of active learning (Dasgupta, 2004a; Settles, 2012), Bayesian experimental design (Chaloner & Verdinelli, 1995), policy  ... 
arXiv:1605.07334v2 fatcat:kq6qvgvjsnb4bci2yk5wwgtniu

A New Robust Regression Method Based on Minimization of Geodesic Distances on a Probabilistic Manifold: Application to Power Laws

Geert Verdoolaege
2015 Entropy  
The method is based on minimization of the Rao geodesic distance on a probabilistic manifold.  ...  In the experiments below, we employed a classic active-set algorithm to carry out the optimization [21].  ...  Hence, the optimization procedure involves matching not only y_n with b·x_n, but also σ²_obs with σ²_y + β²σ²_x.  ... 
doi:10.3390/e17074602 fatcat:z64bas7p6jcaxpuiqnbes26nqi
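
For univariate normals, the Rao geodesic distance has a closed form (the (μ/√2, σ) half-plane carries a hyperbolic metric, giving the arccosh expression below), which makes a small regression sketch possible: minimize the summed geodesic distance between the observed-point distributions and the model-predicted distributions of a power law y = b·x^a. The error propagation, optimizer, and toy data are assumptions and not the paper's exact procedure.

    import numpy as np
    from scipy.optimize import minimize

    def rao_distance_normal(mu1, s1, mu2, s2):
        """Fisher-Rao geodesic distance between N(mu1, s1^2) and N(mu2, s2^2)."""
        num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
        return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * s1 * s2))

    def fit_power_law_gd(x, y, s_x, s_y):
        """Fit y = b * x^a by minimizing the total Rao distance between
        N(y_n, s_y^2) and N(b * x_n^a, s_pred^2), where s_pred propagates
        the x-uncertainty through the local slope of the model."""
        def cost(theta):
            a, log_b = theta
            b = np.exp(log_b)
            pred = b * x ** a
            s_pred = np.sqrt(s_y ** 2 + (a * b * x ** (a - 1)) ** 2 * s_x ** 2)
            return sum(rao_distance_normal(yi, s_y, pi, si)
                       for yi, pi, si in zip(y, pred, s_pred))
        res = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead")
        a, log_b = res.x
        return a, np.exp(log_b)

    rng = np.random.default_rng(2)
    x = np.linspace(1.0, 10.0, 40)
    y = 2.0 * x ** 1.5 + rng.normal(scale=1.0, size=x.size)
    print(fit_power_law_gd(x, y, s_x=0.05, s_y=1.0))   # should land near (1.5, 2.0)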

Bayesian active learning for optimization and uncertainty quantification in protein docking [article]

Yue Cao, Yang Shen
2019 arXiv   pre-print
Results: We introduce a novel algorithm, Bayesian Active Learning (BAL), for optimization and UQ of such black-box functions and flexible protein docking.  ...  BAL directly models the posterior distribution of the global optimum (or native structures for protein docking) with active sampling and posterior estimation iteratively feeding each other.  ...  In Materials and Methods, we first give a mathematical formulation for the optimization and the UQ, then introduce our Bayesian active learning (BAL), which iteratively updates sampling and posterior estimation  ... 
arXiv:1902.00067v1 fatcat:7uhkja5jdbcivansyx7jqeu2ii
Showing results 1-15 out of 21,219 results