A minimax near-optimal algorithm for adaptive rejection sampling
[article]
2018
arXiv
pre-print
We give the first theoretical lower bound for the problem of adaptive rejection sampling and introduce a new algorithm which guarantees a near-optimal rejection rate in a minimax sense. ...
Rejection Sampling is a fundamental Monte-Carlo method. ...
The work of A. ...
arXiv:1810.09390v1
fatcat:cpixwtugqrbh3jc6rb4ea3dml4
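For context on the problem this entry addresses, here is a minimal NumPy sketch of plain (non-adaptive) rejection sampling and of the rejection rate the adaptive algorithm tries to minimize. This is not the paper's algorithm; the target/proposal names and the uniform-proposal example are illustrative assumptions.

```python
# Minimal sketch of plain rejection sampling; the paper's adaptive method instead
# refines the proposal/envelope as samples arrive to drive the rejection rate down.
import numpy as np

def rejection_sample(target_pdf, proposal_sampler, proposal_pdf, M, n, rng=None):
    """Draw n samples from target_pdf using a proposal with envelope constant M,
    assuming target_pdf(x) <= M * proposal_pdf(x) for all x."""
    rng = np.random.default_rng() if rng is None else rng
    samples, proposed = [], 0
    while len(samples) < n:
        x = proposal_sampler(rng)
        proposed += 1
        if rng.uniform() <= target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)
    rejection_rate = 1.0 - n / proposed   # the quantity an adaptive scheme reduces
    return np.array(samples), rejection_rate

# Example (assumed setup): standard normal restricted to [0, 4], uniform proposal.
f = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
g = lambda x: 0.25                        # Uniform(0, 4) density
draw_g = lambda rng: rng.uniform(0.0, 4.0)
xs, rate = rejection_sample(f, draw_g, g, M=f(0.0) / 0.25, n=1000)
```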
Evolutionary Generative Adversarial Networks
[article]
2018
arXiv
pre-print
and evolve a population of generators to adapt to the environment (i.e., the discriminator). ...
We also utilize an evaluation mechanism to measure the quality and diversity of generated samples, such that only well-performing generator(s) are preserved and used for further training. ...
being mistaken, i.e., M_G^heuristic = −(1/2) E_{z∼p_z}[log D(G(z))]. Compared to the minimax mutation, the heuristic mutation will not saturate when the discriminator rejects the generated samples. ...
arXiv:1803.00657v1
fatcat:ngoz424hcrhtxc4yddyhw4sx2e
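The saturation claim in the snippet above can be seen numerically. Below is a small NumPy sketch (not the authors' code) evaluating the two generator objectives on raw discriminator outputs D(G(z)); the example values are assumptions chosen to show the regime where the discriminator confidently rejects the fakes.

```python
# Compare the minimax and heuristic generator objectives from the snippet.
import numpy as np

def minimax_mutation_loss(d_fake):
    # (1/2) E[log(1 - D(G(z)))]: flattens out (saturates) as d_fake -> 0.
    return 0.5 * np.mean(np.log(1.0 - d_fake))

def heuristic_mutation_loss(d_fake):
    # -(1/2) E[log D(G(z))]: stays large and informative as d_fake -> 0.
    return -0.5 * np.mean(np.log(d_fake))

d_fake = np.array([1e-3, 2e-3, 5e-3])     # discriminator confidently rejecting fakes
print(minimax_mutation_loss(d_fake))      # near 0: weak training signal for the generator
print(heuristic_mutation_loss(d_fake))    # large: strong signal to improve the generator
```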
Almost Horizon-Free Structure-Aware Best Policy Identification with a Generative Model
2019
Neural Information Processing Systems
A key feature of our analysis is that it removes all horizon dependencies in the sample complexity of suboptimal actions except for the intrinsic scaling of the value function and a constant additive term ...
the optimal action-value function to reduce the sample complexity needed to find a good policy, precisely highlighting the contribution of each state-action pair to the final sample complexity. ...
The authors are grateful to the reviewers for the high-quality reviews and suggestions. ...
dblp:conf/nips/ZanetteKB19
fatcat:hk4u3tclrffonnizgvonhkltmi
De-noising by soft-thresholding
1995
IEEE Transactions on Information Theory
Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0, 1] from noisy data d_i = f(t_i) + z_i, i = 0, ..., n−1, t_i = i/n, with z_i i.i.d. N(0, 1). ...
[Smooth]: With high probability, f̂_n is at least as smooth as f, in any of a wide variety of smoothness measures. ...
The performance measure M(·, ·) is near-optimal in the following minimax sense. ...
doi:10.1109/18.382009
fatcat:qtzn3mcupncb5ldjt5glbj4ehe
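The soft-threshold nonlinearity named in this entry's title is simple to state; here is a minimal NumPy sketch applied directly to noisy coefficients. The sparse-coefficient setup and the sqrt(2 log n) threshold level are illustrative assumptions from the broader de-noising literature, not a reproduction of the paper's full wavelet pipeline.

```python
# Soft thresholding: shrink every coefficient toward zero by t and kill small ones.
import numpy as np

def soft_threshold(y, t):
    """eta_t(y) = sign(y) * max(|y| - t, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

n, sigma = 1024, 1.0
rng = np.random.default_rng(0)
coeffs = np.zeros(n); coeffs[:16] = 5.0            # sparse "true" coefficients (assumed)
noisy = coeffs + sigma * rng.standard_normal(n)    # observations with i.i.d. N(0, 1) noise
t = sigma * np.sqrt(2 * np.log(n))                 # common threshold choice in this literature
denoised = soft_threshold(noisy, t)
```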
Minimax-optimal classification with dyadic decision trees
2006
IEEE Transactions on Information Theory
In this paper, it is shown that a new family of decision trees, dyadic decision trees (DDTs), attain nearly optimal (in a minimax sense) rates of convergence for a broad range of classification problems ...
Index Terms-Complexity regularization, decision trees, feature rejection, generalization error bounds, manifold learning, minimax optimality, pruning, rates of convergence, recursive dyadic partitions, ...
ACKNOWLEDGMENT The authors wish to thank Rui Castro and Rebecca Willett for their helpful feedback, and Rui Castro for his insights regarding the two-sided noise condition. ...
doi:10.1109/tit.2006.871056
fatcat:7qsjlkovfndoxemzmcm2raqcb4
On the Adaptive Properties of Decision Trees
2004
Neural Information Processing Systems
In this paper we examine a decision tree based on dyadic splits that adapts to each of these conditions to achieve minimax optimal rates of convergence. ...
Decision trees are surprisingly adaptive in three important respects: They automatically (1) adapt to favorable conditions near the Bayes decision boundary; (2) focus on data distributed on lower dimensional ...
Our contribution is to demonstrate practical classifiers that adaptively attain minimax optimal rates for some of Tsybakov's and other classes. ...
dblp:conf/nips/ScottN04
fatcat:ipmbjioiajd23jwinowk6bwu2u
Learning About Learning in Games Through Experimental Control of Strategic Interdependence
2008
Social Science Research Network
We report results from an experiment in which humans repeatedly play one of two games against a computer program that follows either a reinforcement or an experience weighted attraction learning algorithm ...
These factors lead to a strong linear relationship between the humans' and algorithms' action choice proportions that is suggestive of the algorithms' best response correspondences. ...
However, we get a near opposite result in the human-algorithm treatments. We do reject zero correlation for four out of six of the human-algorithm cases. ...
doi:10.2139/ssrn.1260867
fatcat:lvytrpohybftpaqv5mnpoeu5zi
Learning about learning in games through experimental control of strategic interdependence
2012
Journal of Economic Dynamics and Control
We report results from an experiment in which humans repeatedly play one of two games against a computer program that follows either a reinforcement or an experience weighted attraction learning algorithm ...
These factors lead to a strong linear relationship between the humans' and algorithms' action choice proportions that is suggestive of the algorithms' best response correspondences. ...
However, we get a near opposite result in the human-algorithm treatments. We do reject zero correlation for four out of six of the human-algorithm cases. ...
doi:10.1016/j.jedc.2011.09.007
fatcat:uawniaaaxracxoqjdxfj73adoi
Robust recursive Lp estimation
1990
IEE Proceedings D Control Theory and Applications
The major limitation on outlier robustness is seen to be the requirement for convergence of the recursive minimisation. The algorithm is validated with an application in adaptive control. ...
It is shown that Lp estimation is minimax outlier-robust and minimax covariance-robust over the neighbourhood of exponential power distributions. Efficiency loss is negligible. ...
For this algorithm to be consistent, it may be necessary to modify ψ(x)/x near the origin, to prevent unboundedness. ...
doi:10.1049/ip-d.1990.0007
fatcat:hwrf7yamcze3lk5emieernha2e
Minimax Adaptive Spectral Estimation From an Ensemble of Signals
2006
IEEE Transactions on Signal Processing
The ratio M/n is the price to pay for data adaptive linear aggregation and is optimal, in a minimax sense. ...
As a consequence, we show that our final estimator is minimax rate adaptive, if at least two of the estimators per signal attain the optimal rate n^(−2α/(2α+1)), for spectra belonging to a generalized Lipschitz ...
Under these ideal conditions and if all f_j are minimax adaptive estimators, the average estimator is also minimax adaptive, and there is no need for a more complicated strategy. ...
doi:10.1109/tsp.2006.877639
fatcat:tvxfnh4hq5clrejigtatdjt7we
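The simplest aggregation mentioned in the snippet above, averaging per-signal spectral estimates across the ensemble, can be sketched as follows. Plain periodograms stand in for the per-signal estimators f_j; the paper's data-adaptive linear aggregation is more refined, and the signal-generation step here is an assumption for illustration.

```python
# Average per-signal spectral estimates across an ensemble of M signals.
import numpy as np

def periodogram(x):
    n = len(x)
    return np.abs(np.fft.rfft(x))**2 / n          # crude spectral estimate for one signal

def average_estimator(signals):
    """Average the M per-signal estimates; under the snippet's ideal conditions this
    inherits the adaptivity of the individual estimators."""
    return np.mean([periodogram(x) for x in signals], axis=0)

rng = np.random.default_rng(1)
M, n = 8, 512
signals = [np.convolve(rng.standard_normal(n + 4), np.ones(5) / 5, mode="valid")
           for _ in range(M)]                     # M noisy realizations sharing one spectrum
f_bar = average_estimator(signals)
```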
Near-minimax recursive density estimation on the binary hypercube
2008
Neural Information Processing Systems
key ways: (1) it attains near-minimax mean-squared error, and (2) the computational complexity is lower for sparser densities. ...
However, for a wide class of densities that satisfy a certain sparsity condition, our estimator runs in probabilistic polynomial time and adapts to the unknown sparsity of the underlying density in two ...
Our contribution consists in developing a thresholding density estimator that adapts to the unknown sparsity of the underlying density in two key ways: (1) it is near-minimax optimal, with the error decay ...
dblp:conf/nips/RaginskyLWS08
fatcat:wm4enq7y45etxm22rdnwdudi4y
OPTIMAL TESTING IN A FIXED-EFFECTS FUNCTIONAL ANALYSIS OF VARIANCE MODEL
2004
International Journal of Wavelets, Multiresolution and Information Processing
We adapt the optimal (minimax) hypothesis testing procedures for testing a zero signal in a Gaussian "signal plus noise" model to derive optimal (minimax) non-adaptive and adaptive hypothesis testing procedures ...
In order to shed some light on the theoretical results obtained, we carry out a simulation study to examine the finite sample performance of the proposed functional hypothesis testing procedures. ...
The authors would like to thank Claudia Angelini (CNR -Napoli) for helpful remarks and the two anonymous referees for their constructive comments. ...
doi:10.1142/s0219691304000639
fatcat:iodjn6cc3reddig36rnn3bgxkq
A Probabilistic Machine Learning Approach to Scheduling Parallel Loops with Bayesian Optimization
2020
IEEE Transactions on Parallel and Distributed Systems
BO FSS is an automatic tuning variant of the factoring self-scheduling (FSS) algorithm and is based on Bayesian optimization (BO), a black-box optimization algorithm. ...
According to the minimax regret, BO FSS shows more consistent performance than other algorithms. ...
ACKNOWLEDGMENTS The authors would like to thank the reviewers for providing precious comments enriching our work, Pedro Henrique Penna for the helpful discussions about the BinLPT scheduling algorithm, ...
doi:10.1109/tpds.2020.3046461
fatcat:fxc4d32wrba2pgoank5nep6s5a
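The minimax-regret comparison referenced in the snippet can be illustrated with a small sketch; the scheduler names echo the entry, but the runtime numbers are made up for the example.

```python
# Minimax regret across workloads: a scheduler's regret on a workload is its runtime
# minus the best runtime any scheduler achieved there; take the worst case over workloads.
import numpy as np

runtimes = {                                     # seconds per workload (hypothetical)
    "BO FSS": np.array([10.2, 8.1, 15.0]),
    "FSS":    np.array([11.0, 7.9, 17.5]),
    "static": np.array([ 9.8, 9.5, 21.0]),
}
best = np.min(np.stack(list(runtimes.values())), axis=0)    # best runtime per workload
minimax_regret = {name: float(np.max(t - best)) for name, t in runtimes.items()}
# The scheduler with the smallest minimax regret is the most consistent across workloads.
```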
OptGS: An R Package for Finding Near-Optimal Group-Sequential Designs
2015
Journal of Statistical Software
Package OptGS searches for near-optimal and balanced (i.e., one which balances more than one optimality criterion) group-sequential designs for randomized controlled trials with normally distributed outcomes ...
An optimal group-sequential design is one which controls the type-I error rate and power at a specified level, but minimizes the expected sample size of the trial when the true treatment effect is equal ...
Thomas Jaki for his helpful comments on both the R package and the manuscript. ...
doi:10.18637/jss.v066.i02
fatcat:6y2pjdun5rh6ngyd2lle7dwl2u
Adaptive multiscale detection of filamentary structures in a background of uniform random points
2006
Annals of Statistics
This algorithm therefore has an optimal detection threshold, up to a factor T_*/T_-. ...
The algorithm rejects H_0 whenever some such subgraph contains a path that connects many consecutive significant counts. ...
(This notion of adaptivity parallels the notion of adaptive near-minimaxity in nonparametric smoothing, in which a single estimator, able to perform in a near-minimax way across a whole range of different ...
doi:10.1214/009053605000000787
fatcat:xsbu4y7fw5gxpdn5hh52xhonva
Showing results 1 — 15 out of 1,190 results