On Boosting with Optimal Poly-Bounded Distributions
[chapter]
2001
Lecture Notes in Computer Science
We construct a framework which allows an algorithm to turn the distributions produced by some boosting algorithms into polynomially smooth distributions (w.r.t. the PAC oracle's distribution), with minimal ...
Our scheme allows the execution of AdaBoost in the on-line boosting mode (i.e., to perform boosting "by filtering"). ...
equivalence between poly-distribution-dependent strong PAC learning and poly-distribution-dependent weak PAC learning. ...
doi:10.1007/3-540-44581-1_32
fatcat:hcmhfchnyjc7rhtuu5krgiwfcu
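The filtering framework in the snippet above lends itself to a short illustration: a booster's reweighted distribution can be simulated against the PAC example oracle by rejection sampling, and poly-boundedness (smoothness) of the weights is exactly what keeps the expected rejection count polynomial. A minimal sketch, with all names ours rather than the paper's:

    import random

    def filter_oracle(draw_example, weight, bound):
        # Simulate one draw from the booster's reweighted distribution by
        # rejection sampling against the underlying example oracle EX(D).
        # draw_example() returns (x, y) sampled from D; weight(x, y) is the
        # booster's weight in [0, bound]. Poly-bounded (smooth) weights mean
        # bound is polynomial, so the expected number of rejected draws per
        # accepted example stays polynomial as well.
        while True:
            x, y = draw_example()
            if random.random() < weight(x, y) / bound:
                return x, y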
Distribution-Specific Agnostic Boosting
[article]
2009
arXiv
pre-print
Conversely, our boosting algorithm gives a simple hard-core set construction with an (almost) optimal hard-core set size. ...
This allows boosting a distribution-specific weak agnostic learner to a strong agnostic learner with respect to the same distribution. ...
In order to bound the number of boosting stages we need to lower bound γ · N_h. ...
arXiv:0909.2927v1
fatcat:4ckz5ryasngmzmu4bxsbgietam
Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning
[chapter]
2002
Lecture Notes in Computer Science
This allows adaptively solving problems whose solution is based on smooth boosting (like noise tolerant boosting and DNF membership learning), while preserving the original (non-adaptive) solution's complexity ...
We derive a lower bound for the final error achievable by boosting in the agnostic model and show that our algorithm actually achieves that accuracy (within a constant factor). ...
On the one hand, non-smoothness obliges one to use boosting by sampling. ...
doi:10.1007/3-540-36169-3_10
fatcat:nu6xunw5jfhfxam3d32k36auka
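As a concrete illustration of the smoothness being discussed, the sketch below shows a capped reweighting of the kind used by smooth boosters (MadaBoost-style weight capping); it illustrates the general technique only and is not this paper's algorithm:

    import math

    def smooth_weights(margins, eps):
        # Capped boosting weights: w_i = min(1, exp(-margin_i)).
        # Since every w_i <= 1, whenever the total weight stays above
        # eps * m the normalized weight of any single example is at most
        # 1 / (eps * m), i.e. the distribution is (1/eps)-smooth with
        # respect to the uniform distribution over the m examples.
        w = [min(1.0, math.exp(-g)) for g in margins]
        total = sum(w)
        if total < eps * len(margins):
            return None  # smooth boosters stop here: remaining error is small
        return [wi / total for wi in w]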
Efficient Algorithms for Privately Releasing Marginals via Convex Relaxations
[article]
2013
arXiv
pre-print
Using private boosting we are also able to give nearly matching worst-case error bounds. Our algorithms are based on the geometric techniques of Nikolov, Talwar, and Zhang. ...
In this work we present a polynomial time algorithm that, for any distribution on marginal queries, achieves average error at most Õ(√n · d^(⌈k/2⌉/4)). ...
The above approach gives us average error bounds for any distribution on queries. To get a worst case error bound, we use the Boosting for Queries framework of [15]. ...
arXiv:1308.1385v1
fatcat:mkyd5sl3hzgylethpru7kad3dy
Page 8146 of Mathematical Reviews Vol. , Issue 2004j
[page]
2004
Mathematical Reviews
Servedio, Smooth boosting and learning with malicious noise (473–489); Nader H. Bshouty and Dmitry Gavinsky, On boosting with optimal poly-bounded distributions (490–506); Shai Ben-David, Philip M. ...
additive models online with fast evaluating kernels (444–460); Shie Mannor and Ron Meir, Geometric bounds for generalization in boosting (461–472). ...
Efficient Algorithms for Privately Releasing Marginals via Convex Relaxations
2015
Discrete & Computational Geometry
Using private boosting we are also able to give nearly matching worst-case error bounds. Our algorithms are based on the geometric techniques of Nikolov, Talwar, and Zhang. ...
In this work we present a polynomial time algorithm that, for any distribution on marginal queries, achieves average error at most Õ(√n · d^(⌈k/2⌉/4)). ...
The above approach gives us average error bounds for any distribution on queries. To get a worst case error bound, we use the Boosting for Queries framework of [15]. ...
doi:10.1007/s00454-015-9678-x
fatcat:ztwwmcdminat7nmp2ubmrpixsa
Improved Distributed Approximations for Maximum Independent Set
2020
International Symposium on Distributed Computing
One may wonder whether it is possible to approximate MaxIS with high probability in fewer than poly(log log n) rounds. ...
However, it is unclear how to convert this algorithm to one that succeeds with high probability without sacrificing a large number of rounds. ...
To lower bound the size of the obtained independent set I, one therefore just needs to get a lower bound on the sum of the increment probabilities Pr[v_t ∈ I | I_{t−1}].
doi:10.4230/lipics.disc.2020.35
dblp:conf/wdag/KawarabayashiKS20
fatcat:yhe43goz2nds7a7vedkvnwx524
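For context on the quoted step, the underlying identity is presumably the tower rule: E[|I|] = Σ_t Pr[v_t ∈ I] = Σ_t E[Pr[v_t ∈ I | I_{t−1}]], so a pointwise lower bound on each conditional increment probability immediately lower bounds the expected size of I.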
Boosting Variational Inference: an Optimization Perspective
[article]
2018
arXiv
pre-print
Variational inference is a popular technique to approximate a possibly intractable Bayesian posterior with a more tractable one. ...
Recently, boosting variational inference has been proposed as a new paradigm that approximates the posterior by a mixture of densities, greedily adding components to the mixture. ...
If the set A contains truncated Gaussian distributions with non-degenerate covariance matrices but with small enough determinant to perfectly approximate any density defined on a bounded support, it also ...
arXiv:1708.01733v2
fatcat:35kinigowza6pbzuzvgrhwxwxu
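Since the snippet describes the greedy-mixture paradigm, a discretized sketch of one such step may help; this is a Frank-Wolfe-style functional-gradient step on a 1-D grid, illustrative only and not the update analyzed in the paper:

    import numpy as np

    def boosting_vi_step(q, candidates, log_p, grid, gamma=0.5):
        # One greedy step: pick the candidate density most aligned with the
        # descent direction of KL(q || p), then mix it into the current q.
        dx = grid[1] - grid[0]
        grad = np.log(q + 1e-12) - log_p   # functional gradient of KL(q||p), up to a constant
        scores = [float(np.sum(grad * s) * dx) for s in candidates]
        s = candidates[int(np.argmin(scores))]
        # A convex combination keeps the mixture a normalized density;
        # a line search over gamma is the usual refinement.
        return (1.0 - gamma) * q + gamma * s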
Learning Halfspaces with Malicious Noise
[chapter]
2009
Lecture Notes in Computer Science
We give poly(n, 1/ε)-time algorithms for solving the following problems to accuracy ε: • Learning origin-centered halfspaces in ℝ^n with respect to the uniform distribution on the unit ball with malicious ...
(The best previous result was Ω(ε/(n log(n/ε))^(1/4)).) • Learning origin-centered halfspaces with respect to any isotropic logconcave distribution on ℝ^n with malicious noise rate η = Ω(ε^3 / log^2(n/ ...
(The extra factor of ε in the bound of Theorem 2 compared with Theorem 1 comes from the fact that the boosting algorithm constructs "1/ε-skewed" distributions.) ...
doi:10.1007/978-3-642-02927-1_51
fatcat:hk66wrjug5ebtjfsxpczwqaqmy
Martingale Boosting
[chapter]
2005
Lecture Notes in Computer Science
Martingale boosting is a simple and easily understood technique with a simple and easily understood analysis. ...
A slight variant of the approach provably achieves optimal accuracy in the presence of misclassification noise. ...
We are working on implementing the algorithm and evaluating its performance and noise tolerance on real-world data. ...
doi:10.1007/11503415_6
fatcat:aakbf45cxzfz5ign74eyjkan5y
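The mechanism named in the title is simple enough to sketch: an example performs a random walk through a layered branching program with one weak classifier per node, and the final label is the direction of the walk. An illustrative prediction rule (training of the per-node hypotheses omitted):

    def martingale_predict(x, hypotheses):
        # hypotheses[t][k] is the weak classifier at level t for examples
        # that have received k positive predictions so far; the example
        # walks through T levels and is labeled by the majority direction.
        T = len(hypotheses)
        k = 0
        for t in range(T):
            if hypotheses[t][k](x) == 1:
                k += 1
        return 1 if k > T / 2 else 0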
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
2016
SIAM journal on computing (Print)
It relies crucially on our approximation by junta result. As follows from the lower bounds in [1] both of these algorithms are close to optimal. ...
Our uniform distribution algorithm runs in time 2^(1/poly(γ)) · poly(n). ...
One of the key pieces of the proof is the use of a "boosting lemma" on down-monotone events of Goemans and Vondrak [26]. ...
doi:10.1137/140958207
fatcat:id7g5y2pfbcwjij6u2yhtishp4
Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
2013
2013 IEEE 54th Annual Symposium on Foundations of Computer Science
It relies crucially on our approximation by junta result. As follows from the lower bounds in [1] both of these algorithms are close to optimal. ...
Our uniform distribution algorithm runs in time 2^(1/poly(γ)) · poly(n). ...
One of the key pieces of the proof is the use of a "boosting lemma" on down-monotone events of Goemans and Vondrak [26]. ...
doi:10.1109/focs.2013.32
dblp:conf/focs/FeldmanV13
fatcat:wpq2nientjb2he6em6psgfwwz4
Optimal bounds on approximation of submodular and XOS functions by juntas
2014
2014 Information Theory and Applications Workshop (ITA)
It relies crucially on our approximation by junta result. As follows from the lower bounds in [1] both of these algorithms are close to optimal. ...
Our uniform distribution algorithm runs in time 2^(1/poly(γ)) · poly(n). ...
One of the key pieces of the proof is the use of a "boosting lemma" on down-monotone events of Goemans and Vondrak [26]. ...
doi:10.1109/ita.2014.6804263
dblp:conf/ita/FeldmanV14
fatcat:gxjyiofvrfdyxdgf6dfyrkdsaa
Boosting and Differential Privacy
2010
2010 IEEE 51st Annual Symposium on Foundations of Computer Science
Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides ε-differential ...
Given a base synopsis generator that takes a distribution on Q and produces a "weak" synopsis that yields "good" answers for a majority of the weight in Q, our Boosting for Queries algorithm obtains a ...
We say that M is a (k, λ, η, β)-base synopsis generator if for any distribution D on Q, when M is activated on a database x ∈ X^n and on k queries sampled independently from D, with all but β probability ...
doi:10.1109/focs.2010.12
dblp:conf/focs/DworkRV10
fatcat:figgtroohrfjvjplnf46oiygoa
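A schematic (and deliberately non-private) skeleton of the Boosting for Queries loop described above; the real mechanism additionally caps and renormalizes weights and adds calibrated noise to preserve differential privacy, none of which is shown here:

    import random

    def boost_for_queries(queries, base_generator, accurate, rounds, k):
        # Maintain a weight per query; queries the current synopses answer
        # badly are upweighted so the base generator focuses on them.
        weights = {q: 1.0 for q in queries}
        synopses = []
        for _ in range(rounds):
            total = sum(weights.values())
            sample = random.choices(queries, [weights[q] / total for q in queries], k=k)
            syn = base_generator(sample)   # "weak" synopsis: good for a majority of D
            synopses.append(syn)
            for q in queries:
                if accurate(syn, q):
                    weights[q] *= 0.5      # answered well: downweight
                else:
                    weights[q] *= 2.0      # answered badly: upweight
        return synopses                    # answer a query by the median over all synopses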
Logarithmic Time One-Against-Some
[article]
2016
arXiv
pre-print
We show that several simple techniques give rise to an algorithm that can compete with one-against-all in both space and predictive power while offering exponential improvements in speed when the number ...
Compared to previous approaches, we obtain substantially better statistical performance for two reasons: First, we prove a tighter and more complete boosting theorem, and second, we translate the results ...
With this criterion we are in a position to directly optimize information boosting. ...
arXiv:1606.04988v2
fatcat:7laxfux7djarfhwuo2orv4xbxe
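The logarithmic-speed claim above comes from routing: a prediction touches O(log #classes) internal classifiers plus a small candidate set, rather than all classes. A sketch of such a predictor, with the tree and leaf structure assumed for illustration (not taken from the paper):

    def predict_one_against_some(x, tree, scorers):
        # Descend a binary routing tree to a leaf holding a small set of
        # candidate classes, then score only those candidates with their
        # one-against-some binary classifiers.
        node = tree.root                  # assumed structure: .root, .is_leaf,
        while not node.is_leaf:           # .router, .left, .right, .classes
            node = node.right if node.router(x) > 0 else node.left
        return max(node.classes, key=lambda c: scorers[c](x))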