Batch and online learning algorithms for nonconvex Neyman-Pearson classification
2011
ACM Transactions on Intelligent Systems and Technology
We describe and evaluate two algorithms for the Neyman-Pearson (NP) classification problem, which has recently been shown to be of particular importance for bipartite ranking problems. ...
We investigate a batch algorithm based on DC programming and a stochastic gradient method well suited for large-scale datasets. Empirical evidence illustrates the potential of the proposed methods. ...
CONCLUSION We have proposed a batch approach suited for kernel machines and an online learning strategy for large datasets to tackle the non-convex Neyman-Pearson classification problem. ...
doi:10.1145/1961189.1961200
fatcat:yykk3w5gc5a37cjmvwrlxh3n7i
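The abstract above mentions two solvers for the NP problem: a DC-programming batch method for kernel machines and a stochastic gradient method for large-scale data. As a rough illustration of the online flavor only, not the paper's algorithm, the sketch below trains a linear classifier by SGD on a hinge surrogate and raises a Lagrange-style multiplier whenever the surrogate false-positive loss exceeds the budget alpha; the surrogate, step sizes, and all names are assumptions.

```python
import numpy as np

# Illustrative online Neyman-Pearson-style linear classifier (NOT the
# paper's algorithm): minimize a false-negative surrogate on the positive
# class subject to a false-positive budget alpha on the negative class,
# enforced with a multiplier updated on the fly. Hinge surrogate and
# step sizes are assumptions made for this sketch.

def hinge(z):
    return max(0.0, 1.0 - z)

def hinge_subgrad(z):
    return -1.0 if z < 1.0 else 0.0

def online_np_classifier(stream, dim, alpha=0.05, eta_w=0.01, eta_lam=0.01):
    """stream yields (x, y) with y in {+1, -1}; errors on the -1 class
    are the ones being capped at (roughly) alpha."""
    w = np.zeros(dim)
    lam = 1.0                                # multiplier on the constraint
    for x, y in stream:
        margin = y * np.dot(w, x)
        g = hinge_subgrad(margin) * y * x    # subgradient of hinge(y * w.x)
        if y == +1:
            w -= eta_w * g                   # objective: false negatives
        else:
            w -= eta_w * lam * g             # constraint: false positives
            lam = max(0.0, lam + eta_lam * (hinge(margin) - alpha))
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def stream(n=5000):
        for _ in range(n):
            y = 1 if rng.random() < 0.5 else -1
            yield rng.normal(loc=0.7 * y, size=2), y
    print("learned weights:", online_np_classifier(stream(), dim=2))
```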
A Neural Network Approach for Online Nonlinear Neyman-Pearson Classification
[article]
2020
arXiv
pre-print
As a result, we obtain an expedited online adaptation and powerful nonlinear Neyman-Pearson modeling. ...
We propose a novel Neyman-Pearson (NP) classifier that is both online and nonlinear, for the first time in the literature. ...
Hence, we construct our method called "NP-NN" in Algorithm 1 that can be used in real time for online nonlinear Neyman-Pearson classification. ...
arXiv:2006.08001v2
fatcat:wk6hjb5uirfo5ecijzneib2ede
A Neural Network Approach for Online Nonlinear Neyman-Pearson Classification
2020
IEEE Access
Hence, we construct our method called "NP-NN" in Algorithm 1 that can be used in real time for online nonlinear Neyman-Pearson classification. ...
SLFN FOR ONLINE NONLINEAR NP CLASSIFICATION In order to learn nonlinear Neyman-Pearson classification boundaries, we use a single hidden layer feedforward neural network (SLFN), illustrated in Fig. 1, that ...
doi:10.1109/access.2020.3039724
fatcat:4yossxy3afhqlkuy5mkcchmbom
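Both versions of this paper describe learning nonlinear NP boundaries with a single hidden layer feedforward network. The following is a minimal sketch of that general setup, not the paper's NP-NN (Algorithm 1): a fixed random tanh hidden layer with output weights updated online, plus a multiplier that pushes the surrogate error on the negative class toward a target alpha. All hyperparameters and names are assumptions.

```python
import numpy as np

# Minimal SLFN sketch for online nonlinear NP-style classification
# (NOT the paper's NP-NN / Algorithm 1): random fixed hidden layer,
# online hinge updates on the output weights, and a multiplier that
# caps the surrogate error on the negative class near alpha.

class TinySLFN:
    def __init__(self, dim, hidden=32, alpha=0.05, eta=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(hidden, dim))   # fixed random hidden layer
        self.b = rng.normal(scale=0.1, size=hidden)
        self.v = np.zeros(hidden)                 # trainable output weights
        self.lam = 1.0                            # multiplier on -1 errors
        self.alpha, self.eta = alpha, eta

    def features(self, x):
        return np.tanh(self.W @ x + self.b)

    def predict(self, x):
        return 1 if self.v @ self.features(x) >= 0.0 else -1

    def update(self, x, y):
        h = self.features(x)
        margin = y * (self.v @ h)
        if margin < 1.0:                          # hinge surrogate is active
            weight = 1.0 if y == +1 else self.lam
            self.v += self.eta * weight * y * h
        if y == -1:
            loss = max(0.0, 1.0 - margin)
            self.lam = max(0.0, self.lam + self.eta * (loss - self.alpha))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    net = TinySLFN(dim=2)
    for _ in range(2000):
        y = 1 if rng.random() < 0.5 else -1
        net.update(rng.normal(loc=0.8 * y, size=2), y)
    print("prediction at (1, 1):", net.predict(np.array([1.0, 1.0])))
```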
Minimax and Neyman-Pearson Meta-Learning for Outlier Languages
[article]
2021
arXiv
pre-print
To increase its robustness to outlier languages, we create two variants of MAML based on alternative criteria: Minimax MAML reduces the maximum risk across languages, while Neyman-Pearson MAML constrains ...
We report gains for their average and minimum performance across low-resource languages in zero- and few-shot settings, compared to joint multi-source transfer and vanilla MAML. ...
Acknowledgements We thank the reviewers for their valuable feedback. Rahul Aralikatte and Anders Søgaard are funded by a Google Focused Research Award. ...
arXiv:2106.01051v1
fatcat:7mtaaat3e5auvowumypc3vzr5u
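The abstract contrasts vanilla MAML's average risk with a minimax variant that targets the maximum risk across languages. The toy below illustrates only that aggregation difference, not the paper's implementation: uniform weighting of per-language meta-gradients versus a softmax weighting concentrated on the worst language. The losses, gradients, and temperature are made up for the example.

```python
import numpy as np

# Toy contrast between vanilla MAML's outer-loop aggregation (uniform
# average over languages) and a minimax-style aggregation that up-weights
# the language with the largest meta-loss. Synthetic numbers; not the
# paper's code.

def vanilla_weights(losses):
    return np.full(len(losses), 1.0 / len(losses))

def minimax_weights(losses, temperature=0.1):
    # soft approximation of "put all weight on the worst language"
    z = np.asarray(losses) / temperature
    z -= z.max()
    w = np.exp(z)
    return w / w.sum()

if __name__ == "__main__":
    lang_losses = [0.4, 1.7, 0.6]            # per-language meta-losses
    lang_grads = np.array([[0.1, -0.2],      # per-language meta-gradients
                           [0.9,  0.4],
                           [0.2,  0.1]])
    for name, w in [("vanilla", vanilla_weights(lang_losses)),
                    ("minimax", minimax_weights(lang_losses))]:
        print(name, "weights:", np.round(w, 3),
              "meta-gradient:", np.round(w @ lang_grads, 3))
```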
Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm
[article]
2022
arXiv
pre-print
Under mild conditions, we show that this algorithm exhibits O(K^(-1/2)) expected convergence rates for both objective reduction and constraint violation if parameters in the algorithm are properly chosen ...
This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. ...
Acknowledgments The authors would like to thank the anonymous reviewers and the associate editor for the valuable comments and suggestions that helped us to greatly improve the quality of the paper. ...
arXiv:2106.11577v3
fatcat:ilrz7ikbljgallrpda6iwrova4
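The abstract describes a hybrid of stochastic approximation and the proximal method of multipliers for problems with expectation constraints. The sketch below is only a loose illustration of that style of update, not the paper's algorithm: it alternates a stochastic gradient step on an augmented Lagrangian with a multiplier ascent step on a toy problem with one linear expectation constraint. The penalty rho, the O(1/sqrt(k)) step size, and the toy problem itself are assumptions.

```python
import numpy as np

# Illustrative stochastic augmented-Lagrangian loop for
#     min_x  E[f(x, xi)]   s.t.  E[g(x, xi)] <= 0,
# alternating a stochastic gradient step on x with multiplier ascent.
# Toy problem: f(x, xi) = ||x - xi||^2,  g(x, xi) = x_1 + x_2 - 1 + noise.
# Not the paper's algorithm; rho and the step-size schedule are assumptions.

def stochastic_aug_lagrangian(K=20000, rho=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, lam = np.zeros(2), 0.0
    for k in range(1, K + 1):
        xi = rng.normal(size=2)                  # one sample per iteration
        step = 1.0 / np.sqrt(k)                  # O(1/sqrt(k)) step size
        g_val = x.sum() - 1.0 + 0.1 * xi[0]      # stochastic constraint value
        grad_f = 2.0 * (x - xi)                  # gradient of ||x - xi||^2
        grad_g = np.ones(2)                      # gradient of the constraint
        grad_L = grad_f + (lam + rho * max(0.0, g_val)) * grad_g
        x -= step * grad_L                       # primal descent step
        lam = max(0.0, lam + step * g_val)       # dual (multiplier) ascent
    return x, lam

if __name__ == "__main__":
    x, lam = stochastic_aug_lagrangian()
    print("x:", np.round(x, 3), "lambda:", round(float(lam), 3))
```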
Proximally Constrained Methods for Weakly Convex Optimization with Weakly Convex Constraints
[article]
2019
arXiv
pre-print
Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. ...
Each subproblem can be solved by various algorithms for strongly convex optimization. ...
Although the algorithm by [53] also works well in these instances, it doesn't have any theoretical guarantee for nonconvex constrained optimization problems. ...
arXiv:1908.01871v2
fatcat:jb6hdexc5rhc5gqvfw3lntotei
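The snippet notes that each subproblem "can be solved by various algorithms for strongly convex optimization". The reason is the proximal regularization: adding (rho/2)||x - x_k||^2 with rho larger than the weak-convexity modulus makes every subproblem strongly convex. The sketch below shows that outer/inner structure on an unconstrained weakly convex toy; the objective, rho, and iteration counts are assumptions, and the weakly convex constraints treated in the paper are omitted for brevity.

```python
import numpy as np

# Proximal-point outer loop for a weakly convex toy objective: each inner
# subproblem adds (rho/2)||x - center||^2, which makes it strongly convex,
# so plain gradient descent (a stand-in for "any strongly convex solver")
# suffices. Objective, rho, and iteration counts are assumptions.

def f(x):
    # weakly convex toy: Hessian eigenvalues lie in [-1, 3]
    return 0.5 * x @ x + 2.0 * np.sin(x).sum()

def grad_f(x):
    return x + 2.0 * np.cos(x)

def proximal_point(x0, rho=2.0, outer=50, inner=100, eta=0.1):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        center = x.copy()
        for _ in range(inner):                     # strongly convex subproblem
            g = grad_f(x) + rho * (x - center)     # grad of f + proximal term
            x -= eta * g
    return x

if __name__ == "__main__":
    x = proximal_point(np.array([3.0, -2.0]))
    print("approximate stationary point:", np.round(x, 4),
          "f(x):", round(float(f(x)), 4))
```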
Kernel-Based Learning for Statistical Signal Processing in Cognitive Radio Networks: Theoretical Foundations, Example Applications, and Future Directions
2013
IEEE Signal Processing Magazine
He has published widely in the areas of signal processing for wireless communications and networking. ...
The authors would like to thank the reviewers for their constructive comments and helpful suggestions, and Dr. Nansai Hu. ... He is also the cochair of the IEEE Nanjing Section. ...
online learning [9]. ...
doi:10.1109/msp.2013.2251071
fatcat:gsz5mc6nbjcdzezobmqo5wmx2q
Persistency of Excitation for Robustness of Neural Networks
[article]
2019
arXiv
pre-print
When an online learning algorithm is used to estimate the unknown parameters of a model, the signals interacting with the parameter estimates should not decay too quickly for the optimal values to be discovered ...
While training a neural network, the iterative optimization algorithm involved also creates an online learning problem, and consequently, correct estimation of the optimal parameters requires persistent ...
trade-off between Type-I and Type-II errors, the ROC curve, and the Neyman-Pearson rule (Poor, 2013; Keener, 2011). ...
arXiv:1911.01043v1
fatcat:5my5dsn2bvgadnadmx4gdrqj5y
Patterns, predictions, and actions: A story about machine learning
[article]
2021
arXiv
pre-print
This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. ...
Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and ...
The Neyman-Pearson Lemma The Neyman-Pearson Lemma, a fundamental lemma of decision theory, will be an important tool for us to establish three important facts. ...
arXiv:2102.05242v2
fatcat:wy47g4fojnfuxngklyewtjtqdi
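For reference, the Neyman-Pearson Lemma mentioned in this textbook snippet has the following standard statement (paraphrased from the general literature, not quoted from the book): among all tests of H_0 against H_1 whose false-positive probability is at most alpha, a likelihood-ratio threshold test maximizes the detection probability.

```latex
% Standard form of the Neyman-Pearson Lemma (general-literature statement,
% not a quotation from the book). Randomization on the boundary
% {p_1(x)/p_0(x) = eta} may be needed to attain level alpha exactly.
\[
  \delta^\star(x) \;=\;
  \begin{cases}
    1, & \text{if } \dfrac{p_1(x)}{p_0(x)} > \eta, \\[4pt]
    0, & \text{if } \dfrac{p_1(x)}{p_0(x)} < \eta,
  \end{cases}
  \qquad \eta \text{ chosen so that } \mathbb{P}_0\!\left[\delta^\star(X) = 1\right] \le \alpha,
\]
\[
  \text{and for every test } \delta \text{ with } \mathbb{P}_0\!\left[\delta(X) = 1\right] \le \alpha:\quad
  \mathbb{P}_1\!\left[\delta^\star(X) = 1\right] \;\ge\; \mathbb{P}_1\!\left[\delta(X) = 1\right].
\]
```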
A Comprehensive Survey on Local Differential Privacy
2020
Security and Communication Networks
Furthermore, we present the current state of research on LDP, including private statistical learning/inference, private statistical data analysis, privacy amplification techniques for LDP, and some ...
Finally, we identify future research directions and open challenges for LDP. ...
Moreover, they proposed an Online Convex Optimization framework that is used to design and analyze the algorithms for training machine learning models. Hoeven et al. ...
doi:10.1155/2020/8829523
fatcat:xjk3vgyambb5xioc2q5hyr2hua
IEEE Robotics & Automation Society
2012
IEEE robotics & automation magazine
The fusion center then combines these messages to arrive at an optimal decision in the Neyman-Pearson framework. ...
An offline (batch) algorithm that combines particle filtering and expectation maximization is introduced for the identification of such systems. ...
doi:10.1109/mra.2012.2230568
fatcat:33actbknxrel3jnag2kx7cncem
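The snippet above sketches decentralized detection: local sensors send messages to a fusion center, which makes an optimal decision in the Neyman-Pearson framework. As a generic illustration, not the article's specific scheme, a common fusion rule sums local log-likelihood ratios and compares the sum to a threshold calibrated for the target false-alarm rate; the Gaussian sensor model and Monte Carlo calibration below are assumptions.

```python
import numpy as np

# Generic NP-style fusion rule (NOT the article's specific scheme): each
# sensor reports a log-likelihood ratio (LLR) for H1 vs H0, the fusion
# center sums them and thresholds the sum at a level calibrated so the
# false-alarm probability under H0 stays below alpha. Gaussian sensor
# model and Monte Carlo calibration are assumptions for this example.

def sensor_llrs(x, mu=1.0, sigma=1.0):
    # LLR of N(mu, sigma^2) versus N(0, sigma^2) for each observation
    return (mu * x - 0.5 * mu ** 2) / sigma ** 2

def calibrate_threshold(n_sensors, alpha=0.05, n_mc=100_000, seed=0):
    # pick tau so that P(sum of LLRs > tau | H0) is roughly alpha
    rng = np.random.default_rng(seed)
    h0 = rng.normal(0.0, 1.0, size=(n_mc, n_sensors))
    return np.quantile(sensor_llrs(h0).sum(axis=1), 1.0 - alpha)

def fusion_decision(observations, tau):
    return int(sensor_llrs(np.asarray(observations)).sum() > tau)

if __name__ == "__main__":
    tau = calibrate_threshold(n_sensors=5)
    print("decide H1?", fusion_decision([1.2, 0.8, 1.5, 0.3, 1.1], tau))
```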
IEEE Robotics & Automation Society
2011
IEEE robotics & automation magazine
The fusion center then combines these messages to arrive at an optimal decision in the Neyman-Pearson framework. ...
An offline (batch) algorithm that combines particle filtering and expectation maximization is introduced for the identification of such systems. ...
doi:10.1109/mra.2011.941112
fatcat:owvu2behc5hulpcae2dp5myigm
[IEEE Robotics & Automation Society]
2012
IEEE robotics & automation magazine
The fusion center then combines these messages to arrive at an optimal decision in the Neyman-Pearson framework. ...
An offline (batch) algorithm that combines particle filtering and expectation maximization is introduced for the identification of such systems. ...
doi:10.1109/mra.2012.2229854
fatcat:rjrxtwk4jbcgjpvjdad6mougsq
IEEE Robotics & Automation Society
2011
IEEE robotics & automation magazine
The fusion center then combines these messages to arrive at an optimal decision in the Neyman-Pearson framework. ...
An offline (batch) algorithm that combines particle filtering and expectation maximization is introduced for the identification of such systems. ...
doi:10.1109/mra.2011.943480
fatcat:d2wvloyv6jcbzp2yathd52mx2u