278 Hits in 4.1 sec

Kernel LMS algorithm with forward-backward splitting for dictionary learning

Wei Gao, Jie Chen, Cédric Richard, Jianguo Huang, Rémi Flamary
2013 2013 IEEE International Conference on Acoustics, Speech and Signal Processing  
A characteristic of kernel-based techniques is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications.  ...  It is surprising to note that most existing strategies for dictionary update can only incorporate new elements into the dictionary.  ...  Forward-backward splitting is an efficient method for minimizing empirical risk with sparse regularization, which was originally derived for offline learning.  ... 
doi:10.1109/icassp.2013.6638763 dblp:conf/icassp/GaoCRHF13 fatcat:u63v6wxmhnb3tnekwhmjsiyozi

Online dictionary learning for kernel LMS. Analysis and forward-backward splitting algorithm [article]

Wei Gao and Jie Chen and Cédric Richard and Jianguo Huang
2013 arXiv   pre-print
We introduce a kernel least-mean-square algorithm with L1-norm regularization to automatically perform this task.  ...  Adaptive filtering algorithms operating in reproducing kernel Hilbert spaces have demonstrated superiority over their linear counterpart for nonlinear system identification.  ...  KLMS ALGORITHM WITH FORWARD-BACKWARD SPLITTING We shall now introduce a KLMS-type algorithm based on forward-backward splitting, which can automatically update the dictionary in an online way by discarding  ... 
arXiv:1306.5310v1 fatcat:r3uiai6qubaw5gdfgnidi5chwq

Online Dictionary Learning for Kernel LMS

2014 IEEE Transactions on Signal Processing  
A subgradient approach was considered to accomplish this task, which contrasts with the more efficient forward-backward splitting algorithm recommended in [33], [34].  ...  The optimization procedures consist of subgradient descent [42], projection onto the ℓ1-ball [43], or online forward-backward splitting [44].  ...  KLMS ALGORITHM WITH FORWARD-BACKWARD SPLITTING We shall now introduce a KLMS-type algorithm based on forward-backward splitting, which can automatically update the dictionary in an online way by discarding  ... 
doi:10.1109/tsp.2014.2318132 fatcat:jxl5374myjcbxifxh2griwnuie
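The online forward-backward splitting idea described in this result (gradient step on the instantaneous error, then an ℓ1 proximal step that zeroes out and discards negligible dictionary entries) can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm; all function names, the Gaussian kernel choice, and the step-size/regularization values are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def klms_fb_step(alpha, D, x, d, eta=0.1, lam=0.01, sigma=1.0):
    """One illustrative forward-backward KLMS-type update.

    alpha : kernel-expansion coefficients, D : dictionary (rows are centers),
    x, d  : new input and desired output, eta : step size, lam : l1 weight.
    """
    # Gaussian kernel evaluations between x and all dictionary centers
    k = np.exp(-np.sum((D - x) ** 2, axis=1) / (2 * sigma ** 2))
    e = d - alpha @ k                          # instantaneous prediction error
    alpha = alpha + eta * e * k                # forward (gradient) step
    alpha = soft_threshold(alpha, eta * lam)   # backward (proximal) step for l1
    keep = np.abs(alpha) > 0                   # discard centers zeroed by the prox
    return alpha[keep], D[keep]
```

Because the proximal step sets small coefficients exactly to zero, the corresponding centers can be removed, which is what allows the dictionary to shrink online rather than only grow.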

Anisotropic Gaussian kernel adaptive filtering by Lie-group dictionary learning

Tomoya Wada, Kosuke Fukumori, Toshihisa Tanaka, Simone Fiori, Jie Zhang
2020 PLoS ONE  
The present paper proposes a novel kernel adaptive filtering algorithm, where each Gaussian kernel is parameterized by a center vector and a symmetric positive definite (SPD) precision matrix, which is  ...  the increase of dimensionality of the dictionary.  ...  Acknowledgments This work is supported by JSPS KAKENHI Grant Number 17H01760 and National Center for Theoretical Sciences (NCTS), Taiwan, through a 2016 "Research in Pairs" program.  ... 
doi:10.1371/journal.pone.0237654 pmid:32797071 fatcat:gxb7fqipl5cdrmj42yxj7z4mwu

Adaptive Random Fourier Features Kernel LMS [article]

Wei Gao, Jie Chen, Cédric Richard, Wentao Shi, Qunfei Zhang
2022 arXiv   pre-print
We propose the adaptive random Fourier features Gaussian kernel LMS (ARFF-GKLMS).  ...  Simulation results confirm that the proposed algorithm achieves a performance improvement in terms of convergence rate, error at steady-state and tracking ability over other kernel adaptive filters with  ...  We will also apply a forward-backward splitting framework to eliminate the features with negligible contribution to the estimation performance.  ... 
arXiv:2207.07236v1 fatcat:qh4drkihvzdfhjamuuc35vaxdy
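The random Fourier feature idea underlying ARFF-GKLMS — replacing the growing kernel expansion with a fixed-dimensional feature map whose inner products approximate the Gaussian kernel — can be checked numerically. A minimal sketch, not the paper's adaptive variant; the dimensions and bandwidth are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, W, b):
    """Random Fourier feature map whose inner products approximate
    the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

d, D, sigma = 3, 2000, 1.0
W = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # spectral samples of the kernel
b = rng.uniform(0.0, 2 * np.pi, size=D)          # random phases

x = rng.normal(size=(1, d))
y = rng.normal(size=(1, d))
approx = float(rff_map(x, W, b) @ rff_map(y, W, b).T)
exact = float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))
```

An LMS filter can then run directly on the fixed-size features (`w += eta * e * z`), which avoids the dictionary growth problem that kernel expansions suffer from.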

Beyond Human-Level Accuracy: Computational Challenges in Deep Learning [article]

Joel Hestness, Newsha Ardalani, Greg Diamos
2019 arXiv   pre-print
Our characterization reveals an important segmentation of DL training challenges for recurrent neural networks (RNNs) that contrasts with prior studies of deep convolutional networks.  ...  Deep learning (DL) research yields accuracy and product improvements from both model architecture changes and scale: larger data sets and models, and more computation.  ...  The backprop for matrix operations usually has twice the algorithmic FLOPs of the forward traversal.  ... 
arXiv:1909.01736v1 fatcat:rynrlgxtznamzgjz5bsry4xp6i
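The 2× backprop-to-forward FLOP ratio mentioned in the snippet above can be verified with a quick count for a single dense layer (the shapes below are illustrative):

```python
# Forward pass y = x @ W with x: (m, k) and W: (k, n) costs one matmul.
m, k, n = 64, 512, 256
forward = 2 * m * k * n                        # multiply-add pairs of one matmul

# Backward pass needs two matmuls of the same size:
#   dx = dy @ W.T   and   dW = x.T @ dy
backward = 2 * m * k * n + 2 * m * k * n

assert backward == 2 * forward
```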

Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers [article]

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, Adrian Weller
2020 arXiv   pre-print
It is also backwards-compatible with pre-trained regular Transformers. We demonstrate its effectiveness on the challenging task of protein sequence modeling and provide detailed theoretical analysis.  ...  In response, solutions that exploit the structure and sparsity of the learned attention matrix have blossomed.  ...  Acknowledgements We thank Afroz Mohiuddin, Wojciech Gajewski, Nikita Kitaev, and Lukasz Kaiser for multiple discussions on the Transformer.  ... 
arXiv:2006.03555v3 fatcat:omyixqddrzbkplyuxqvs5suhyu

Recognizing and Interpreting Sign Language Gesture for Human Robot Interaction

Shekhar Singh, Akshat Jain, Deepak Kumar
2012 International Journal of Computer Applications  
This paper describes a sign language gesture based recognition, interpreting and imitation learning system using Indian Sign Language for performing Human Robot Interaction in real time.  ...  It permits us to construct a convenient sign language gesture based communication with humanoid robot.  ...  (LM) as the training algorithm.  ... 
doi:10.5120/8247-1758 fatcat:5n3zwmlrfzgdxj4nyu7xyilrum

Sparse Nonlinear MIMO Filtering and Identification [chapter]

G. Mileounis, N. Kalouptsidis
2013 Signals and Communication Technology  
In this chapter system identification algorithms for sparse nonlinear multi input multi output (MIMO) systems are developed.  ...  These algorithms are potentially useful in a variety of application areas including digital transmission systems incorporating power amplifier(s) along with multiple antennas, cognitive processing, adaptive  ...  Acknowledgements This research has been co-financed by the European Union (European Social Fund -ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National  ... 
doi:10.1007/978-3-642-38398-4_7 fatcat:a7awuvl2prfvfkylo2za6cprz4

Design and Implementation of a Multithreaded Virtual Machine for Executing Linear Logic Programs

Flavio Cruz, Ricardo Rocha, Seth Copen Goldstein
2014 Proceedings of the 16th International Symposium on Principles and Practice of Declarative Programming - PPDP '14  
execution, and database organization for efficient fact insertion, lookup and deletion.  ...  Linear Meld is a concurrent forward-chaining linear logic programming language where logical facts can be asserted and retracted in a structured way.  ...  Acknowledgments We thank the anonymous reviewers for their insightful comments and suggestions that helped us improve the quality of the paper.  ... 
doi:10.1145/2643135.2643150 dblp:conf/ppdp/CruzRG14 fatcat:bosr2enpb5fi3gpsx24ek7jj5e

Opinion-Mining on Marglish and Devanagari Comments of YouTube Cookery Channels Using Parametric and Non-Parametric Learning Models

Sonali Rajesh Shah, Abhishek Kaushik, Shubham Sharma, Janice Shah
2020 Big Data and Cognitive Computing  
Several machine-learning models are applied on the dataset along with 3 different vectorizing techniques.  ...  Multilayer Perceptron and Bernoulli Naïve Bayes are considered to be the best performing algorithms. 10-fold cross-validation and statistical testing was also carried out on the dataset to confirm the  ...  Bi-LSTM is a special type of LSTM in which information is propagated in both directions, forward and backward. Bi-LSTM outperforms the other models with an accuracy of 90%.  ... 
doi:10.3390/bdcc4010003 fatcat:giqp6kkxgjfqhpeb55vxjeflcu

Bregmanized Nonlocal Regularization for Deconvolution and Sparse Reconstruction

Xiaoqun Zhang, Martin Burger, Xavier Bresson, Stanley Osher
2010 SIAM Journal on Imaging Sciences  
We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems.  ...  The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.  ...  Martin Burger and Stanley Osher thank Fondazione CIME for a summer school in stimulating atmosphere, initiating a part of this project.  ... 
doi:10.1137/090746379 fatcat:ylqkqhag3jdw7hrnexmajchnim

Optimization methods for MR image reconstruction (long version) [article]

Jeffrey A Fessler
2019 arXiv   pre-print
The development of compressed sensing methods for magnetic resonance (MR) image reconstruction led to an explosion of research on models and optimization algorithms for MR imaging (MRI).  ...  This review paper summarizes several key models and optimization algorithms for MR image reconstruction, including both the type of methods that have FDA approval for clinical use, as well as more recent  ...  The classical approach for (7) is the iterative soft thresholding algorithm (ISTA) [65], also known as the proximal gradient method (PGM) [66] and proximal forward-backward splitting [67], having  ... 
arXiv:1903.03510v2 fatcat:djzypf3obzg6vgdcb4fnb53zku
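The ISTA / proximal-gradient method named in this snippet is the same forward-backward splitting that recurs throughout these results. A textbook sketch for the ℓ1-regularized least-squares problem, with illustrative names and parameters:

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Iterative soft-thresholding (proximal gradient / forward-backward
    splitting) for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # forward (gradient) step
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # backward (prox) step
    return x
```

Each iteration alternates a gradient step on the smooth data-fit term with the closed-form proximal step of the ℓ1 penalty, which is what makes the scheme attractive for sparse reconstruction.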

Deep Speech: Scaling up end-to-end speech recognition [article]

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates and Andrew Y. Ng
2014 arXiv   pre-print
We do not need a phoneme dictionary, nor even the concept of a "phoneme."  ...  We present a state-of-the-art speech recognition system developed using end-to-end deep learning.  ...  Acknowledgments We are grateful to Jia Lei, whose work on DL for speech at Baidu has spurred us forward, for his advice and support throughout this project.  ... 
arXiv:1412.5567v2 fatcat:cfqvlbcrbbh23ingwt4zmnz2ka

Generalized Gaussian Kernel Adaptive Filtering [article]

Tomoya Wada, Kosuke Fukumori, Toshihisa Tanaka, Simone Fiori
2018 arXiv   pre-print
The kernel adaptive filtering algorithm is established together with an ℓ1-regularized least squares to avoid overfitting and the increase of dimensionality of the dictionary.  ...  Different from conventional kernel adaptive filters, the proposed regressor is a superposition of Gaussian kernels with all different parameters, which makes such regressor more flexible.  ...  However, since Θ(n) is a convex function, the forward-backward splitting [26] may be applied.  ... 
arXiv:1804.09348v1 fatcat:zymbrs3bxfhihjfevwv3xsr67u