
Rectified Factor Networks [article]

Djork-Arné Clevert, Andreas Mayr, Thomas Unterthiner, Sepp Hochreiter
2015 arXiv   pre-print
RFN learning is a generalized alternating minimization algorithm derived from the posterior regularization method which enforces non-negative and normalized posterior means.  ...  We propose rectified factor networks (RFNs) to efficiently construct very sparse, non-linear, high-dimensional representations of the input.  ...  Rectified Factor Network Our goal is to construct representations of the input that (1) are sparse, (2) are non-negative, (3) are non-linear, (4) use many code units, and (5) model structures in the  ... 
arXiv:1502.06464v2 fatcat:vkbozsi2hbfz7jc2nm2dochg6q
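
The constraint named in the snippet — non-negative, normalized posterior means — can be illustrated with a toy projection step. This is a minimal sketch, not the paper's algorithm; the exact normalization RFNs use may differ from the per-unit second-moment scaling assumed here.

```python
import numpy as np

def rectify_and_normalize(mu, eps=1e-8):
    """Toy projection onto the constraints the abstract names:
    posterior means that are (1) non-negative and (2) normalized
    per hidden unit. The unit-second-moment normalization is an
    assumption, not necessarily the paper's exact choice."""
    mu = np.maximum(mu, 0.0)                       # rectification
    scale = np.sqrt((mu ** 2).mean(axis=0)) + eps  # per-unit scale
    return mu / scale
```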

Locally Masked Convolution for Autoregressive Models [article]

Ajay Jain, Pieter Abbeel, Deepak Pathak
2020 arXiv   pre-print
For tasks such as image completion, these models are unable to use much of the observed context.  ...  Our code is available at https://ajayjain.github.io/lmconv.  ...  Acknowledgements We thank Paras Jain, Nilesh Tripuraneni, Joseph Gonzalez and Jonathan Ho for helpful discussions, and reviewers for helpful suggestions.  ... 
arXiv:2006.12486v3 fatcat:wbz2rnvhtjcepja7vfifcp4xey
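
A naive reference implementation of the idea named in the title — a convolution whose weight mask varies per output location — is sketched below. The mask layout and the loop-based evaluation are illustrative assumptions; the paper's implementation is vectorized.

```python
import numpy as np

def locally_masked_conv2d(x, weight, masks):
    """Naive reference for a convolution whose weight mask differs at
    every output location. Shapes are assumptions for this sketch:
      x:      (C_in, H, W) input
      weight: (C_out, C_in, k, k) shared kernel
      masks:  (H, W, k, k) binary mask applied at each location
    """
    c_out, c_in, k, _ = weight.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + k, j:j + k]   # (C_in, k, k) receptive field
            w = weight * masks[i, j]          # location-specific masking
            out[:, i, j] = (w * patch).sum(axis=(1, 2, 3))
    return out
```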

Blind Recognition of Forward Error Correction Codes Based on a Depth Distribution Algorithm

Fan Mei, Hong Chen, Yingke Lei
2021 Symmetry  
However, owing to information asymmetry, the receiver cannot know which type of channel coding was previously used in non-cooperative systems such as cognitive radio and remote sensing of communication.  ...  The proposed algorithm can effectively recognize linear block codes, convolutional codes, and Turbo codes at low error probability levels, and has higher robustness to noisy transmission environments  ...  Acknowledgments: The authors would like to express their thanks to Lu Chen from the National University of Defense Technology for her valuable comments on this paper.  ... 
doi:10.3390/sym13061094 fatcat:uyrxsf3lfnbwjkjzo3jwpjoguy

On Convolutional Coupled Codes

Slim Chaoui
2004 AEU - International Journal of Electronics and Communications  
WEF: Weight Enumerating Function; LLR: Log-Likelihood Ratio; MAP: Maximum A Posteriori; ML: Maximum Likelihood; NSC: Non-Systematic Convolutional; PCC: Parallel Concatenated Codes; PSTC: Partial Systematic Turbo  ...  Since the interleaving length is normally very large, maximum-likelihood decoding would be of astronomical complexity and is thus out of the question.  ...  The BCJR algorithm [2] is an optimal algorithm for calculating the a posteriori probabilities of symbols encoded with a convolutional code and transmitted  ... 
doi:10.1078/1434-8411-54100229 fatcat:hcwdc3xotrdetbjnz6kp2qscuu
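
The snippet cites the BCJR algorithm; its forward-backward recursions take the standard textbook form below (notation assumed, not copied from the paper: $s$ ranges over trellis states, $u_k$ is the information symbol, and $\gamma_k(s',s) = P(s_k = s,\, y_k \mid s_{k-1} = s')$ is the branch metric).

```latex
\begin{aligned}
\alpha_k(s)     &= \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s',s)
                 && \text{(forward recursion)} \\
\beta_{k-1}(s') &= \sum_{s} \beta_k(s)\,\gamma_k(s',s)
                 && \text{(backward recursion)} \\
P(u_k = u \mid \mathbf{y}) &\propto \sum_{(s',s)\,:\,u_k = u}
                 \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)
                 && \text{(a posteriori probability)}
\end{aligned}
```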

Discriminative Dictionary Learning based on Statistical Methods [article]

G. Madhuri, Atul Negi
2021 arXiv   pre-print
There is scope for improvement in this direction, and many researchers have used statistical methods to design dictionaries for classification.  ...  We use a simple three-layer Multi-Layer Perceptron with back-propagation training as a classifier, with those sparse codes as input.  ...  Non-negative Matrix Factorization (NMF) has been used for dimension reduction in [1].  ... 
arXiv:2111.09027v1 fatcat:zzdz2m3rvfauxjw4eu2mgpk5mu
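
The classifier described — a three-layer MLP trained with back-propagation, taking sparse codes as input — has an off-the-shelf equivalent. The sketch below is runnable with fabricated stand-in codes; the shapes, sparsity level, hidden width, and class count are all assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in sparse codes: in the paper these come from a learned
# dictionary; here we fabricate a (n_samples, n_atoms) sparse matrix.
codes = rng.random((600, 64)) * (rng.random((600, 64)) < 0.1)
labels = rng.integers(0, 10, size=600)

# Three-layer MLP (input, one hidden layer, output) trained with
# backprop, as the abstract describes. Width 128 is our choice.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
clf.fit(codes[:500], labels[:500])
print("toy accuracy:", clf.score(codes[500:], labels[500:]))
```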

Underdetermined Blind Source Separation Combining Tensor Decomposition and Nonnegative Matrix Factorization

Yuan Xie, Kan Xie, Junjie Yang, Shengli Xie
2018 Symmetry  
In the proposed algorithm, we first employ tensor decomposition to estimate the mixing matrix, and then use the NMF source model to estimate the source spectrogram factors.  ...  In this paper, we propose an effective algorithm that combines tensor decomposition and nonnegative matrix factorization (NMF).  ...  Additionally, NMF aims to decompose a non-negative factor matrix into the product of two low-rank non-negative factor matrices [12, 13] .  ... 
doi:10.3390/sym10100521 fatcat:iosydmmpofh4dk7snafmlenc6m

Non-negative mixtures [chapter]

M.D. Plumbley, A. Cichocki, R. Bro
2010 Handbook of Blind Source Separation  
In this chapter we discuss some algorithms for the use of non-negativity constraints in unmixing problems, including positive matrix factorization (PMF) [71], non-negative matrix factorization (NMF), and  ...  their combination with other unmixing methods such as non-negative ICA and sparse non-negative matrix factorization.  ...  Acknowledgements MP is supported by EPSRC Leadership Fellowship EP/G007144/1 and EU FET-Open project FP7-ICT-225913 "Sparse Models, Algorithms, and Learning for Large-scale data (SMALL)".  ... 
doi:10.1016/b978-0-12-374726-6.00018-7 fatcat:tmy5i4jpvnb27fwjlxnpiw3dxi
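
As background for the NMF-family methods this chapter surveys, the best-known baseline is Lee and Seung's multiplicative update rule for the Frobenius objective ||V − WH||². A minimal runnable sketch follows; random initialization and the iteration count are arbitrary choices, and this is plain NMF, not the chapter's sparse or ICA-combined variants.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H under a
    Frobenius loss. V must be element-wise non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # keeps H non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # keeps W non-negative
    return W, H
```

Because each update multiplies by a ratio of non-negative terms, W and H remain non-negative throughout, which is the defining constraint of NMF.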

Non-Negative Matrix Factorization with Sparsity Learning for Single Channel Audio Source Separation [chapter]

Bin Gao, W.L. Woo
2012 Independent Component Analysis for Audio and Biosignal Applications  
Adaptive sparsity two-dimensional non-negative matrix factorization In this section, we derive a new factorization method, termed the adaptive sparsity two-dimensional non-negative matrix factorization  ...  Conclusion The chapter has presented an adaptive strategy for sparsifying the non-negative matrix factorization.  ... 
doi:10.5772/48068 fatcat:2kouiovn2fbu7ecmeg3p42mfti

Space-Time Bit-Interleaved Coded Modulation for OFDM Systems

I. Lee, A.M. Chan, C.-E.W. Sundberg
2004 IEEE Transactions on Signal Processing  
Bit-interleaved coded modulation gives good diversity gains with higher order modulation schemes using well-known binary convolutional codes on a single transmit and receive antenna link.  ...  By using orthogonal frequency division multiplexing (OFDM), wideband transmission can be achieved over frequency-selective fading radio channels without adaptive equalizers.  ...  Since E is a non-negative definite Hermitian matrix, we have an eigendecomposition E = P^H ΛP, where P is a unitary matrix and Λ is a real diagonal matrix.  ... 
doi:10.1109/tsp.2003.822350 fatcat:vqydv6chwff75aevhcdpwucvv4
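
The quoted eigendecomposition can be checked numerically. The sketch below builds a non-negative definite Hermitian matrix as A^H A and verifies E = P^H ΛP with real, non-negative eigenvalues; the matrix size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
E = A.conj().T @ A                 # Hermitian, non-negative definite

lam, U = np.linalg.eigh(E)         # eigh: E = U @ diag(lam) @ U^H
P = U.conj().T                     # so E = P^H @ diag(lam) @ P
assert np.all(lam >= -1e-12)       # real, non-negative eigenvalues
assert np.allclose(E, P.conj().T @ np.diag(lam) @ P)
```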

Superpixel Tensor Pooling for Visual Tracking using Multiple Midlevel Visual Cues Fusion

Chong Wu, Le Zhang, Jiawang Cao, Hong Yan
2019 IEEE Access  
Then, for each superpixel, it encodes different mid-level cues, including HSI color, RGB color, and spatial coordinates, into a histogram matrix to construct a new feature space.  ...  Then incremental learning of the positive and negative subspaces is performed.  ...  If the maximum likelihood > 0, store the tensor J with the maximum likelihood in the updating sequence and use the particle filter (PF) to draw negative samples to update the negative subspace; if the algorithm reaches the update rate u, use  ... 
doi:10.1109/access.2019.2946939 fatcat:iv5mxdk26bgqzkuc6scrjr3zmy
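
A toy version of the feature construction the abstract describes: per superpixel, histogram each mid-level cue channel and stack the histograms into a matrix. The channel layout, bin count, and normalization below are guesses for illustration, not the paper's specification.

```python
import numpy as np

def superpixel_histogram_matrix(cues, n_bins=8):
    """Build a histogram matrix for one superpixel.
    `cues` is a (n_pixels, n_channels) array whose columns hold the
    mid-level cues (e.g. HSI, RGB, normalized spatial coordinates);
    the column order and bin count are our assumptions."""
    rows = []
    for c in range(cues.shape[1]):
        h, _ = np.histogram(cues[:, c], bins=n_bins, range=(0.0, 1.0))
        rows.append(h / max(h.sum(), 1))   # normalized histogram row
    return np.stack(rows)                  # (n_channels, n_bins) matrix
```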

Matrix factorisation methods applied in microarray data analysis

Andrew V. Kossenkov, Michael F. Ochs
2010 International Journal of Data Mining and Bioinformatics  
Broadly speaking, these methods define a series of mathematical approaches to matrix factorization that differ in how the model is fitted to the data.  ...  Non-negative Matrix Factorization First introduced by Lee and Seung for feature recognition in images [33], non-negative matrix factorization (NMF) was adopted for the analysis of gene expression data by  ...  The key assumption is non-negativity of the underlying signals, which is reasonable for single-color expression data and non-log-transformed ratio expression data, since there are no negative copies of  ... 
doi:10.1504/ijdmb.2010.030968 pmid:20376923 pmcid:PMC2998896 fatcat:nhoq7fem65bk7ptjnxiwoqtpqe

2020 Index IEEE Transactions on Signal Processing Vol. 68

2020 IEEE Transactions on Signal Processing  
One-Step Prediction for Discrete Time-Varying Nonlinear Systems With Unknown Inputs and Correlated Noises; TSP  ...  TSP 2020 3312-3324. Quaternion Non-Negative Matrix Factorization: Definition, Uniqueness, and Algorithm.  ...  TSP 2020 1229-1242. Quaternion Non-Negative Matrix Factorization: Definition, Uniqueness, and Algorithm.  ... 
doi:10.1109/tsp.2021.3055469 fatcat:6uswtuxm5ba6zahdwh5atxhcsy

ROSE: a deep learning based framework for predicting ribosome stalling [article]

Sai Zhang, Hailin Hu, Jingtian Zhou, Xuan He, Tao Jiang, Jianyang Zeng
2016 bioRxiv   pre-print
ROSE provides an effective index to estimate the likelihood of translational pausing at codon resolution and understand diverse putative regulatory factors of ribosome stalling.  ...  We present a deep learning based framework, called ROSE, to accurately predict ribosome stalling events in translation elongation from coding sequences based on high-throughput ribosome profiling data.  ...  After convolution and rectification, we reduce the dimension of matrix Y using the max pooling operation, which computes the maximum value within a scanning window of size three and step size two.  ... 
doi:10.1101/067108 fatcat:a2rjx5sem5hppfoe3x337gscfu
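
The pooling step quoted at the end of the snippet — the maximum over a scanning window of size three with step size two — is easy to state concretely. A one-dimensional sketch (function name and 1D framing are ours; ROSE scans coding sequences, so 1D is the natural case):

```python
import numpy as np

def max_pool_1d(y, window=3, stride=2):
    """Max pooling: slide a window of the given size along the
    rectified feature map with the given stride, keeping the max."""
    n = (len(y) - window) // stride + 1
    return np.array([y[i * stride : i * stride + window].max()
                     for i in range(n)])

# e.g. max_pool_1d(np.array([1., 5., 2., 0., 3., 4., 1.]))
# -> array([5., 3., 4.])
```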

Learning Deep Representation Without Parameter Inference for Nonlinear Dimensionality Reduction [article]

Xiao-Lei Zhang
2014 arXiv   pre-print
Restricted Boltzmann machines, sparse coding, regularized auto-encoders, and convolutional neural networks are pioneering building blocks of deep learning.  ...  have different numbers of hidden units; (ii) the model of each expert is a k-center clustering, whose k centers are just uniformly sampled examples, and whose output (i.e., the hidden units) is a sparse code  ...  Suppose we have a data set {x_i}_{i=1}^n; substituting equation (2) into (1) and taking the negative logarithm of (1) results in the following maximum likelihood estimation problem: min_{{h_{v,i}}_{i=1}^n}  ... 
arXiv:1308.4922v2 fatcat:3j3yiq6lwfbbtlixqc52kkml64
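
The expert model the snippet sketches — k centers that are simply uniformly sampled examples, with the hidden units forming a sparse code — might look as follows. The similarity measure and the top-k sparsification are our guesses, not the paper's definitions.

```python
import numpy as np

def kcenter_sparse_code(X, k, top=5, seed=0):
    """One expert: k centers uniformly sampled from the data X
    (shape (n, d)); the output code keeps only the `top` largest
    similarities per example, zeroing the rest (our sparsification)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    # negative squared distance as a similarity, shape (n, k)
    sim = -((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    code = np.zeros_like(sim)
    idx = np.argsort(sim, axis=1)[:, -top:]
    np.put_along_axis(code, idx, np.take_along_axis(sim, idx, 1), axis=1)
    return code
```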

The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks [article]

Jakub Swiatkowski, Kevin Roth, Bastiaan S. Veeling, Linh Tran, Joshua V. Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
2020 arXiv   pre-print
For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibit strong low-rank structure after  ...  Variational Bayesian Inference is a popular methodology for approximating posterior distributions over Bayesian neural network weights.  ...  We are then left with what is called the negative Evidence Lower Bound (negative ELBO): L(q) = D_KL[q_θ(w) || p(w)] − E_q[log p(y|w, x)]. (2) In practice, the expectation of the log-likelihood p(y|w, x  ... 
arXiv:2002.02655v2 fatcat:wvstcfucfnfw7oibg4isnw4hte
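
Equation (2) in the snippet, the negative ELBO for Gaussian mean-field variational inference, has a closed-form KL term when the prior is a standard normal. A sketch follows; the standard-normal prior and the Monte Carlo treatment of the likelihood term are assumptions.

```python
import numpy as np

def negative_elbo(mu, sigma, log_lik_samples):
    """Negative ELBO L(q) = KL[q || p] - E_q[log p(y|w, x)] for a
    fully factorized posterior q(w) = N(mu, diag(sigma^2)) against a
    standard normal prior p(w) = N(0, I).
    `log_lik_samples` are Monte Carlo draws of log p(y|w, x), w ~ q."""
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over weights.
    kl = np.sum(np.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5)
    return kl - np.mean(log_lik_samples)
```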