
Multichannel Speech Enhancement [chapter]

Lino Garcia, Soledad Torres-Guijarro
2008 New Developments in Robotics Automation and Control  
Another class of algorithms employs the sparseness of the speech signal to design better inversion strategies and to identify the minimum-norm solution.  ...  A predictor structure is a linear weighting of a finite number of past input samples used to estimate or predict the current input sample (see the sketch following this entry).  ... 
doi:10.5772/6291 fatcat:gympckrwszhnng5xkvkr3omb7q
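
A minimal sketch of the predictor structure described in the entry above: the current sample is estimated as a linear weighting of a finite number of past samples, with the weights fitted by least squares. The function names, the AR(2) toy signal, and the predictor order are illustrative assumptions, not taken from the chapter.

import numpy as np

def fit_linear_predictor(x, order):
    # Least-squares fit of weights w so that x[n] ~= sum_k w[k] * x[n-1-k].
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    X = np.asarray(rows)                          # (N - order, order) past samples
    w, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return w

def predict(x, w):
    # Predict each sample from its `order` predecessors; first samples stay 0.
    order = len(w)
    pred = np.zeros_like(x, dtype=float)
    for n in range(order, len(x)):
        pred[n] = np.dot(w, x[n - order:n][::-1])
    return pred

# Toy usage: a synthetic second-order autoregressive signal.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for n in range(2, len(x)):
    x[n] = 1.5 * x[n - 1] - 0.8 * x[n - 2] + 0.1 * rng.standard_normal()
w = fit_linear_predictor(x, order=2)
residual = x - predict(x, w)                      # prediction error signal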

Multichannel Online Dereverberation based on Spectral Magnitude Inverse Filtering [article]

Xiaofei Li, Laurent Girin, Sharon Gannot, Radu Horaud
2019 arXiv   pre-print
Finally, the inverse filtering is applied to the STFT magnitude of the microphone signals, obtaining an estimate of the STFT magnitude of the source signal.  ...  Instead of the complex-valued CTF convolution model, we use a nonnegative convolution model between the STFT magnitude of the source signal and the CTF magnitude, which is just a coarse approximation of  ...  In the linear-predictive multi-input equalization (LIME) algorithm [21], the speech source signal is estimated as the multichannel linear prediction residual, which, however, is excessively whitened.  ...  (A sketch of the nonnegative magnitude convolution model follows this entry.)
arXiv:1812.08471v2 fatcat:ovqtilym65atnos6kg4qdrafv4
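
A sketch of the nonnegative magnitude-domain convolution model mentioned in the entry above: per frequency bin, the microphone STFT magnitude is approximated by convolving the source STFT magnitude with the CTF magnitude along the frame axis. Array names and shapes are assumptions for illustration; the paper then inverts this model (spectral magnitude inverse filtering), which is not shown here.

import numpy as np

def ctf_magnitude_model(S_mag, C_mag):
    # S_mag: (K, L) source STFT magnitude (K frequency bins, L frames).
    # C_mag: (K, P) CTF magnitude, P frames long per bin.
    # Returns the modelled microphone STFT magnitude, shape (K, L).
    K, L = S_mag.shape
    X_mag = np.zeros((K, L))
    for k in range(K):
        # Nonnegative 1-D convolution along frames, truncated to L frames.
        X_mag[k] = np.convolve(S_mag[k], C_mag[k])[:L]
    return X_mag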

Speech dereverberation with multi-channel linear prediction and sparse priors for the desired signal

Ante Jukic, Toon van Waterschoot, Timo Gerkmann, Simon Doclo
2014 2014 4th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA)  
In this paper we focus on a blind method for speech dereverberation based on the multi-channel linear prediction model in the short-time Fourier domain, where the parameters of the model are estimated  ...  Experimental evaluation, employing a parametric complex generalized Gaussian prior for the desired speech signal, shows that instrumentally predicted speech quality can be improved compared to the conventional  ...  One class of methods is based on blind identification of the room impulse responses (RIRs) between the source and the microphone array, followed by multichannel equalization [2].  ... 
doi:10.1109/hscma.2014.6843244 dblp:conf/hscma/JukicWGD14 fatcat:eskfpefh5fa5hl7pyws3wujgly
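
A rough sketch of multi-channel linear prediction (MCLP) dereverberation in the STFT domain for one frequency bin, in the spirit of the model used in the entry above: the desired signal is the prediction residual, and the prediction filters are re-estimated with per-frame weights that act like a simplified sparsity-promoting prior. Variable names, the delay, the filter order and the plain power weighting are assumptions; the paper's complex generalized Gaussian prior is not reproduced exactly.

import numpy as np

def mclp_dereverb_bin(X, delay=3, order=10, iters=3, eps=1e-8):
    # X: (M, L) complex STFT coefficients of one frequency bin (M channels).
    # Returns the dereverberated reference-channel signal, shape (L,).
    M, L = X.shape
    D = M * order
    # Stack delayed multichannel observations into Xbar (D, L).
    Xbar = np.zeros((D, L), dtype=complex)
    for tau in range(order):
        shift = delay + tau
        Xbar[tau * M:(tau + 1) * M, shift:] = X[:, :L - shift]
    d = X[0].copy()
    for _ in range(iters):
        var = np.maximum(np.abs(d) ** 2, eps)        # per-frame weights
        A = Xbar / var                               # weighted regressors
        R = A @ Xbar.conj().T                        # weighted correlation matrix
        p = A @ X[0].conj()                          # weighted cross-correlation
        g = np.linalg.solve(R + eps * np.eye(D), p)  # prediction filters
        d = X[0] - g.conj() @ Xbar                   # prediction residual
    return d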

Separation of Moving Sound Sources Using Multichannel NMF and Acoustic Tracking [article]

Joonas Nikunen, Aleksandr Diment, Tuomas Virtanen
2017 arXiv   pre-print
In this paper we propose a method for separation of moving sound sources.  ...  We propose a novel multichannel NMF model with time-varying mixing of the sources denoted by spatial covariance matrices (SCM) and provide update equations for optimizing model parameters minimizing squared  ...  Multichannel NMF model with time-variant mixing: the proposed algorithm uses multichannel NMF for source spectrogram estimation and it is based on alternating estimation of the source magnitude spectrograms ŝ  ... 
arXiv:1710.10005v1 fatcat:wi5t6xm3s5affh7aayvxsfwnb4
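
For the spectrogram-factorization part mentioned above, here is a plain single-channel NMF with multiplicative updates minimizing the squared Frobenius error; the paper additionally ties the factors to time-varying spatial covariance matrices (SCMs) and tracked source directions, which is omitted here. Names and the random initialization are assumptions.

import numpy as np

def nmf_euclidean(V, rank, iters=200, eps=1e-9):
    # V: nonnegative magnitude (or power) spectrogram, shape (F, T).
    # Returns W (F, rank) and H (rank, T) with V ~= W @ H.
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H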

Multichannel Online Dereverberation based on Spectral Magnitude Inverse Filtering

Xiaofei Li, Laurent Girin, Sharon Gannot, Radu Horaud
2019 IEEE/ACM Transactions on Audio Speech and Language Processing  
In the linear-predictive multi-input equalization (LIME) algorithm [21], the speech source signal is estimated as the multichannel linear prediction residual, which, however, is excessively whitened.  ...  To avoid this whitening effect, a prediction delay is used in the delayed linear prediction techniques [22], [23] (see the worked comparison after this entry).  ... 
doi:10.1109/taslp.2019.2919183 fatcat:hx3wtab7b5fuhkn3amb657stmm
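
A compact way to state the difference described above, with notation assumed for illustration (x_1 the reference channel, \mathbf{x} the stacked multichannel past samples, \mathbf{g}_k the prediction filters): LIME predicts from the immediately preceding samples, so the residual also removes the short-term correlation of the speech itself (whitening), whereas delayed linear prediction skips the first D lags and leaves the direct path and early speech structure intact.

\hat{s}_{\mathrm{LIME}}(n) = x_1(n) - \sum_{k=1}^{L_g} \mathbf{g}_k^{\mathsf{T}} \mathbf{x}(n-k),
\qquad
\hat{d}(n) = x_1(n) - \sum_{k=D}^{L_g} \mathbf{g}_k^{\mathsf{T}} \mathbf{x}(n-k), \quad D > 1.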

Immersive Audio Schemes

Yiteng Huang, Jingdong Chen, Jacob Benesty
2011 IEEE Signal Processing Magazine  
This need offers great opportunities for multichannel acoustic and speech signal processing, and for new ideas for the voice communication services infrastructure.  ...  After more than a century of accelerated advances in telecommunication technologies, people are no longer satisfied with talking to someone over a long distance and in real time.  ...  He is a coauthor and coeditor of six books and was an associate editor for IEEE Signal Processing Letters and EURASIP Journal on Applied Signal Processing.  ... 
doi:10.1109/msp.2010.938754 fatcat:hupszrodcvak5byb4ds7xygcf4

A Geometric Approach to Sound Source Localization from Time-Delay Estimates

Xavier Alameda-Pineda, Radu Horaud
2014 IEEE/ACM Transactions on Audio Speech and Language Processing  
This paper addresses the problem of sound-source localization from time-delay estimates using arbitrarily-shaped non-coplanar microphone arrays.  ...  The geometric analysis, stemming from the direct acoustic propagation model, leads to necessary and sufficient conditions for a set of time delays to correspond to a unique position in the source space  ...  In the following, we delineate a criterion for multichannel time delay estimation (Section V-A), which will subsequently be used in Section V-B to cast the TDE-SSL problem into a non-linear multivariate  ... 
doi:10.1109/taslp.2014.2317989 fatcat:kphdvb7zobdtjnjb6452n7tzeu
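
A brute-force sketch of the direct-propagation idea in the entry above: a candidate source position implies a set of pairwise time delays, and localization picks the position whose predicted delays best match the measured ones. The grid search stands in for the paper's non-linear multivariate optimization; names and the speed-of-sound constant are assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at room temperature

def predicted_delays(source, mics):
    # Pairwise time delays implied by direct propagation from `source`
    # (shape (3,)) to each microphone in `mics` (shape (M, 3)).
    dist = np.linalg.norm(mics - source, axis=1)
    M = len(mics)
    return np.array([(dist[i] - dist[j]) / SPEED_OF_SOUND
                     for i in range(M) for j in range(i + 1, M)])

def localize_grid(measured_delays, mics, grid):
    # Pick the grid point whose predicted delays best match the measured
    # delays in the least-squares sense.
    errors = [np.sum((predicted_delays(p, mics) - measured_delays) ** 2)
              for p in grid]
    return grid[int(np.argmin(errors))]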

Computational methods for underdetermined convolutive speech localization and separation via model-based sparse component analysis

Afsaneh Asaei, Hervé Bourlard, Mohammad J. Taghizadeh, Volkan Cevher
2016 Speech Communication  
A model-based sparse component analysis framework is formulated for sparse reconstruction of the speech spectra in a reverberant acoustic, resulting in joint localization and separation of the individual  ...  In this paper, the problem of speech source localization and separation from recordings of convolutive underdetermined mixtures is studied.  ...  remarks to improve the quality and clarity of the manuscript.  ... 
doi:10.1016/j.specom.2015.07.002 fatcat:vud5w5xj7vaendljl5oi6wlz3m
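
As a generic stand-in for the sparse-recovery step underlying sparse component analysis, here is orthogonal matching pursuit for real-valued data: it greedily selects dictionary columns and re-fits the coefficients by least squares. The paper's model-based solver exploits additional spatial/spectral structure that this sketch ignores; names are assumptions.

import numpy as np

def omp(A, y, sparsity):
    # Greedy sparse recovery of x (at most `sparsity` nonzeros) from
    # y ~= A @ x, for a real-valued dictionary A of shape (m, n).
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0              # do not reselect atoms
        support.append(int(np.argmax(correlations)))
        As = A[:, support]
        coeffs, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coeffs
    x[support] = coeffs
    return x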

Blind Separation and Dereverberation of Speech Mixtures by Joint Optimization

Takuya Yoshioka, Tomohiro Nakatani, Masato Miyoshi, Hiroshi G. Okuno
2011 IEEE Transactions on Audio, Speech, and Language Processing  
This paper proposes a method for performing blind source separation (BSS) and blind dereverberation (BD) at the same time for speech mixtures.  ...  The proposed method uses a network, in which dereverberation and separation networks are connected in tandem, to estimate source signals.  ...  Chen, and the anonymous reviewers for the valuable comments and helpful suggestions.  ... 
doi:10.1109/tasl.2010.2045183 fatcat:jmwbwpavlvbx3eu2rq6bu67dzu
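
The tandem structure described in the entry above can be mimicked with generic building blocks: first reduce late reverberation per channel with delayed linear prediction, then separate the dereverberated channels with an instantaneous ICA. This is only a crude sketch of the idea; the paper optimizes both networks jointly and handles convolutive mixing properly, and the function names and parameters below are assumptions.

import numpy as np
from sklearn.decomposition import FastICA

def delayed_lp_dereverb(x, delay=32, order=256):
    # Subtract the component of x that is linearly predictable from samples
    # at least `delay` samples in the past (a proxy for late reverberation).
    N = len(x)
    X = np.zeros((N, order))
    for k in range(order):
        shift = delay + k
        X[shift:, k] = x[:N - shift]
    g, *_ = np.linalg.lstsq(X, x, rcond=None)
    return x - X @ g

def dereverb_then_separate(mixtures):
    # mixtures: (M, N) array of M microphone signals.
    derev = np.stack([delayed_lp_dereverb(ch) for ch in mixtures])
    ica = FastICA(n_components=mixtures.shape[0], random_state=0)
    return ica.fit_transform(derev.T).T          # (M, N) estimated sources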

Digital Signal Processing for Hearing Instruments

Heinz G. Göckler, Henning Puder, Hugo Fastl, Sven Erik Nordholm, Torsten Dau, Walter Kellermann
2009 EURASIP Journal on Advances in Signal Processing  
The database provides a tool for the evaluation of multichannel hearing aid algorithms in hearing aid research.  ...  advanced digital filtering and filter banks, as well as speech processing and enhancement devised for modern speech transmission and recognition.  ...  This approach exploits the sparseness of speech signals, that is, the fact that speech may not be present at all times and at all frequencies, which is not taken into account by a typical SDW-MWF.  ... 
doi:10.1155/2009/898576 fatcat:boqoodm23jdc3jhgbiux2m4kce
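
A toy illustration of the sparsity argument in the entry above (speech is absent in many time-frequency bins): estimate the noise PSD from frames assumed speech-free, derive a crude per-bin speech-presence weight, and use it to attenuate a simple Wiener-like gain. This is not the SDW-MWF itself; the names, the noise-frame assumption and the gain rule are illustrative.

import numpy as np

def sparsity_weighted_gain(X_mag, noise_frames=10, eps=1e-12):
    # X_mag: (K, L) noisy STFT magnitude; the first `noise_frames` frames
    # are assumed to contain noise only.
    noise_psd = np.mean(X_mag[:, :noise_frames] ** 2, axis=1, keepdims=True)
    snr_post = X_mag ** 2 / (noise_psd + eps)          # a posteriori SNR
    presence = snr_post / (1.0 + snr_post)             # crude presence weight
    wiener = np.maximum(1.0 - 1.0 / (snr_post + eps), 0.0)
    return presence * wiener * X_mag                   # enhanced magnitude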

Multi-channel speech processing architectures for noise robust speech recognition: 3rd CHiME challenge results

Lukas Pfeifenberger, Tobias Schrank, Matthias Zohrer, Martin Hagmuller, Franz Pernkopf
2015 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)  
We study variants of beamformers used for pre-processing multi-channel speech recordings. In particular, we investigate three variants of generalized sidelobe canceller (GSC) beamformers, i.e.  ...  DPFs outperformed our baseline systems significantly when measuring the overall perceptual score (OPS) and the perceptual evaluation of speech quality (PESQ).  ...  It can be detected for each time frame l by searching over a small set of possible delays using $\tau_{\mathrm{OPT}}(l) = \arg\max_{\tau} \frac{1}{K}\sum_{k=0}^{K} \xi_{\tau}(k,l)$.  ... 
doi:10.1109/asru.2015.7404830 dblp:conf/asru/PfeifenbergerSZ15 fatcat:mapxngwq7jeirlxsn5zpvn4r4i
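
The frame-wise delay selection in the formula above reduces to an argmax over a small candidate set after averaging the per-bin criterion over frequency; a minimal sketch, with the criterion array xi and its shape assumed:

import numpy as np

def best_delay_per_frame(xi):
    # xi: (num_delays, K, L) values of the criterion xi_tau(k, l) evaluated
    # on a small set of candidate delays.  Returns, for each frame l, the
    # index of the delay maximizing the frequency-averaged criterion.
    avg_over_bins = xi.mean(axis=1)          # (num_delays, L)
    return np.argmax(avg_over_bins, axis=0)  # (L,)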

2020 Index IEEE Signal Processing Letters Vol. 27

2020 IEEE Signal Processing Letters  
., +, LSP 2020 2159-2163 Robust Multipath Time-Delay Estimation of Broadband Source Using a Vertical Line Array in Deep Water. A Sparse Conjugate Gradient Adaptive Filter.  ...  Kim, Y., +, LSP 2020.  ... 
doi:10.1109/lsp.2021.3055468 fatcat:wfdtkv6fmngihjdqultujzv4by

Design of large polyphase filters in the Quadratic Residue Number System

Gian Carlo Cardarilli, Alberto Nannarelli, Yann Oster, Massimo Petricca, Marco Re
2010 2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers  
In this work, we assume that a small number of sources generate the multichannel observations according to a linear mixture model.  ...  In source separation, if the number of sources is larger than the number of mixtures, estimation of the mixing matrix, and consequently of the sources, often relies on single-dominant-component sparse component analysis  ... 
doi:10.1109/acssc.2010.5757589 fatcat:ccxnu5owr5fyrcjcqukumerueq
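
On the topic named in the title above, a generic textbook polyphase-decimation structure (not the paper's Quadratic Residue Number System implementation): the FIR filter is split into M sub-filters that all run at the low output rate, which is what makes large polyphase filters attractive in hardware. Function and variable names are assumptions.

import numpy as np

def polyphase_decimate(x, h, M):
    # Decimate x by M with FIR filter h, computed in polyphase form:
    # y[n] = sum_k h[k] x[nM - k], with h split into M sub-filters
    # h_m[q] = h[qM + m] that each operate at the low rate.
    h = np.concatenate([h, np.zeros((-len(h)) % M)])
    x = np.concatenate([x, np.zeros((-len(x)) % M)])
    branches = []
    for m in range(M):
        hm = h[m::M]                                   # m-th sub-filter
        if m == 0:
            xm = x[::M]                                # samples x[nM]
        else:
            xm = np.concatenate([[0.0], x[M - m::M]])  # samples x[nM - m]
        branches.append(np.convolve(xm, hm))
    n_out = min(len(b) for b in branches)
    return sum(b[:n_out] for b in branches)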

Multichannel Identification and Nonnegative Equalization for Dereverberation and Noise Reduction Based on Convolutive Transfer Function

Xiaofei Li, Sharon Gannot, Laurent Girin, Radu Horaud
2018 IEEE/ACM Transactions on Audio Speech and Language Processing  
As the SNR decreases, WPE also exhibits larger spectral distortion due to the inaccuracy of linear prediction and spectral subtraction.  ...  The multichannel impulse response dataset [62] was measured using an 8-channel linear microphone array in the speech and acoustic lab of Bar-Ilan University.  ... 
doi:10.1109/taslp.2018.2839362 fatcat:afl2zvzgtzddpnvxspioj2wsnu
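
A per-frequency sketch of what "nonnegative equalization" can look like under the CTF magnitude model (single channel shown; the paper is multichannel and adds noise reduction): build the convolution matrix of the CTF magnitudes and solve a nonnegative least-squares problem for the source magnitude. Shapes and names are assumptions.

import numpy as np
from scipy.optimize import nnls

def nonneg_magnitude_equalization(x_mag, c_mag):
    # x_mag: (L,) microphone STFT magnitude of one frequency bin.
    # c_mag: (P,) CTF magnitude of the same bin.
    # Solves min_{s >= 0} || x_mag - C @ s ||_2 with C the banded
    # convolution matrix, and returns s of length L.
    L, P = len(x_mag), len(c_mag)
    C = np.zeros((L, L))
    for p in range(P):
        idx = np.arange(p, L)
        C[idx, idx - p] = c_mag[p]        # x[l] receives c[p] * s[l - p]
    s, _ = nnls(C, x_mag)
    return s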

Acoustic Self-Awareness of Autonomous Systems in a World of Sounds

Alexander Schmidt, Heinrich W. Lollmann, Walter Kellermann
2020 Proceedings of the IEEE  
As a first step, the state of the art of relevant generic techniques for acoustic scene analysis (ASA) is reviewed, i.e., source localization and the various facets of signal enhancement, including spatial  ...  Not only generic methods for robust source localization and signal extraction but also specific models and estimation methods for ego-noise based on various learning techniques are discussed.  ...  As a second class of BSS algorithms, binary masking (see [110]-[112]) relies on the assumption that audio signals are sparse in the time-frequency plane so that, for any time-frequency bin tf, only  ... 
doi:10.1109/jproc.2020.2977372 fatcat:immaqhfnkna6xdwj3dqlh7qewi
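
A toy sketch of the binary-masking assumption stated above (one dominant source per time-frequency bin): assign each bin of the mixture STFT to the source with the largest estimated magnitude and reconstruct by masking. Inputs and names are assumptions; obtaining the per-source magnitude estimates is the hard part and is not shown.

import numpy as np

def binary_mask_separation(X_mix, source_mag_estimates):
    # X_mix: (K, L) complex mixture STFT.
    # source_mag_estimates: list of (K, L) estimated source magnitudes.
    mags = np.stack(source_mag_estimates)        # (num_sources, K, L)
    winner = np.argmax(mags, axis=0)             # dominant source per bin
    return [(winner == i) * X_mix                # masked source STFTs
            for i in range(mags.shape[0])]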