26,026 Hits in 4.2 sec

Speech Data Compression Using Vector Quantization

H. B. Kekre, Tanuja K. Sarode
2008 Zenodo  
Transforms are mostly used for speech data compression; these are lossy algorithms.  ...  Such algorithms are acceptable for speech data compression since the loss in quality is not perceived by the human ear.  ...  In this paper we propose this algorithm for speech data compression along with LBG and KPE, and the comparative performance of these algorithms is given.  ... 
doi:10.5281/zenodo.1085721 fatcat:tih63u2we5aahecbvbtiw57hkm
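The entry above relies on codebook-based vector quantization of speech frames. Below is a minimal sketch of LBG-style codebook training in NumPy, assuming 4-sample frames, a 16-entry codebook, and a synthetic signal; it is illustrative only and does not reproduce the paper's KPE variant or its experimental setup.

```python
# Minimal LBG-style vector quantization sketch (illustrative, not the paper's
# KPE/LBG implementations). Speech samples are grouped into short vectors and
# a codebook is grown by splitting and refined by nearest-neighbour assignment,
# as in the classic Linde-Buzo-Gray algorithm.
import numpy as np

def lbg_codebook(vectors, size, eps=1e-3, iters=20):
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # split every codevector into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # assign each training vector to its nearest codevector
            dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            # recompute centroids (keep the old codevector for empty cells)
            for k in range(codebook.shape[0]):
                if np.any(labels == k):
                    codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook

# toy usage: 4-sample frames of a synthetic "speech" signal, 16-entry codebook
signal = np.sin(0.01 * np.arange(8000)) + 0.05 * np.random.randn(8000)
frames = signal.reshape(-1, 4)
cb = lbg_codebook(frames, size=16)
indices = np.linalg.norm(frames[:, None, :] - cb[None, :, :], axis=2).argmin(axis=1)
print(cb.shape, indices[:10])  # transmit codebook indices instead of raw samples
```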

Page 63 of Dsh Abstracts Vol. 10, Issue 1 [page]

1970 Dsh Abstracts  
Detection of ventricular landmarks by two-dimensional ultrasonography. J. Neurol. Neurosurg. Psychiat., 31, 1968, 232-244.  ...  The advantages and limitations of two-dimensional ultrasonic measurement are discussed in relation to one-dimensional ultrasonography. 22 references.—R. T. Wertz 286. EWING, G. D.  ... 

Using Holographically Compressed Embeddings in Question Answering [article]

Salvador E. Barbosa
2020 arXiv   pre-print
This research employs holographic compression of pre-trained embeddings to represent a token, its part-of-speech, and its named entity type in the same number of dimensions used to represent the token alone.  ...  In question answering, parts-of-speech and named entity types are important, but encoding these attributes in neural models expands the size of the input.  ...  This paper describes experiments carried out to evaluate the use of HRR to compress each token, its part-of-speech, and its named entity type (if any), into a single 300-dimensional vector.  ... 
arXiv:2007.07287v1 fatcat:dt255ezmu5fijlhks33jdl3kua
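The packing described in the snippet is the standard Holographic Reduced Representation (HRR) binding operation. The sketch below shows circular-convolution binding and approximate unbinding with NumPy FFTs; the role vectors, fillers, and random initialisation are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of HRR binding via circular convolution: a token vector, a
# part-of-speech binding and an entity-type binding are superposed into one
# vector of the same dimension (here 300, as in the abstract).
import numpy as np

def bind(a, b):
    # circular convolution implemented with FFTs
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # approximate inverse: circular correlation with a
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

d = 300
rng = np.random.default_rng(0)
token, pos_role, ner_role = (rng.normal(0, 1 / np.sqrt(d), d) for _ in range(3))
pos_filler = rng.normal(0, 1 / np.sqrt(d), d)   # e.g. a NOUN vector (assumed)
ner_filler = rng.normal(0, 1 / np.sqrt(d), d)   # e.g. a PERSON vector (assumed)

# still 300-dimensional, yet carries token + POS + NER information
packed = token + bind(pos_role, pos_filler) + bind(ner_role, ner_filler)

# noisy reconstruction of the POS filler; cosine similarity should be clearly positive
recovered = unbind(packed, pos_role)
cos = recovered @ pos_filler / (np.linalg.norm(recovered) * np.linalg.norm(pos_filler))
print(round(float(cos), 2))
```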

Speech signal compression and encryption based on sudoku, fuzzy C-means and threefish cipher

Iman Qays Abduljaleel, Amal Hameed Khaleel
2021 International Journal of Electrical and Computer Engineering (IJECE)  
The resulting compressed speech is then used as input to a scrambling algorithm proposed at two levels.  ...  The speech signal is compressed, after removing low and less intense frequencies, to produce a well-compressed speech signal while preserving speech quality.  ...  There are two main techniques used to compress the data: lossless compression and lossy compression [4].  ... 
doi:10.11591/ijece.v11i6.pp5049-5059 fatcat:4a37mhkh3re7lhvih5pqyf7qmi
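The abstract describes a compress-then-scramble pipeline. The sketch below only illustrates that shape of pipeline under simplified assumptions: weak DFT coefficients are zeroed ("removing low and less intense frequencies") and the survivors are shuffled with a keyed permutation. The paper's actual Sudoku-matrix, fuzzy C-means, and Threefish stages are not reproduced here.

```python
# Illustrative sketch only: drop low-magnitude DFT coefficients, then scramble
# what remains with a keyed permutation. Frame length, threshold quantile and
# the permutation "key" are all assumptions for the example.
import numpy as np

rng = np.random.default_rng(42)
speech = rng.standard_normal(1024)               # stand-in for a speech frame

spectrum = np.fft.rfft(speech)
keep = np.abs(spectrum) >= np.quantile(np.abs(spectrum), 0.75)
compressed = spectrum * keep                      # zero out weak frequencies

perm = rng.permutation(compressed.size)           # permutation acting as the "key"
scrambled = compressed[perm]

# receiver side: undo the permutation, then invert the transform
unscrambled = np.empty_like(scrambled)
unscrambled[perm] = scrambled
reconstructed = np.fft.irfft(unscrambled, n=speech.size)
print(np.allclose(reconstructed, np.fft.irfft(compressed, n=speech.size)))  # True
```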

Combined Speech Compression and Encryption using Contourlet Transform and Compressive Sensing

Maher K.M., Ali M.
2016 International Journal of Computer Applications  
Speech compression is the process of converting human speech signals into a form that is compact and reliable for communication and storage by reducing the size of the data without losing the quality of the  ...  sparse structure) that is one of the most important aspects of compressive sensing theory. General Terms: speech compression, speech encryption.  ...  Therefore, traditional methods of compression and encryption performed the two processes with two separate algorithms. This paper explains how to perform both processes in one step (a single algorithm).  ... 
doi:10.5120/ijca2016909295 fatcat:ico7xw7ypnamtj4rvazc5gbw7i
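The compressive-sensing idea the entry builds on is that a sparse signal can be recovered from far fewer random measurements than samples. A minimal sketch follows, assuming a synthetic k-sparse vector and using scikit-learn's Orthogonal Matching Pursuit as a stand-in for whatever recovery algorithm the paper uses.

```python
# Minimal compressive-sensing sketch: measure a sparse signal with a random
# matrix, y = Phi @ x, and recover it from m << n measurements.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                  # signal length, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)   # k-sparse vector (e.g. transform coefficients)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # m measurements instead of n samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error
```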

Adaptive Speech Compression Based on Discrete Wave Atoms Transform

Bousselmi Souha, Aloui Nouredine, Cherif Adnane
2016 International Journal of Electrical and Computer Engineering (IJECE)  
This paper proposes a new adaptive speech compression system based on discrete wave atoms transform.  ...  The results of current work are compared with wavelet based compression by using objective criteria, namely CR, SNR, PSNR and NRMSE.  ...  In the literature, speech compression algorithms are split into two main categories: lossless compression and lossy compression.  ... 
doi:10.11591/ijece.v6i5.10826 fatcat:egiukzr3iva6ha4jtawmsheb4q
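The objective criteria named in the snippet (CR, SNR, PSNR, NRMSE) are easy to compute once a reconstruction is available. The sketch below uses common definitions of these metrics; exact normalisations vary between papers, so treat the formulas as one reasonable choice rather than the paper's.

```python
# Compression ratio, SNR, PSNR and NRMSE for an original signal x and its
# reconstruction x_hat; kept/total count retained vs. original coefficients.
import numpy as np

def compression_metrics(x, x_hat, kept, total):
    err = x - x_hat
    cr = total / kept                                          # compression ratio
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))     # dB
    psnr = 10 * np.log10(np.max(np.abs(x)) ** 2 / np.mean(err ** 2))  # dB
    nrmse = np.sqrt(np.mean(err ** 2) / np.mean(x ** 2))       # one common normalization
    return cr, snr, psnr, nrmse

x = np.sin(np.linspace(0, 20 * np.pi, 4000))
x_hat = x + 0.01 * np.random.default_rng(1).standard_normal(x.size)
print(compression_metrics(x, x_hat, kept=1000, total=4000))
```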

A source and channel-coding framework for vector-based data hiding in video

D. Mukherjee, Jong Jin Chae, S.K. Mitra
2000 IEEE transactions on circuits and systems for video technology (Print)  
The quality of the extracted video and speech is shown for varying compression ratios of the host video.  ...  The host video with the embedded data is H.263 compressed, before attempting retrieval of the hidden video and speech from the reconstructed video.  ...  General Comments In this section, we present the implementation details of two example applications: hiding video in video and hiding speech in video [4] , both of which allow extraction of hidden data  ... 
doi:10.1109/76.845009 fatcat:ni7urxqwbrertn66imijjza674

Auditory model based modified MFCC features

Saikat Chatterjee, W. Bastiaan Kleijn
2010 2010 IEEE International Conference on Acoustics, Speech and Signal Processing  
Along with the use of an optimized static function to compress a set of filter bank energies, we propose to use a memory-based adaptive compression function to incorporate the behavior of human auditory  ...  We show that a significant improvement in automatic speech recognition (ASR) performance is obtained for any environmental condition, clean as well as noisy.  ...  In our method, the FBEs are processed through the use of two compression stages: static and adaptive.  ... 
doi:10.1109/icassp.2010.5495557 dblp:conf/icassp/ChatterjeeK10 fatcat:a6yegkko7jaqbomc7lixziprvi
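For context on the "static" stage the entry modifies: conventional MFCCs log-compress the filter-bank energies and decorrelate them with a DCT. The sketch below shows that baseline pipeline with random energies standing in for a real mel filter bank; the paper's memory-based adaptive compression stage is not reproduced.

```python
# Static (logarithmic) compression of filter-bank energies followed by a DCT,
# i.e. the conventional MFCC recipe the paper's adaptive stage builds on.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
fbe = rng.random((100, 26)) + 1e-6     # 100 frames x 26 filter-bank energies (toy data)

log_fbe = np.log(fbe)                   # static compression of the energies
mfcc = dct(log_fbe, type=2, axis=1, norm='ortho')[:, :13]   # keep 13 cepstral coefficients
print(mfcc.shape)                       # (100, 13)
```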

Modulation frequency features for phoneme recognition in noisy speech

Sriram Ganapathy, Samuel Thomas, Hynek Hermansky
2009 Journal of the Acoustical Society of America  
These features are then used for machine recognition of phonemes in telephone speech.  ...  These sub-band envelopes are derived from auto-regressive modelling of Hilbert envelopes of the signal in critical bands, processed by both a static (logarithmic) and a dynamic (adaptive loops) compression  ...  The authors would like to thank the Medical Physics group at the Carl von Ossietzky Universität Oldenburg for code fragments implementing adaptive compression loops.  ... 
doi:10.1121/1.3040022 pmid:19173383 fatcat:txal3v2qeffkvedbld53wvzffu
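The first step the snippet describes, sub-band Hilbert envelopes, can be sketched directly with SciPy. The example below band-passes a toy signal into a few arbitrary bands (not critical bands) and takes the envelope of each; the paper's auto-regressive (FDLP) modelling and adaptive compression loops are not shown.

```python
# Band-pass a signal into sub-bands and take the Hilbert (temporal) envelope
# of each, followed by static log compression. Band edges are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))  # toy signal

bands = [(100, 400), (400, 1000), (1000, 2000)]      # assumed sub-bands (Hz)
envelopes = []
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
    sub = sosfiltfilt(sos, speech)
    envelopes.append(np.abs(hilbert(sub)))            # Hilbert envelope per band

log_envelopes = np.log(np.array(envelopes) + 1e-8)    # static (log) compression
print(log_envelopes.shape)                             # (3, 8000)
```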

A Deep 2D Convolutional Network for Waveform-Based Speech Recognition

Dino Oglic, Zoran Cvetkovic, Peter Bell, Steve Renals
2020 Interspeech 2020  
Several comparative studies of automatic and human speech recognition suggest that this information loss can adversely affect the robustness of ASR systems.  ...  The first layer of the network decomposes waveforms into frequency sub-bands, thereby representing them in a structured high-dimensional space.  ...  two dimensional convolutions performs on par with that approach.  ... 
doi:10.21437/interspeech.2020-1870 dblp:conf/interspeech/OglicC0R20 fatcat:p6o45vsy5jejjoooodls3dmylm
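The architecture idea in the snippet, a first layer that splits raw waveforms into frequency sub-bands before 2-D convolutions, can be sketched in PyTorch. Filter sizes, strides, and channel counts below are assumptions for illustration, not the paper's configuration.

```python
# A learnable 1-D convolutional filter bank decomposes a waveform into
# sub-bands; the stacked sub-band outputs are then treated as a 2-D "image"
# for subsequent 2-D convolutions.
import torch
import torch.nn as nn

waveform = torch.randn(1, 1, 16000)          # (batch, channel, samples): ~1 s at 16 kHz

filter_bank = nn.Conv1d(in_channels=1, out_channels=64,
                        kernel_size=400, stride=160, padding=200)
subbands = filter_bank(waveform)              # (1, 64, 101): 64 sub-band signals

image_like = subbands.unsqueeze(1)            # (1, 1, 64, 101) for 2-D convolutions
conv2d = nn.Conv2d(1, 32, kernel_size=3, padding=1)
features = conv2d(image_like)
print(subbands.shape, features.shape)
```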

Various Speech Processing Techniques For Speech Compression And Recognition

Jalal Karam
2007 Zenodo  
In this paper we cover the different representations of speech in the time-frequency and time-scale domains for the purposes of compression and recognition.  ...  Years of extensive research in the field of speech processing for compression and recognition over the last five decades have resulted in intense competition among the various methods and paradigms introduced  ...  ACKNOWLEDGMENT The author would like to thank Gulf University for Science and Technology for their financial support of this publication.  ... 
doi:10.5281/zenodo.1330742 fatcat:2bvb7z6sunfdpndmvkg4kgw3wm
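The two families of representations the survey covers can be illustrated side by side: a time-frequency view via the short-time Fourier transform and a time-scale view via a discrete wavelet decomposition. The window length, wavelet, and decomposition level below are arbitrary choices, and PyWavelets is an assumed third-party dependency.

```python
# Time-frequency (STFT) vs. time-scale (discrete wavelet) views of a toy signal.
import numpy as np
from scipy.signal import stft
import pywt

fs = 8000
t = np.arange(2 * fs) / fs
speech = np.sin(2 * np.pi * 440 * t) * np.exp(-t)   # decaying tone as a stand-in

# time-frequency: short-time Fourier transform
f, frames, Z = stft(speech, fs=fs, nperseg=256)
print(Z.shape)                       # (frequency bins, time frames)

# time-scale: discrete wavelet decomposition
coeffs = pywt.wavedec(speech, 'db4', level=4)
print([c.size for c in coeffs])      # approximation + detail coefficients per scale
```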

Distributing Recognition in Computational Paralinguistics

Zixing Zhang, Eduardo Coutinho, Jun Deng, Bjorn Schuller
2014 IEEE Transactions on Affective Computing  
We conduct large-scale evaluations of some key functions, namely, feature compression/decompression, model training and classification, on five common paralinguistic tasks related to emotion, intoxication  ...  The proposed architecture favors large-scale data collection and continuous model updating, personal information protection, and transmission bandwidth optimization.  ...  The authors would also like to thank Jürgen Geiger for his feedback on an early version of this paper.  ... 
doi:10.1109/taffc.2014.2359655 fatcat:olqvz67r7nbwpmp34eslj7j33u

Emotion Recognition from Noisy Speech

Mingyu You, Chun Chen, Jiajun Bu, Jia Liu, Jianhua Tao
2006 2006 IEEE International Conference on Multimedia and Expo  
The performance of our system is also robust when speech data is corrupted by increasing noise.  ...  features before classifying the emotional states of clean and noisy speech.  ...  Feature selection and feature extraction are two categories of methods for compressing a data set.  ... 
doi:10.1109/icme.2006.262865 dblp:conf/icmcs/YouCBLT06 fatcat:izd4mak7sjhr7adqsusrf3uhqa
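The distinction drawn in the snippet, feature selection versus feature extraction, is easy to show concretely. The sketch below uses scikit-learn utilities and synthetic data as stand-ins, not the paper's actual emotion features or classifier.

```python
# Feature selection (keep a subset of original features) vs. feature
# extraction (project onto new, lower-dimensional features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)

selected = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)   # selection
extracted = PCA(n_components=10).fit_transform(X)                         # extraction
print(selected.shape, extracted.shape)   # both (200, 10), but different feature spaces
```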

Low-Dimensional Bottleneck Features for On-Device Continuous Speech Recognition [article]

David B. Ramsay, Kevin Kilgour, Dominik Roblek, Matthew Sharifi
2018 arXiv   pre-print
Low power digital signal processors (DSPs) typically have a very limited amount of memory in which to cache data.  ...  only a minimal loss of accuracy.  ...  Acknowledgments The authors would like to acknowledge Ron Weiss and the Google Brain and Speech teams for their LAS implementation, Félix de Chaumont Quitry and Dick Lyon for their feedback and support  ... 
arXiv:1811.00006v1 fatcat:swzjeltaavbs7anw4pvaggscs4

Speech comprehension is correlated with temporal response patterns recorded from auditory cortex

E. Ahissar, S. Nagarajan, M. Ahissar, A. Protopapas, H. Mahncke, M. M. Merzenich
2001 Proceedings of the National Academy of Sciences of the United States of America  
Of these two correlates, PL was significantly more indicative of single-trial success.  ...  Speech comprehension depends on the integrity of both the spectral content and temporal envelope of the speech signal.  ...  We thank Susanne Honma and Tim Roberts for technical support, Kensuke Sekihara for providing his software for MUSIC analysis of localization data, and Hagai Attias for his help with data analysis.  ... 
doi:10.1073/pnas.201400998 pmid:11698688 pmcid:PMC60877 fatcat:dlopvunalrapxfrh26t6hi2qcm
Showing results 1 — 15 out of 26,026 results