27 Hits in 4.4 sec

Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces

Florent Bocquelet, Thomas Hueber, Laurent Girin, Christophe Savariaux, Blaise Yvert
2016 PLoS Computational Biology  
Fig 1. Articulatory-based speech synthesizer.  ...  The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced  ...  Acknowledgments The authors wish to thank Silvain Gerber for his help in the statistical analysis of the results. Author Contributions Conceived and designed the experiments: FB TH LG BY.  ... 
doi:10.1371/journal.pcbi.1005119 pmid:27880768 pmcid:PMC5120792 fatcat:5zg6yqunvfda7aclv5th7rfbmu
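
This entry describes a DNN that maps EMA articulatory trajectories to acoustic features. As a hedged illustration only (the feature dimensions, layer sizes, and stand-in training tensors below are assumptions, not the paper's published configuration), a minimal PyTorch sketch of such a frame-wise articulatory-to-acoustic regressor could look like this:

```python
# Minimal sketch (not the authors' code) of an articulatory-to-acoustic
# mapping: a feed-forward DNN regressing acoustic features from EMA
# articulatory frames. All dimensions are assumptions.
import torch
import torch.nn as nn

EMA_DIM = 12        # assumed: x/y coordinates of 6 EMA coils
ACOUSTIC_DIM = 25   # assumed: e.g. mel-cepstral coefficients per frame

class ArticulatoryToAcoustic(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMA_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, ACOUSTIC_DIM),  # linear output for regression
        )

    def forward(self, x):
        return self.net(x)

model = ArticulatoryToAcoustic()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in tensors in place of real synchronized EMA/acoustic frames.
ema = torch.randn(64, EMA_DIM)
acoustic = torch.randn(64, ACOUSTIC_DIM)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(ema), acoustic)
    loss.backward()
    optimizer.step()
```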

Key considerations in designing a speech brain-computer interface

Florent Bocquelet, Thomas Hueber, Laurent Girin, Stéphan Chabardès, Blaise Yvert
2016 Journal of Physiology - Paris  
The authors also wish to thank Marie-Pierre Gilotin and Manuela Oddoux for clinical help during awake surgery and the patient who participated in the study.  ...  The special case of articulatory-based speech synthesis The use of an articulatory speech synthesizer can be of particular interest for a BCI application for several reasons.  ... 
doi:10.1016/j.jphysparis.2017.07.002 pmid:28756027 fatcat:5jdkvghmgfczvbjybvhroamnmy

Key Considerations In Designing A Speech Brain-Computer Interface

Florent Bocquelet, Thomas Hueber, Laurent Girin, Stéphan Chabardès, Blaise Yvert
2018 Zenodo  
Bocquelet F, Hueber T, Girin L, Chabardès S, Yvert B (2016) Key considerations in designing a speech brain computer interface. J Physiol Paris, 110: 392-401  ...  The authors also wish to thank Marie-Pierre Gilotin and Manuela Oddoux for clinical help during awake surgery and the patient who participated in the study.  ...  The special case of articulatory-based speech synthesis The use of an articulatory speech synthesizer can be of particular interest for a BCI application for several reasons.  ... 
doi:10.5281/zenodo.1242931 fatcat:ztdpzu4g7zfvtnehqfhasvp7nu

Brain2Char: A Deep Architecture for Decoding Text from Brain Recordings [article]

Pengfei Sun and Gopala K. Anumanchipalli and Edward F. Chang
2019 arXiv   pre-print
To do this, we impose auxiliary losses on latent representations for articulatory movements, speech acoustics and session specific non-linearities.  ...  In this study, we propose a novel deep network architecture, Brain2Char, for directly decoding text (specifically, character sequences) from direct brain recordings (electrocorticography, ECoG).  ...  In speech processing applications like automatic speech recognition (ASR) and text-to-speech synthesis (TTS), much progress has been made to achieve near-human performance on standard benchmarks  ... 
arXiv:1909.01401v1 fatcat:mmmj75x7v5dhdk2jfgfqnzpm3m
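
The snippet mentions a character decoder regularized by auxiliary losses on articulatory and acoustic latent representations. Below is a hedged sketch of that multi-task idea, assuming a GRU encoder, a CTC character loss, and MSE auxiliary heads; all sizes and loss weights are invented for illustration and are not the published Brain2Char model:

```python
# Sketch of a multi-task text decoder: a shared encoder over ECoG frames
# feeds a per-frame character head (CTC loss) plus auxiliary regression
# heads for articulatory and acoustic targets. Dimensions are assumptions.
import torch
import torch.nn as nn

ECOG_DIM, LATENT, N_CHARS = 128, 256, 30   # assumed; 30 ~ characters + blank
ARTIC_DIM, ACOUS_DIM = 33, 25              # assumed auxiliary target sizes

class Brain2CharSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(ECOG_DIM, LATENT, batch_first=True)
        self.char_head = nn.Linear(LATENT, N_CHARS)     # per-frame char logits
        self.artic_head = nn.Linear(LATENT, ARTIC_DIM)  # articulatory aux
        self.acous_head = nn.Linear(LATENT, ACOUS_DIM)  # acoustic aux

    def forward(self, ecog):
        z, _ = self.encoder(ecog)                       # (B, T, LATENT)
        return self.char_head(z), self.artic_head(z), self.acous_head(z)

model = Brain2CharSketch()
ctc = nn.CTCLoss(blank=0)
mse = nn.MSELoss()

B, T, U = 4, 100, 12                                    # batch, frames, chars
ecog = torch.randn(B, T, ECOG_DIM)                      # stand-in recordings
chars = torch.randint(1, N_CHARS, (B, U))               # stand-in transcripts
artic = torch.randn(B, T, ARTIC_DIM)
acous = torch.randn(B, T, ACOUS_DIM)

logits, artic_hat, acous_hat = model(ecog)
log_probs = logits.log_softmax(-1).transpose(0, 1)      # (T, B, C) for CTC
loss = (ctc(log_probs, chars,
            torch.full((B,), T, dtype=torch.long),
            torch.full((B,), U, dtype=torch.long))
        + 0.5 * mse(artic_hat, artic)                   # assumed aux weights
        + 0.5 * mse(acous_hat, acous))
loss.backward()
```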

The Potential for a Speech Brain–Computer Interface Using Chronic Electrocorticography

Qinwan Rabbani, Griffin Milsap, Nathan E. Crone
2019 Neurotherapeutics  
This review discusses and outlines the current state-of-the-art for speech BCI and explores what a speech BCI using chronic ECoG might entail.  ...  A BCI for speech would enable communication in real time via neural correlates of attempted or imagined speech.  ...  deep neural network to map articulations to their corresponding acoustic outputs.  ... 
doi:10.1007/s13311-018-00692-2 pmid:30617653 pmcid:PMC6361062 fatcat:6y66u77cdreb7jhku666wklyha

Silent Speech Interfaces for Speech Restoration: A Review

Jose A. Gonzalez-Lopez, Alejandro Gomez-Alanis, Juan M. Martin-Donas, Jose L. Perez-Cordoba, Angel M. Gomez
2020 IEEE Access  
INDEX TERMS Silent speech interface, speech restoration, automatic speech recognition, speech synthesis, deep neural networks, brain computer interfaces, speech and language disorders, voice disorders,  ...  From the biosignals, SSIs decode the intended message, using automatic speech recognition or speech synthesis algorithms.  ...  For direct speech synthesis, various neural network architectures have been investigated, including feed-forward neural networks [27], [99], [101], convolutional neural networks (CNNs) [170]-  ... 
doi:10.1109/access.2020.3026579 fatcat:yvvaebeavfdfrav73sfs62a5dm
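
For the "direct speech synthesis" route this review describes, where biosignals are mapped straight to acoustics by feed-forward or convolutional networks, a toy sketch follows; the channel count and mel-spectrogram target are assumptions chosen for illustration, not any specific system from the survey:

```python
# Illustrative 1-D CNN for direct speech synthesis: regress mel-spectrogram
# frames from a window of biosignal channels. A separate vocoder would then
# invert the mels to a waveform. All sizes are assumptions.
import torch
import torch.nn as nn

N_CHANNELS, N_MELS = 8, 80   # assumed: e.g. EMG channels -> mel spectrogram

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, N_MELS, kernel_size=1),    # per-frame mel regression
)

signals = torch.randn(2, N_CHANNELS, 200)    # stand-in: (batch, chan, frames)
mels = model(signals)                        # (2, N_MELS, 200)
```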

SPEAK YOUR MIND! Towards Imagined Speech Recognition With Hierarchical Deep Learning [article]

Pramit Saha, Muhammad Abdul-Mageed, Sidney Fels
2019 arXiv   pre-print
signal responsible for natural speech synthesis.  ...  In order to infer imagined speech from active thoughts, we propose a novel hierarchical deep learning BCI system for subject-independent classification of 11 speech tokens including phonemes and words.  ...  Yet, there is hardly any work investigating the applicability and performance of such deep learning techniques for speech imagery-based BCI.  ... 
arXiv:1904.05746v1 fatcat:6gqhqy3yyrefpiyjjyevw22l6q
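
A hedged sketch of the hierarchical-classification idea in this entry follows; the coarse grouping, layer sizes, and channel counts are assumptions, not the paper's architecture. A shared feature extractor feeds a coarse head whose softmax conditions a fine head over the 11 speech tokens:

```python
# Hierarchical imagined-speech classifier: coarse class first, then token
# prediction conditioned on the coarse posterior. Sizes are assumptions.
import torch
import torch.nn as nn

N_EEG, T, N_COARSE, N_TOKENS = 64, 256, 3, 11   # assumed channels/labels

class HierarchicalDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # shared EEG feature extractor
            nn.Conv1d(N_EEG, 32, 7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.coarse = nn.Linear(32, N_COARSE)    # assumed coarse grouping
        self.fine = nn.Linear(32 + N_COARSE, N_TOKENS)

    def forward(self, eeg):
        f = self.features(eeg)
        c = self.coarse(f)
        t = self.fine(torch.cat([f, c.softmax(-1)], dim=-1))
        return c, t

model = HierarchicalDecoder()
coarse_logits, token_logits = model(torch.randn(8, N_EEG, T))
```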

Silent Speech Interfaces for Speech Restoration: A Review [article]

Jose A. Gonzalez-Lopez, Alejandro Gomez-Alanis, Juan M. Martín-Doñas, José L. Pérez-Córdoba, Angel M. Gomez
2020 arXiv   pre-print
From the biosignals, SSIs decode the intended message, using automatic speech recognition or speech synthesis algorithms.  ...  tracking of articulator movements using imaging techniques.  ...  For direct speech synthesis, various neural network architectures have been investigated, including feed-forward neural networks [27], [99], [101], convolutional neural networks (CNNs) [170]-  ... 
arXiv:2009.02110v2 fatcat:i2o4zxqko5anhn2eqivtnsd2di

SPEAK YOUR MIND! Towards Imagined Speech Recognition with Hierarchical Deep Learning

Pramit Saha, Muhammad Abdul-Mageed, Sidney Fels
2019 Interspeech 2019  
signal responsible for natural speech synthesis.  ...  In order to infer imagined speech from active thoughts, we propose a novel hierarchical deep learning BCI system for subject-independent classification of 11 speech tokens including phonemes and words.  ...  Yet, there is hardly any work investigating the applicability and performance of such deep learning techniques for speech imagery-based BCI.  ... 
doi:10.21437/interspeech.2019-3041 dblp:conf/interspeech/SahaAF19 fatcat:jvo6xsnwjjc2rct5xsrvc2p4cy

Intelligible speech synthesis from neural decoding of spoken sentences [article]

Gopala K Anumanchipalli, Josh Chartier, Edward F Chang
2018 bioRxiv   pre-print
A recurrent neural network first decoded vocal tract physiological signals from direct cortical recordings, and then transformed them to acoustic speech output.  ...  Additionally, speech decoding was not only effective for audibly produced speech, but also when participants silently mimed speech.  ...  For this purpose, we used an existing annotated speech database (Wall Street Journal Corpus) and trained speaker-independent deep recurrent network regression models to predict these place-manner  ... 
doi:10.1101/481267 fatcat:hed472oeyvgxjl5a6rhsy3klwu
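
This preprint and the Nature paper below describe a two-stage decoder: cortical activity to articulatory kinematics, then kinematics to acoustics. A minimal sketch under assumed dimensions (not the authors' published stacked-bLSTM model):

```python
# Two-stage speech decoding sketch: stage 1 maps ECoG to articulatory
# kinematics; stage 2 maps kinematics to acoustic features for a vocoder.
# All dimensions and layer sizes are assumptions.
import torch
import torch.nn as nn

ECOG, KIN, ACOUS = 256, 33, 32   # assumed: electrodes, kinematics, acoustics

class TwoStageDecoder(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.stage1 = nn.LSTM(ECOG, hidden, batch_first=True, bidirectional=True)
        self.to_kin = nn.Linear(2 * hidden, KIN)
        self.stage2 = nn.LSTM(KIN, hidden, batch_first=True, bidirectional=True)
        self.to_acous = nn.Linear(2 * hidden, ACOUS)

    def forward(self, ecog):
        h1, _ = self.stage1(ecog)
        kin = self.to_kin(h1)            # intermediate articulatory estimate
        h2, _ = self.stage2(kin)
        return kin, self.to_acous(h2)    # acoustics go to a separate vocoder

model = TwoStageDecoder()
kin, acous = model(torch.randn(1, 500, ECOG))   # stand-in ECoG sequence
```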

Speech synthesis from neural decoding of spoken sentences

Gopala K. Anumanchipalli, Josh Chartier, Edward F. Chang
2019 Nature  
Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement, and then transformed these representations into speech acoustics.  ...  Technology that translates neural activity into speech would be transformative for people who are unable to communicate as a result of neurological impairments.  ...  Moses for comments on the manuscript and B. Speidel for his help reconstructing MRI images. This work was supported by grants from the NIH (DP2 OD008627 and U01 NS098971-01).  ... 
doi:10.1038/s41586-019-1119-1 pmid:31019317 fatcat:7taeckhko5fhnbk4gwio4y2ogy

Biosignal Sensors and Deep Learning-Based Speech Recognition: A Review

Wookey Lee, Jessica Jiwon Seong, Busra Ozlu, Bong Sup Shim, Azizbek Marakhimov, Suan Lee
2021 Sensors  
We survey various deep learning technologies related to voice recognition, including visual speech recognition and silent speech interfaces, analyze their development, and systematize them into a taxonomy  ...  Novel approaches need to be developed for speech recognition and production, because impairments in these abilities can seriously undermine quality of life and sometimes lead to isolation from society.  ... 
doi:10.3390/s21041399 pmid:33671282 fatcat:je4cmqkulnbmpbji3owxgr7f24

Decoding Imagined and Spoken Phrases From Non-invasive Neural (MEG) Signals

Debadatta Dash, Paul Ferrari, Jun Wang
2020 Frontiers in Neuroscience  
Two machine learning algorithms were used. One was an artificial neural network (ANN) with statistical features as the baseline approach.  ...  Direct decoding of imagined speech from the neural signals (and then driving a speech synthesizer) has the potential for a higher communication rate.  ...  Hernandez-Mulero and Saleem Malik for their help on the data collection at Cook Children's Hospital, Fort Worth, TX. We also thank Dr. Ted Mau, Dr. Myungjong Kim, Dr. Mark McManis, Dr.  ... 
doi:10.3389/fnins.2020.00290 pmid:32317917 pmcid:PMC7154084 fatcat:o7exd5plyjhfnmitlt7qnotf34
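
The entry names an ANN over statistical features as the study's baseline. Here is a sketch of that general approach; the specific feature set, channel count, and the five-phrase label set are assumptions made for illustration:

```python
# Baseline sketch: per-channel statistical features from each MEG trial,
# classified by a small fully connected ANN. All specifics are assumptions.
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPClassifier

def stat_features(trial):
    # trial: (channels, samples) -> concatenated per-channel statistics
    return np.concatenate([
        trial.mean(axis=1), trial.std(axis=1),
        stats.skew(trial, axis=1), stats.kurtosis(trial, axis=1),
    ])

rng = np.random.default_rng(0)
X = np.stack([stat_features(rng.standard_normal((204, 1000)))
              for _ in range(40)])      # stand-in MEG trials
y = rng.integers(0, 5, size=40)         # assumed 5 phrase classes

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```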

A survey of deep neural network architectures and their applications

Weibo Liu, Zidong Wang, Xiaohui Liu, Nianyin Zeng, Yurong Liu, Fuad E. Alsaadi
2017 Neurocomputing  
In this paper, we discuss some widely-used deep learning architectures and their practical applications.  ...  Deep learning approaches have also been found to be suitable for big data analysis with successful applications to computer vision, pattern recognition, speech recognition, natural language processing,  ...  Additionally, deep learning techniques can also be used for head motion synthesis [37] and speech enhancement [81].  ... 
doi:10.1016/j.neucom.2016.12.038 fatcat:nkxvbhp47rfflpi5jev7hk4yq4

Decoding spoken English phonemes from intracortical electrode arrays in dorsal precentral gyrus [article]

Guy H. Wilson, Sergey D. Stavisky, Francis R. Willett, Donald T. Avansino, Jessica N. Kelemen, Leigh R. Hochberg, Jaimie M. Henderson, Shaul Druckmann, Krishna V. Shenoy
2020 bioRxiv   pre-print
direction for speech BCIs.  ...  basis set for speech: 39 English phonemes.  ...  We also thank Professor Mark Slutsky for providing the many words list; our Stanford NPTL and NPSL group for helpful discussions; Beverly Davis, Erika Siauciunas, and Nancy Lam for administrative support  ... 
doi:10.1101/2020.06.30.180935 fatcat:ebx5dfa62je2basdnuhv245b6i
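
This entry classifies a 39-phoneme basis set from intracortical arrays. A hedged sketch of that kind of classification (not the paper's decoder; the electrode count, binning, and linear model are assumptions):

```python
# Phoneme classification from intracortical features: binned threshold-
# crossing counts per electrode, classified into 39 English phonemes with
# a linear model. All specifics are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

N_ELECTRODES, N_BINS, N_PHONEMES = 192, 10, 39   # assumed array/bin sizes

rng = np.random.default_rng(1)
# Stand-in data: spike counts in short bins around each phoneme utterance.
X = rng.poisson(3.0, size=(390, N_ELECTRODES * N_BINS))
y = np.repeat(np.arange(N_PHONEMES), 10)         # 10 stand-in trials/phoneme

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```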
Showing results 1 — 15 out of 27 results