473 Hits in 6.4 sec

FluentSigners-50: A signer independent benchmark dataset for sign language processing

Medet Mukushev, Aidyn Ubingazhibov, Aigerim Kydyrbekova, Alfarabi Imashev, Vadim Kimmelman, Anara Sandygulova, Aaron Jon Newman
2022 PLoS ONE  
This paper presents a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL) for the purposes of Sign Language Processing.  ...  We envision it to serve as a new benchmark dataset for performance evaluations of Continuous Sign Language Recognition (CSLR) and Translation (CSLT) tasks.  ...  Datasets used for continuous sign language recognition. This list excludes datasets of isolated signs. The Deaf column indicates whether deaf signers contributed to the dataset.  ... 
doi:10.1371/journal.pone.0273649 pmid:36094924 pmcid:PMC9467305 fatcat:qtin6pe4krb5jdnch4otybulsq

Experimenting the Automatic Recognition of Non-Conventionalized Units in Sign Language

Valentin Belissen, Annelies Braffort, Michèle Gouiffès
2020 Algorithms  
We then redefined the problem of automatic SLR as the recognition of linguistic descriptors, with carefully thought-out performance metrics.  ...  Moreover, we developed a compact and generalizable representation of signers in videos by parallel processing of the hands, face and upper body, then an adapted learning architecture based on an RCNN.  ...  Automatic Continuous Sign Language Recognition: State of the Art In this article, we focus on Continuous Sign Language (CSL) only, leaving out the case of isolated signs, which does not involve any language  ... 
doi:10.3390/a13120310 fatcat:wat4g5hsd5b3xghb4liplocy2u

Recent developments in visual sign language recognition

Ulrich von Agris, Jörg Zieren, Ulrich Canzler, Britta Bauer, Karl-Friedrich Kraiss
2007 Universal Access in the Information Society  
The classification stage is designed for recognition of isolated signs, as well as continuous sign language.  ...  The current state in sign language recognition is roughly 30 years behind speech recognition, which corresponds to the gradual transition from isolated to continuous recognition for small vocabulary tasks  ...  Classification of isolated signs Based on the BSL-Corpus, recognition performance for isolated signs was evaluated for both signer-dependent and signer-independent operation.  ... 
doi:10.1007/s10209-007-0104-x fatcat:kdyboduv3jeavoflcfvjocsnia
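Several entries in this listing evaluate signer-dependent versus signer-independent operation. The distinction comes down to how the data is split: in a signer-independent protocol, the signers appearing in the test set never appear in training, so the model cannot rely on person-specific signing style. A minimal sketch in Python (all identifiers hypothetical):

```python
# Hypothetical sample records: (video_id, signer_id).
samples = [("v0", "s1"), ("v1", "s1"), ("v2", "s2"),
           ("v3", "s3"), ("v4", "s2"), ("v5", "s3")]

# Signer-independent protocol: hold out whole signers,
# so no test signer is ever seen during training.
test_signers = {"s3"}

train = [vid for vid, signer in samples if signer not in test_signers]
test = [vid for vid, signer in samples if signer in test_signers]
```

A signer-dependent split, by contrast, would sample train and test videos from the same pool of signers.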

Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network

Ilias Papastratis, Kosmas Dimitropoulos, Petros Daras
2021 Sensors  
Our proposed method achieved word error rates of 23.4%, 2.1%, and 2.26% on the RWTH-Phoenix-Weather-2014, Chinese Sign Language (CSL), and Greek Sign Language (GSL) Signer Independent (SI) datasets, respectively  ...  To this end, a novel approach for context-aware continuous sign language recognition using a generative adversarial network architecture, named Sign Language Recognition Generative Adversarial Network  ...  SLR tasks are divided into Isolated Sign Language Recognition (ISLR) [3] [4] [5] and Continuous Sign Language Recognition (CSLR) [6] [7] [8] .  ... 
doi:10.3390/s21072437 pmid:33916231 pmcid:PMC8038055 fatcat:ct6yygvm2nf45edx2c4ymc4anq
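The word error rates quoted in the entry above are the standard CSLR metric: the Levenshtein (edit) distance between the hypothesis and reference word/gloss sequences, divided by the reference length. A minimal, self-contained sketch (not the authors' implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(h) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitution = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(r)][len(h)] / len(r)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions relative to the reference.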

Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers

Oscar Koller, Jens Forster, Hermann Ney
2015 Computer Vision and Image Understanding  
We experimentally show the importance of tracking for sign language recognition with respect to the hands and facial landmarks.  ...  This work presents a statistical recognition approach performing large vocabulary continuous sign language recognition across different signers.  ...  Some works [19, 4] present corpora for isolated and continuous sign language recognition for German, Greek, British and French sign language created in the course of the Dicta-Sign 3 project.  ... 
doi:10.1016/j.cviu.2015.09.013 fatcat:rhfqteappvhkpcir2fjpbvf46a

Dynamic–static unsupervised sequentiality, statistical subunits and lexicon for sign language recognition

Stavros Theodorakis, Vassilis Pitsikalis, Petros Maragos
2014 Image and Vision Computing  
Keywords: data-driven subunits, sub-sign phonetic modeling, unsupervised segmentation, HMM. We introduce a new computational phonetic modeling framework for sign language (SL) recognition.  ...  The novel sign language modeling scheme is evaluated in recognition experiments on data from three corpora and two sign languages: Boston University American SL which is employed pre-segmented at the sign-level  ...  Acknowledgments This work was supported by the EU research program Dicta-Sign with grant FP7-ICT-3-231135.  ... 
doi:10.1016/j.imavis.2014.04.012 fatcat:xsoyismawfbfni2nkrbm3etfbu

BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues [article]

Samuel Albanie and Gül Varol and Liliane Momeni and Triantafyllos Afouras and Joon Son Chung and Neil Fox and Andrew Zisserman
2021 arXiv   pre-print
Language (BSL) signs of unprecedented scale; (2) We show that we can use BSL-1K to train strong sign recognition models for co-articulated signs in BSL and that these models additionally form excellent  ...  In this work, we introduce a new scalable approach to data collection for sign recognition in continuous videos.  ...  334 sign classes as the metric for performance.  ... 
arXiv:2007.12131v2 fatcat:g3ag5liqeje7zjp3p6dqngmem4

Machine Translation from Signed to Spoken Languages: State of the Art and Challenges [article]

Mathieu De Coster, Dimitar Shterionov, Mieke Van Herreweghe, Joni Dambre
2022 arXiv   pre-print
We recommend iterative, human-in-the-loop, design and development of sign language translation models.  ...  Based on our findings, we advocate for interdisciplinary research and to base future research on linguistic analysis of sign languages.  ...  Wolfert for their comments and suggestions.  ... 
arXiv:2202.03086v3 fatcat:g2t2vovwjjgxlevtzv26nbf7z4

Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks

Stephanie Stoll, Necati Cihan Camgoz, Simon Hadfield, Richard Bowden
2020 International Journal of Computer Vision  
We further demonstrate the video generation capabilities of our approach for both multi-signer and high-definition settings qualitatively and quantitatively using broadcast quality assessment metrics.  ...  Our system is capable of producing sign videos from spoken language sentences.  ...  Acknowledgements This work was funded by the SNSF Sinergia project "Scalable Multimodal Sign Language Technology for Sign Language Learning and Assessment" (SMILE) grant Agreement No.  ... 
doi:10.1007/s11263-019-01281-2 fatcat:2ygle7xzazgatpyxfakakmcs7y

Master Thesis: Neural Sign Language Translation by Learning Tokenization [article]

Alptekin Orbay
2020 arXiv   pre-print
In this thesis, we propose a multitask learning based method to improve Neural Sign Language Translation (NSLT) consisting of two parts, a tokenization layer and Neural Machine Translation (NMT).  ...  The tokenization part focuses on how Sign Language (SL) videos should be represented to be fed into the other part.  ...  Sign Language Recognition refers to a number of recognition tasks. In the isolated setting, it may refer to identifying the labels of signs, which are often glosses.  ... 
arXiv:2011.09289v1 fatcat:44ix6kzoozhd5c6awbfa5gi7fu

Deep Learning for Sign Language Recognition: Current Techniques, Benchmarks, and Open Issues

Muhammad Al-Qurishi, Thariq Khalid, Riad Souissi
2021 IEEE Access  
We conducted a comprehensive review of automated sign language recognition based on machine/deep learning methods and techniques published between 2014 and 2021 and concluded that the current methods require  ...  Thus, we turned our attention to elements that are common to almost all sign language recognition methodologies.  ...  A typical dataset contains multiple repetitions of the same sign by several signers, with the objective of facilitating signer-independent recognition capacity after training.  ... 
doi:10.1109/access.2021.3110912 fatcat:mcjehb6znjcijhk2wzgdxbmzqq

A Multimodal User Interface for an Assistive Robotic Shopping Cart

Dmitry Ryumin, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, Alexey Karpov
2020 Electronics  
The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques and a high degree of robot autonomy.  ...  Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language  ...  A functional diagram of the video analysis method for single-handed movements, used to recognize signs of sign language (i.e., isolated commands), is shown in Figure 11 .  ... 
doi:10.3390/electronics9122093 fatcat:mbczgqcvjrftpesvnkd23cl2ee

TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation [article]

Dongxu Li, Chenchen Xu, Xin Yu, Kaihao Zhang, Ben Swift, Hanna Suominen, Hongdong Li
2020 arXiv   pre-print
Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences.  ...  Existing SLT models usually represent sign visual features in a frame-wise manner so as to avoid needing to explicitly segment the videos into isolated signs.  ...  Transferring cross-domain knowledge for video sign language recognition.  ... 
arXiv:2010.05468v1 fatcat:qoj67klu2va6jk4v6klg37bhwa

Translating bus information into sign language for deaf people

V. López-Ludeña, C. González-Morcillo, J.C. López, R. Barra-Chicote, R. Cordoba, R. San-Segundo
2014 Engineering applications of artificial intelligence  
Both systems are made up of a natural language translator (for converting a word sentence into a sequence of LSE signs), and a 3D avatar animation module (for playing back the signs).  ...  This paper describes the application of language translation technologies for generating bus information in Spanish Sign Language (LSE: Lengua de Signos Española).  ...  Authors also want to thank Mark Hallett for the English revision.  ... 
doi:10.1016/j.engappai.2014.02.006 fatcat:6ujtim4bu5ebddufoltzvulvy4

Analyzing Multimodal Communication around a Shared Tabletop Display [chapter]

Anne Marie Piper, James D. Hollan
2009 ECSCW 2009  
We compare communication mediated by a multimodal tabletop display and by a human sign language interpreter.  ...  We thank our study participants, faculty and staff from UCSD Medical School, Whitney Friedman, and MERL for donating a DiamondTouch table.  ...  For example, representing speech on a shared display has pedagogical benefits for language learning.  ... 
doi:10.1007/978-1-84882-854-4_17 dblp:conf/ecscw/PiperH09 fatcat:i7dlputr6rbppltmb7lqhxtxcm