109 Hits in 5.0 sec

An Experiment on Handshape Sign Recognition Using Adaptive Technology: Preliminary Results [chapter]

Hemerson Pistori, João José Neto
2004 Lecture Notes in Computer Science  
This paper presents an overview of current work on the recognition of sign language and a prototype of a simple editor for a small subset of the Brazilian Sign Language, LIBRAS.  ...  Handshape-based alphabetical signs are captured by a single digital camera, processed on-line using computational vision techniques, and converted to the corresponding Latin letter.  ...  Our editor is already being used experimentally with a novel machine learning approach based on adaptive decision trees (AdapTree [6] ), one of whose main advantages is that it is incremental  ... 
doi:10.1007/978-3-540-28645-5_47 fatcat:25sa7wkrqfg6nh5f6vyiq3z3qu

Lexicon-Free Fingerspelling Recognition from Video: Data, Models, and Signer Adaptation [article]

Taehwan Kim, Jonathan Keane, Weiran Wang, Hao Tang, Jason Riggle, Gregory Shakhnarovich, Diane Brentari, Karen Livescu
2016 arXiv   pre-print
Our best-performing models are segmental (semi-Markov) conditional random fields using deep neural network-based features.  ...  The multi-signer setting is much more challenging, but with neural network adaptation we achieve up to 83% letter accuracies in this setting.  ...  Experimental Results We report on experiments using the fingerspelling data from the four ASL signers described above.  ... 
arXiv:1609.07876v1 fatcat:liyz6vbydbdgrihjuqk3lotusm

American Sign Language fingerspelling recognition from video: Methods for unrestricted recognition and signer-independence [article]

Taehwan Kim
2016 arXiv   pre-print
In this work, we propose several types of recognition approaches, and explore the signer variation problem.  ...  In this thesis, we study the problem of recognizing video sequences of fingerspelled letters in American Sign Language (ASL).  ...  Finally, we also consider adaptation by fine-tuning all of the DNN weights on adaptation data, starting from the signer-independent DNN weights.  ...  Experimental Results: We report on experiments  ... 
arXiv:1608.08339v1 fatcat:2ebkdpi32bfi7kqgfbtlvpuf4u

Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning

S.C.W. Ong, S. Ranganath
2005 IEEE Transactions on Pattern Analysis and Machine Intelligence  
However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication.  ...  Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and developing algorithms that scale  ...  In an adaptive fuzzy expert system ( [30] ) by Holden and Owens [63] , signs were classified based on start and end handshapes and finger motion, using triangular fuzzy membership functions, whose parameters  ... 
doi:10.1109/tpami.2005.112 pmid:15943420 fatcat:7kcyx45j2fc63fqrtjkomuddsm

Handling sign language handshapes annotation with the Typannot typefont

Patrick Doan, Dominique Boutet, Adrien Contesse, Claudia S. Bianchini, Claire Danet, Morgane Rébulard, Jean-François Dauphin, Léa Chevrefils, Chloé Thomas, Mathieu Reguer
2019 CogniTextes  
Some preliminary results will be presented (see §8).  ...  them using adaptations of existing phonographic systems.  ...  Typannot usage in SL linguistics: preliminary results. The use of Typannot for the HS (complete typeface) and for the LOC parameter (only the graphemic formula) reveals some preliminary results concerning  ... 
doi:10.4000/cognitextes.1401 fatcat:tauwupaj3ffybhyojmj45hvt3m

PARLOMA – A Novel Human-Robot Interaction System for Deaf-Blind Remote Communication

Ludovico Orlando Russo, Giuseppe Airò Farulla, Daniele Pianu, Alice Rita Salgarella, Marco Controzzi, Christian Cipriani, Calogero Maria Oddo, Carlo Geraci, Stefano Rosa, Marco Indaco
2015 International Journal of Advanced Robotic Systems  
At present, there is no technological solution enabling two (or more) deaf-blind people to communicate remotely with each other in tactile Sign Language (t-SL).  ...  We present a preliminary version of PARLOMA, a novel system enabling remote communication between deaf-blind persons.  ...  The full system is implemented on the Robot Operating System (ROS).  ... 
doi:10.5772/60416 fatcat:75v54i6d5jhhtdruiwd7y5l6oi

LSE-Sign: A lexical database for Spanish Sign Language

Eva Gutierrez-Sigut, Brendan Costello, Cristina Baus, Manuel Carreiras
2015 Behavior Research Methods  
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments.  ...  Stimulus material Psycholinguistic research on sign language has traditionally focused on investigating whether spoken and sign language  ...  The creation of the LSE-Sign tool was partially funded by grants LSE-SIGN PSI 2008-0416-E/EPSIC from the Ministerio de Ciencia e Innovación and PSI2012-31448 from the Ministerio de Economía y Competitividad  ... 
doi:10.3758/s13428-014-0560-1 pmid:25630312 fatcat:5ayjyjuayrdy7doqnaqwfdo6ty

A Multimodal User Interface for an Assistive Robotic Shopping Cart

Dmitry Ryumin, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, Alexey Karpov
2020 Electronics  
The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use  ...  Among the main topics covered in this paper are the presentation of the interface (three modalities) and the single-handed gesture recognition system (based on a collected database of Russian sign language  ...  Preliminary Experiments and Results: This section presents preliminary experiments conducted during the development of the gesture interface.  ... 
doi:10.3390/electronics9122093 fatcat:mbczgqcvjrftpesvnkd23cl2ee

Accessibility as a Service: Augmenting Multimedia Content with Sign Language Video Tracks

Tiago Maritan Ugulino de Araújo, Felipe Lacet S. Ferreira, Danilo Assis Nobre dos S. Silva, Eduardo De Lucena Falcão, Leonardo Dantas de Oliveira, Leonardo Araújo Domingues, Yúrika Sato Nóbrega, Hozana Raquel Gomes De Lima, Alexandre Nóbrega Duarte, Guido Lemos de Souza Filho
2013 Journal of research and practice in information technology  
As a case study, we implemented the service to provide support for the Brazilian Sign Language (LIBRAS) and ran some preliminary tests with Brazilian deaf users to evaluate the proposed solution  ...  The service organizes the collaboration of sign language experts to dynamically adjust the system, which runs on a cloud computing infrastructure.  ...  /ec2/) with support from Amazon's AWS in Education Research Grant, which funded the instances we used for the experiments.  ... 
dblp:journals/acj/AraujoFSFODNLDF13 fatcat:dxzdarceg5cohj6hj5t7rkshva

Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning

Boon Giin Lee, Teak-Wei Chong, Wan-Young Chung
2020 Sensors  
The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures.  ...  Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve  ...  Related Works With the dramatic development of human-computer interaction technology over previous decades, various studies have been conducted worldwide on the development of sign language recognition  ... 
doi:10.3390/s20216256 pmid:33147891 fatcat:wkkem6yeefaslcrmlyqxyolmhi

A Proposed Pedagogical Mobile Application for Learning Sign Language

Samir Abou El-Seoud, Islam Taj-Eddin, Ann Nosseir, Hosam El-Sofany, Nadine Abu Rumman
2013 International Journal of Interactive Mobile Technologies  
The user uses the graphical system to view and acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country.  ...  A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications.  ...  Preliminary experimental results have shown the effectiveness of the novel 3D agent sign language learning system.  ... 
doi:10.3991/ijim.v7i1.2387 fatcat:t6z4uqttdja3hdywdgpf45pzuu

Sign Language Lexicography in the Early 21st Century and a Recently Published Dictionary of Sign Language of the Netherlands

I. Zwitserlood
2010 International Journal of Lexicography  
Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, the situation in the Netherlands as well  ...  Sign language lexicography has thus far been a relatively obscure area in the world of lexicography.  ...  Acknowledgements This paper has much benefitted from comments on earlier versions by Adam Schembri, Ernst Thoutenhooft and an anonymous reviewer.  ... 
doi:10.1093/ijl/ecq031 fatcat:j2omhi4y3zaifdk75lvqkv4cz4

Recognition of Signed Expressions in an Experimental System Supporting Deaf Clients in the City Office

Tomasz Kapuscinski, Marian Wysocki
2020 Sensors  
The paper addresses the recognition of dynamic Polish Sign Language expressions in an experimental system supporting deaf people in an office when applying for an ID card.  ...  Preliminary observations and conclusions from the use of the system in a laboratory, as well as in real conditions with an experimental installation in the Office of Civil Affairs, are given.  ...  Experimental Results — Dataset and Tools: One hundred twenty-one Polish Sign Language expressions used to submit an application for an ID card or collect an ID card were recognized.  ... 
doi:10.3390/s20082190 pmid:32294930 pmcid:PMC7218867 fatcat:jopgo7dmdrez3pkylwxnkduxfy

Transfer Learning in Sign language

Ali Farhadi, David Forsyth, Ryan White
2007 2007 IEEE Conference on Computer Vision and Pattern Recognition  
This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data.  ...  We build word models for American Sign Language (ASL) that transfer between different signers and different aspects.  ...  We are currently studying the use of these technologies for activity recognition.  ... 
doi:10.1109/cvpr.2007.383346 dblp:conf/cvpr/FarhadiFW07 fatcat:vm55ghpdb5hfvcvbde67ifoexq

Gesture, sign, and language: The coming of age of sign language and gesture studies

Susan Goldin-Meadow, Diane Brentari
2015 Behavioral and Brain Sciences  
We end by calling for new technology that may help us better calibrate the borders between sign and gesture.  ...  certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture.  ...  Results illustrate that different recognition strategies are in play between these groups, because the lexicality effect was present only in deaf individuals using mainly sign language to communicate.  ... 
doi:10.1017/s0140525x15001247 pmid:26434499 pmcid:PMC4821822 fatcat:c6gugoyrrna67ak2xhy34ycx5a
Showing results 1 — 15 out of 109 results