1,475 Hits in 3.2 sec

Semi-Synchronous Speech and Pen Input

Yasushi Watanabe, Kenji Iwata, Ryuta Nakagawa, Koichi Shinoda, Sadaoki Furui
2007 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07  
This paper proposes a new interface method using semi-synchronous speech and pen input for mobile environments.  ...  A multimodal recognition algorithm that can handle the asynchronicity of the two modes using a segment-based unification scheme is proposed.  ...  In addition, we developed an algorithm that combines speech and pen input in a segment-based unification scheme.  ... 
doi:10.1109/icassp.2007.366936 dblp:conf/icassp/WatanabeINSF07 fatcat:sezkkmbiejhp5g7ocved7duoji

A Model based Approach for Multimodal Biometric Recognition

Manas Kumar Choudhury, Y. Srinivas
2014 International Journal of Computer Applications  
This paper presents a novel method for recognition of user identity based on multiple traits.  ...  The performance of the model is evaluated on synthetic data using metrics such as the False Acceptance Rate (FAR) and False Rejection Rate (FRR).  ...  The database consists of 100 fingerprints, 100 facial images, and speech signals of the same 100 subjects.  ... 
doi:10.5120/18250-9338 fatcat:ot7mkwerwzbtljg3sd2wu4ez64
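The FAR and FRR metrics named in the entry above can be computed from match scores at a fixed decision threshold. The sketch below is an illustration only; the score values are synthetic and are not data from the paper.

```python
# Hedged sketch: computing False Acceptance Rate (FAR) and False
# Rejection Rate (FRR) from similarity scores at a fixed threshold.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR = fraction of impostor scores accepted (>= threshold);
    FRR = fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Synthetic illustrative scores (higher = stronger claimed match).
genuine = [0.9, 0.8, 0.65, 0.6, 0.85]
impostor = [0.3, 0.4, 0.65, 0.2, 0.5]
far, frr = far_frr(genuine, impostor, threshold=0.7)
# far = 0.0 (no impostor score reaches 0.7); frr = 0.4 (2 of 5 genuine below 0.7)
```

Sweeping the threshold trades FAR against FRR; the operating point where the two are equal is the equal error rate often reported alongside them.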

Multimodal Biometrics Based on Identification and Verification System

Osamah Al-Hamdani
2013 Journal of Biometrics & Biostatistics  
In order to reach a higher security level, an alternative multimodal approach with a fusion technique was implemented in the system.  ...  The need for increased reliability and security in a biometric system is motivated by the fact that there is no single technology that can realize multi-purpose scenarios.  ...  Acknowledgment We would like to thank the Centre of Biomedical Engineering, University Technology Malaysia (UTM) research grant (GUP-flagship Q.J130000.2436.00G31), and the Ministry of Higher Education  ... 
doi:10.4172/2155-6180.1000163 fatcat:4foaeaopfve2fkpv7543xzsceq

Multimodal Systems: Taxonomy, Methods, and Challenges [article]

Muhammad Zeeshan Baig, Manolya Kavakli
2020 arXiv   pre-print
The commonly used inputs for multimodal interaction are speech and gestures.  ...  The modalities are processed both sequentially and in parallel for communication in the human brain; this changes when humans interact with computers.  ...  Koons et al. also implemented an MMIS for a map-based application that uses speech and gesture for interaction [19].  ... 
arXiv:2006.03813v1 fatcat:qenme7xocjeede374ck46ucx4u

Multi Modal Biometric System: A Review on Recognition Method

Dr. Gandhimathi Amirthalingam, Saranya Subramaniam
2017 International Journal of Engineering Research and  
An evaluation of multi-biometric technology and its conclusions are also given.  ...  Methods that use multiple types of biometric sources for identification purposes (multi-modal biometrics) are reviewed.  ...  The literature review shows the performance evaluation of multimodal biometrics for two and three modalities across different combinations of algorithms [15].  ... 
doi:10.17577/ijertv6is050102 fatcat:z3u632ry5fgbbfcivesokd7ttu

Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions [chapter]

Khiet P. Truong, David A. van Leeuwen, Mark A. Neerincx
2007 Lecture Notes in Computer Science  
First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions.  ...  Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions.  ...  In general, the use of certain multimodal features depends on the application that one has in mind and the allowed degree of obtrusiveness.  ... 
doi:10.1007/978-3-540-73216-7_40 fatcat:fl7cvonrcfchjoahwu6os3vphe

Multilingual Audio-Visual Smartphone Dataset And Evaluation [article]

Hareesh Mandalapu, Aravinda Reddy P N, Raghavendra Ramachandra, K Sreenivasa Rao, Pabitra Mitra, S R Mahadeva Prasanna, Christoph Busch
2021 arXiv   pre-print
Smartphones have been employed with biometric-based verification systems to provide security in highly sensitive applications.  ...  Audio-visual biometrics are gaining popularity due to their usability, and they are challenging to spoof because of their multimodal nature.  ...  Sébastien Marcel for the data capture mobile application developed as a part of SWAN (Secured access over Wide Area Network) project funded by the Research Council of Norway (Grant No.  ... 
arXiv:2109.04138v2 fatcat:a3tk7qj44fh6tielw6uopgjzh4

Multilingual Audio-Visual Smartphone Dataset And Evaluation

Hareesh Mandalapu, P N Aravinda Reddy, Raghavendra Ramachandra, K Sreenivasa Rao, Pabitra Mitra, S R Mahadeva Prasanna, Christoph Busch
2021 IEEE Access  
Smartphones have been employed with biometric-based verification systems to provide security in highly sensitive applications.  ...  The robustness of biometric algorithms is evaluated towards multiple dependencies like signal noise, device, language and presentation attacks like replay and synthesized signals with extensive experiments  ...  Sébastien Marcel for the data capture mobile application developed as a part of SWAN (Secured access over Wide Area Network) project funded by the Research Council of Norway (Grant No.  ... 
doi:10.1109/access.2021.3125485 fatcat:x4mgurwaybegtcmtdm4u2jguzu

The Survey of Architecture of Multi-Modal (Fingerprint and Iris Recognition) Biometric Authentication System

Afshan Ashraf, Isha Vats
2017 International Journal of Engineering Research and Applications  
This system provides an effective fusion structure that combines information provided by the multiple field experts based on decision-level and score-level fusion methods, thereby increasing the efficiency  ...  Biometrics-based individual identification is regarded as an effective technique for automatically recognizing, with high confidence, a person's identity.  ...  Vincenzo Conti, 2013 [15]: In this section, fingerprint- and iris-based uni-modal and multimodal verification systems will be described, analysed and evaluated.  ... 
doi:10.9790/9622-0704031625 fatcat:uwjd7qlerbdb3kflwc73djyn7a
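The entry above distinguishes score-level from decision-level fusion. The sketch below illustrates both under stated assumptions: the weights, score ranges, and thresholds are illustrative choices, not values from the survey.

```python
# Hedged sketch of combining a fingerprint matcher and an iris matcher.

def min_max_norm(score, lo, hi):
    """Map a raw matcher score into [0, 1] via min-max normalisation."""
    return (score - lo) / (hi - lo)

def fuse_scores(fp_score, iris_score, w_fp=0.5, w_iris=0.5,
                fp_range=(0.0, 100.0), iris_range=(0.0, 1.0)):
    """Score-level fusion: normalise each matcher's score, then take
    a weighted sum; a single threshold is applied to the fused score."""
    fp_n = min_max_norm(fp_score, *fp_range)
    iris_n = min_max_norm(iris_score, *iris_range)
    return w_fp * fp_n + w_iris * iris_n

def fuse_decisions(fp_score, iris_score, fp_thr=60.0, iris_thr=0.7):
    """Decision-level fusion: each modality is thresholded separately
    and the accept/reject decisions are combined (AND rule here)."""
    return fp_score >= fp_thr and iris_score >= iris_thr

fused = fuse_scores(80.0, 0.9)      # 0.5 * 0.8 + 0.5 * 0.9 = 0.85
accept = fuse_decisions(80.0, 0.9)  # both thresholds met -> True
```

Score-level fusion preserves matcher confidence and usually outperforms decision-level fusion, at the cost of needing comparable (normalised) score scales.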

Transformed Secure Feed Forward Supervised Learning Method for Authentication in Multi-Model Biometric System

Multimodal biometrics are used in a variety of application areas, such as human-computer interfaces and unique sensor-based detection methods.  ...  The physical and social characteristics are used to identify an individual using a multimodal biometric system.  ...  In the base paper, some issues arise due to the high detection value. A kernel-mapping method based on support vector machines was developed for multimodal detection.  ... 
doi:10.35940/ijitee.i8449.078919 fatcat:mlp2ad5tgjcpbkklrlrn5w4q2m

Multimodal Emotion Recognition using Deep Learning

Sharmeen M. Saleem Abdullah, Siddeeq Y. Ameen, Mohammed Mohammed Sadeeq, Subhi Zeebaree
2021 Journal of Applied Science and Technology Trends  
This paper presents a review of emotion recognition from multimodal signals using deep learning, comparing applications based on current studies.  ...  This would make it possible for such systems to be used in widespread fields, including education and medicine.  ...  He suggested an SVM-based multimodal speech emotion recognition method.  ... 
doi:10.38094/jastt20291 fatcat:2ofkuynxebgb5glhsaii5zcq4u

ElectroEmotion — A tool for producing emotional corpora collaboratively

Lassi A. Liikkanen, Giulio Jacucci, Matti Helin
2009 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops  
One difficulty in developing these applications is the lack of multimodal corpora suitable for multiple use contexts, such as public spaces.  ...  By performing a video-based interaction analysis, we found that the participants demonstrated spontaneous multimodal activity and more distinctively emotional expressions in response to the emotion induction  ...  ElectroEmotion was designed and implemented with the help of Rodolfo Samperio, Umair Khan, and Jari Kleimola, thank you!  ... 
doi:10.1109/acii.2009.5349576 dblp:conf/acii/LiikkanenJH09 fatcat:4iozxbsiy5fjxp5t4ajnqb3zrq

Multimodal Learning with Transformers: A Survey [article]

Peng Xu, Xiatian Zhu, David A. Clifton
2022 arXiv   pre-print
Thanks to the recent prevalence of multimodal applications and big data, Transformer-based multimodal learning has become a hot topic in AI research.  ...  , and multimodal Transformers, from a geometrically topological perspective, (3) a review of multimodal Transformer applications, via two important paradigms, i.e., for multimodal pretraining and for specific  ...  We hope that this survey gives a helpful and detailed overview for new researchers and practitioners, provides a convenient reference for relevant experts (e.g., multimodal machine learning researchers  ... 
arXiv:2206.06488v1 fatcat:6aoaczzbtvc43my2kmobo7glvy

Towards an intelligent framework for multimodal affective data analysis

Soujanya Poria, Erik Cambria, Amir Hussain, Guang-Bin Huang
2015 Neural Networks  
In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such  ...  An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day.  ...  Async. feature fusion, audio and video: 71.00%; Dobrišek et al. (2013), GMM, audio and video: 77.50%; proposed uni-modal method, SVM, audio: 78.57%; proposed uni-modal method, SVM, text: 78.70%; proposed uni-modal  ... 
doi:10.1016/j.neunet.2014.10.005 pmid:25523041 fatcat:xu4k5ywowfb2jgmuwgsdpozfzq

Bimodal HCI-related affect recognition

Zhihong Zeng, Jilin Tu, Ming Liu, Tong Zhang, Nicholas Rizzolo, Zhenqiu Zhang, Thomas S. Huang, Dan Roth, Stephen Levinson
2004 Proceedings of the 6th international conference on Multimodal interfaces - ICMI '04  
Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features.  ...  Perhaps the most fundamental application of affective computing would be Human-Computer Interaction (HCI) in which the computer is able to detect and track the user's affective states, and make corresponding  ...  Lawrence Chen for collecting the valuable data in this paper for audio-visual affect recognition.  ... 
doi:10.1145/1027933.1027958 dblp:conf/icmi/ZengTLZRZHRL04 fatcat:rtw7szlwobcnjcdl2nnoam5vjm