
Combining Textual and Visual Representations for Multimodal Author Profiling: Notebook for PAN at CLEF 2018

Sebastián Sierra, Fabio A. González
2018 Conference and Labs of the Evaluation Forum  
of the user (profile).  ...  This year's task consisted of identifying gender using multimodal information (text and images) extracted from Twitter users.  ...  Multimodal approaches for Author Profiling have been considered by [1, 14, 25]. Álvarez-Carmona et al.  ...
dblp:conf/clef/SierraG18 fatcat:hmduiukzhzclppl526z4ob2q5q

A Deep Learning Framework for Multimodal Course Recommendation Based on LSTM+Attention

Xinwei Ren, Wei Yang, Xianliang Jiang, Guang Jin, Yan Yu
2022 Sustainability  
The model uses course video, audio, title, and introduction for multimodal fusion.  ...  To solve this problem, we propose a deep course recommendation model with multimodal feature extraction based on the Long Short-Term Memory (LSTM) network and Attention mechanism.  ...  Videos contain rich modal features, including video, audio, text, tags, and other multimodal data. Through these multimodal features, users' deep interests can be mined.  ...
doi:10.3390/su14052907 doaj:8371c9e0f6d64ee9a45830a3629a8137 fatcat:epahechflbdapcqxjg4mottnuu

Identifying Illicit Drug Dealers on Instagram with Large-scale Multimodal Data Fusion [article]

Chuanbo Hu, Minglei Yin, Bin Liu, Xin Li, Yanfang Ye
2021 arXiv   pre-print
We then design a quadruple-based multimodal fusion method to combine the multiple data sources associated with each user account for drug dealer identification.  ...  Unlike existing methods that focus on posting-based detection, we propose to tackle the problem of illicit drug dealer identification by constructing a large-scale multimodal dataset named Identifying  ...  User profiling through deep multimodal fusion. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. 171–179. [14] Santo Fortunato. 2010.  ... 
arXiv:2108.08301v2 fatcat:r5omsmxaenfslcy6zdkt427ggq

IEEE Access Special Section Editorial: Intelligent Biometric Systems for Secure Societies

Marina L. Gavrilova, Gee-Sern Hsu, Khalid Saeed, Svetlana Yanushkevich
2021 IEEE Access  
Member, IEEE) is currently a Full Professor with the CSPS Department and an international expert in the area of biometric security, machine learning, pattern recognition, data analytics, and information fusion  ...  Her list of publications includes three coauthored books, over 30 books of conference proceedings, and more than 200 peer-reviewed articles on machine learning, biometric security, and multimodal cognitive  ...  The authors propose a person identification system that relies on users' writing profiles, reply, retweet, shared weblinks, trendy topic networks, and temporal profiles extracted from users' social behavioral  ... 
doi:10.1109/access.2021.3078343 fatcat:uoxaq7ldnnfdddk5c32ujjoccu

Multimodal Deep Learning for Activity and Context Recognition

Valentin Radu, Catherine Tong, Sourav Bhattacharya, Nicholas D. Lane, Cecilia Mascolo, Mahesh K. Marina, Fahim Kawsar
2018 Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies  
This paper studies the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor systems.  ...  Wearables and mobile devices see the world through the lens of half a dozen low-power sensors, such as barometers, accelerometers, microphones and proximity detectors.  ...  Multimodal Sensor Fusion. Conceptually, classification models based on multimodal sensor data have a clear relationship to techniques of sensor fusion.  ...
doi:10.1145/3161174 fatcat:dvp6jljcx5a23lw4knaimlcv3m

A Multi-Modality Deep Network for Cold-Start Recommendation

2018 Big Data and Cognitive Computing  
Secondly, most algorithms only use content features as the prior knowledge to improve the estimation of user and item profiles, but the ratings do not directly provide feedback to guide feature extraction  ...  However, CF approaches suffer from the cold-start problem for users and items with few ratings.  ...  Rating prediction with deep fused embedding. Deep Fusion for Multimodal Embedding: We propose a general deep fusion framework for multimodal embedding (feature extraction).  ...
doi:10.3390/bdcc2010007 fatcat:b22oovdzyjaetlxjvpepj2bbpq

Learn to Combine Modalities in Multimodal Deep Learning [article]

Kuan Liu, Yanen Li, Ning Xu, Prem Natarajan
2018 arXiv   pre-print
In this work we propose a novel deep neural network based technique that multiplicatively combines information from different source modalities.  ...  We demonstrate the effectiveness of our proposed technique by presenting empirical results on three multimodal classification tasks from different domains.  ...  Deep neural networks are very actively explored in multimodal fusion [38].  ...
arXiv:1805.11730v1 fatcat:dlrbee6obbcjjgxk2ethfyxocm


Feida Zhu, Yongfeng Zhang, Neil Yorke-Smith, Guibing Guo, Xu Chen
2018 Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining - WSDM '18  
With the advent of Web 2.0, a great deal of multimodal information has accumulated, which provides us with the opportunity to profile users in a more comprehensive manner.  ...  The key to a successful recommendation system lies in accurate user/item profiling.  ...  THEME AND TOPICS: Papers should elaborate on theories and methods of information fusion for user modeling and personalization.  ...
doi:10.1145/3159652.3160592 dblp:conf/wsdm/ZhuZYGC18 fatcat:a4q4sgddybedthzhrkq7d3m7nu

Methods of AI for Multimodal Sensing and Action for Complex Situations

Erik Blasch, Robert Cruise, Alexander Aved, Uttam Majumder, Todd Rovito
2019 The AI Magazine  
We propose a decisions-to-data multimodal sensor and action through contextual agents (human or machine) that seek, combine, and make sense of relevant data.  ...  A decisions-to-data example is presented of a command-guided swarm requiring contextual data analysis, systems-level design, and user interaction for effective and efficient multimodal sensing and action  ...  Deep Multimodal Image Fusion Analysis. (A) Swarm of sensors. (B) Imagery showing number of pixels for object recognition.  ...
doi:10.1609/aimag.v40i4.4813 fatcat:mfjsclsdsjek7blltazmtwftd4

Multimodal Authentication of Ocular Biometric and Finger Vein Verification in Smartphones: A Review

Dheeraj Hebri, Vasudeva
2018 International Journal of Engineering & Technology  
Finally, possible directions in the multimodal biometric authentication system for the future work are also discussed.  ...  This paper attempts to review various recent and advanced multimodal finger vein and ocular biometric authentication systems.  ...  There are three types of fusion methods that are used in the multimodal biometric system, which include fusion at the feature extraction level, matcher score level and fusion at the decision level (Kalra  ... 
doi:10.14419/ijet.v7i3.12.15909 fatcat:d2l6qttibvgvzjadwj4bx7cfui

Convolution Neural Network Based Deep Feature Fusion for Palmprint and Handvein

2019 International Journal of Engineering and Advanced Technology  
A distinctive Deep Convolutional Neural Network (CNN) architecture is presented in this work which proficiently represents complex image features.  ...  The new method presented in this work gives 99% GAR in unimodal biometric verification, and experiments were also conducted combining it with deep  ...  To address these issues, deep convolutional neural networks (DCNNs) are adopted, which can learn higher-level features from enormous training samples through this deep architecture.  ...
doi:10.35940/ijeat.f9217.109119 fatcat:xvlktgenwrgz3jbx4zyy4fszui

Multimodal Fusion Algorithm and Reinforcement Learning-Based Dialog System in Human-Machine Interaction

Hanif Fakhrurroja, Carmadi Machbub, Ary Setijadi Prihatmanto, Ayu Purwarianti (Institut Teknologi Bandung, School of Electrical Engineering and Informatics, Indonesia)
2020 International Journal on Electrical Engineering and Informatics  
The level of user satisfaction towards the multimodal recognition-based human-machine interaction system developed was 95%.  ...  The research contributes to an easier and more natural human-machine interaction system using multimodal fusion-based systems.  ...  Furthermore, the accuracy of multimodal fusion systems can be improved by adding machine learning methods, such as deep reinforcement learning.  ...
doi:10.15676/ijeei.2020.12.4.19 fatcat:tun3mqo3a5cn7d5sui7bdd2o6y

Audio-visual encoding of multimedia content for enhancing movie recommendations

Yashar Deldjoo, Mihai Gabriel Constantin, Hamid Eghbal-Zadeh, Bogdan Ionescu, Markus Schedl, Paolo Cremonesi
2018 Proceedings of the 12th ACM Conference on Recommender Systems - RecSys '18  
This paper presents the method proposed for the recommender system task in MediaEval 2018 on predicting user global ratings given to movies and their standard deviation through the audiovisual content  ...  The best results are obtained in cross-modal fusion for i-vector + AVF (compare 0.54 vs. Genre: 0.53) and BLF + Deep (0.54).  ...  Another example is [11], in which a hybrid MRS using tags and ratings is proposed, where user profiles are formed based on users' interaction in a social movie network.  ...
doi:10.1145/3240323.3240407 dblp:conf/recsys/DeldjooCEISC18 fatcat:2z7njwv5fbhw7ng55446grwidu

Going Beyond RF: How AI-enabled Multimodal Beamforming will Shape the NextG Standard [article]

Debashri Roy, Batool Salehi, Stella Banou, Subhramoy Mohanti, Guillem Reus-Muns, Mauro Belgiovine, Prashant Ganesh, Carlos Bocanegra, Chris Dick, Kaushik Chowdhury
2022 arXiv   pre-print
This so-called idea of multimodal beamforming will require deep learning based fusion techniques, which will serve to augment the current RF-only and classical signal processing methods that do not scale  ...  The survey describes relevant deep learning architectures for multimodal beamforming, identifies computational challenges and the role of edge computing in this process, dataset generation tools, and finally  ...  Fig. 11: Proposed multi-level deep fusion framework at ultimate layers for multimodal beamforming.  ...
arXiv:2203.16706v1 fatcat:44pger2flveondbtachzhcdgam

Score and Rank Level Fusion Algorithms for Social Behavioral Biometrics

Sanjida Nasreen Tumpa, Marina L. Gavrilova
2020 IEEE Access  
The experimental results establish that the users' writing profiles have the highest impact over other social biometric features and that score level fusion algorithms perform better than rank level fusion  ...  This research investigates the impact of users' writing profiles on OSN to conclude whether such profiles contribute to SBB.  ...  This can assist online network users in securing their accounts through continuous authentication. In addition, anonymous criminal activities in the OSN can be identified through this research area.  ... 
doi:10.1109/access.2020.3018958 fatcat:5f3x2hxn3bhstgqg2hgxipc33y