A Framework for Biometric and Interaction Performance Assessment of Automated Border Control Processes
2017
IEEE Transactions on Human-Machine Systems
The first, the Generic Model, maps separately the enrolment and verification phases of an ABC scenario. ...
The second, the Identity Claim Process, decomposes the verification phase of the Generic Model to an enhanced resolution of ABC implementations. ...
Deviation from a generic model may give indicators to user performance being affected by a new transaction sequence. ...
doi:10.1109/thms.2016.2611822
fatcat:w4rcwuhfb5ayjp2lgfccuw4lua
The MPI Emotional Body Expressions Database for Narrative Scenarios
2014
PLoS ONE
We report the results of physical motion properties analysis and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. ...
Moreover, many of these databases consist of video recordings which limit the ability to manipulate and analyse the physical properties of these stimuli. ...
As a result of the classification study, as many as 85% of motion sequences have a unique modal value in the distribution of observers' categorisation. ...
doi:10.1371/journal.pone.0113647
pmid:25461382
pmcid:PMC4252031
fatcat:lq7rtfpm5zea3dk5n37lghv5ha
Rule-Based Real-Time ADL Recognition in a Smart Home Environment
[chapter]
2016
Lecture Notes in Computer Science
This paper presents a rule-based approach for both offline and real-time recognition of Activities of Daily Living (ADL), leveraging events produced by a non-intrusive multi-modal sensor infrastructure ...
-If the active video sequence with the highest confidence agrees with PIR, we conclude the user is in the room. ...
doi:10.1007/978-3-319-42019-6_21
fatcat:a22pk7qa2zdsbafsa32obytcle
Sentic blending: Scalable multimodal fusion for the continuous interpretation of semantics and sentics
2013
2013 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI)
The capability of interpreting the conceptual and affective information associated with natural language through different modalities is a key issue for the enhancement of human-agent interaction. ...
space that enables the generation of a continuous stream characterising the user's semantic and sentic progress over time, even though the outputs of the unimodal categorical modules have very different time-scales ...
Success ratios considerably increase, meaning that the adopted classification strategy is consistent with human classification. [Table: Module 1 (modality: text), Module 2 (modality: video); categorisation model: Hourglass of ...]
doi:10.1109/cihli.2013.6613272
dblp:conf/cihli/CambriaHHH13
fatcat:3doea7r3rvdmzjj523w2fxuijq
A mobile sense of place: exploring a novel mixed methods user-centred approach to capturing data on urban cycling infrastructure
2021
Applied Mobilities
The paper explores a user-centred methodology for collecting, categorising, visualising, and interpreting data on urban cycling infrastructure and related cycling events. ...
Data from one Major Cycle Route is used to explore methods of data categorisation, visualisation and interpretation. ...
Disclosure statement No potential conflict of interest was reported by the authors. ...
doi:10.1080/23800127.2021.1893941
fatcat:lwsjtpbgv5c2bhbiablzj33dtm
Looping out loud: A multimodal analysis of humour on Vine
2017
The European Journal of Humour Research
the messages of the videos they post on Vine. ...
Findings show that users create instant characters to amplify the impact of their solo video recordings, use Vine as a "humorous confessional", explore the potential of hand-held media by relying on "one ...
Marcos, thank you for your precious contribution in the collection and categorisation of data and for being a great person to work with. ...
doi:10.7592/ejhr2016.4.4.marone
fatcat:uybgg7cwwnhmzelx4pwijia53m
MIMiC: Multimodal Interactive Motion Controller
2011
IEEE Transactions on Multimedia
The user commands can come from any modality including auditory, touch and gesture. ...
Results show real-time interaction and plausible motion generation between different types of movement. ...
Figure 1 (C) shows a 2D tracked contour of a face generated from a video sequence of a person listening to a speaker. ...
doi:10.1109/tmm.2010.2096410
fatcat:vtpytjhlyvf2vixh2vbjiwfwfq
Techniques used and open challenges to the analysis, indexing and retrieval of digital video
2007
Information Systems
of video archives based on video content, as easy as searching and browsing (text) web pages. ...
In this paper we give a brief review of the state of the art of video analysis, indexing and retrieval and we point to research directions which we think are promising and could make searching and browsing ...
If a user is to use keyframe based matching as part of retrieval then the user must be very precise in selecting query images and in general we find that keyframe based video retrieval is seldom used in ...
doi:10.1016/j.is.2006.09.001
fatcat:fwbnujz3tnaz7kbvmys3xo7lua
Unconstrained Monocular 3D Human Pose Estimation by Action Detection and Cross-Modality Regression Forest
2013
2013 IEEE Conference on Computer Vision and Pattern Recognition
We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. ...
Instead of holistic features, e.g. silhouettes, we leverage the flexibility of deformable part model to detect 2D body parts as a feature to estimate 3D poses. ...
APE dataset was collected with the help of the Imperial Computer Vision and Learning Lab. ...
doi:10.1109/cvpr.2013.467
dblp:conf/cvpr/YuKC13
fatcat:wqjabynqhbearpfrnsw22lkj7m
Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose
[article]
2020
arXiv
pre-print
Extensive experiments and two user studies show that our method can generate high-quality (i.e., personalized head movements, expressions and good lip synchronization) talking face videos, which are naturally ...
However, most existing talking face video generation methods only consider facial animation with fixed head pose. ...
RELATED WORK
Talking face generation Existing talking face video generation methods can be broadly categorised into two classes according to the driven signal. ...
arXiv:2002.10137v2
fatcat:jmxixru7gvb5bnvfleuaeybnee
Methods, Tools and Techniques for Multimodal Analysis of Accommodation in Intercultural Communication
2018
Computational Methods in Science and Technology
Previously tested methods of quantitative accommodation analysis are adjusted, supplemented with new custom procedures, and applied to each channel under study as well as to the cross-modal (e.g., prosody ...
The holistic approach to interpersonal communication in dialogue, involving the analysis of multiple sensory modalities and channels, poses a serious challenge not only in terms of research techniques ...
Paralinguistic aspects of intercultural communication). ...
doi:10.12921/cmst.2018.0000006
fatcat:3e2mgti4engmln7t6an4frfama
Evaluating a synthetic talking head using a dual task: Modality effects on speech understanding and cognitive load
2013
International Journal of Human-Computer Studies
Second, an AV advantage was hypothesized and supported by significantly shorter latencies for the AV modality on the primary task of Experiment 3 and with partial support in Experiment 1. ...
The dual task is a data-rich paradigm for evaluating speech modes of a synthetic talking head. ...
We thank Stelarc, Damith Herath for coordinating the programming of the experiment, Staci Parlato-Harris for assistance with data analysis, and three reviewers for helpful comments. ...
doi:10.1016/j.ijhcs.2012.12.003
fatcat:clnrztfjv5epdivutt6uyxgkou
Performance of image guided navigation in laparoscopic liver surgery – A systematic review
2021
Surgical Oncology
The aim of this systematic review is to provide an overview of their current capabilities and limitations. ...
Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. ...
Generally, solutions can be categorised into biomechanical models and data driven models. ...
doi:10.1016/j.suronc.2021.101637
pmid:34358880
fatcat:iik25ikn7nhntfngbvdocnpuci
RGB-D-based Human Motion Recognition with Deep Learning: A Survey
[article]
2018
arXiv
pre-print
Particularly, we highlight the methods of encoding spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research. ...
The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. ...
In all sequences, a single user is recorded in front of the camera, performing natural communicative Italian gestures. ...
arXiv:1711.08362v2
fatcat:cugugpqeffcshnwwto4z2aw4ti
How do teens learn to play video games?
2019
Journal of Information Literacy
This set of modalities, categories and oppositions should be considered as a first step in the construction of a set of analytical tools for describing and classifying ILS in the context of teens' video ...
After briefly outlining the situation of ILS and teens' transmedia skills, in the context of a general reflection on information literacy (IL) and transmedia literacy (TL), the methodological aspects of ...
The research confirmed the centrality of YouTube in teens' lives. It is a key element of their media culture and, in some cases, it has become their main source of information. ...
doi:10.11645/13.1.2358
fatcat:bixnhvjyazb5nhmco7wlpiaiqe
Showing results 1 — 15 out of 3,958 results