Imaging a cognitive model of apraxia: The neural substrate of gesture-specific cognitive processes
2004
Human Brain Mapping
However, a functional analysis of brain imaging data suggested that one single memory store may be used for "to-be-perceived" and "to-be-produced" gestural representations, departing from Rothi et al.' ...
verbal command, imitation of familiar gestures, imitation of novel gestures, and an action-semantic task that consisted in matching objects for functional use. ...
ACKNOWLEDGMENTS We thank the technical staff of the Centre de Recherches du Cyclotron for kind and professional assistance, A. Komaromi for support and gesture demonstration on videotapes, P. ...
doi:10.1002/hbm.10161
pmid:14755833
fatcat:dvgefh72cnem3czofmal5fpg34
A Role for the Action Observation Network in Apraxia After Stroke
2019
Frontiers in Human Neuroscience
These included a meaningless gesture imitation task, a gesture production task involving pantomiming transitive and intransitive gestures, and a gesture recognition task involving recognition of these ...
In a large cohort of unselected stroke patients with lesions to the left, right, and bilateral hemispheres, we used voxel-based lesion-symptom mapping (VLSM) on clinical CT head images to identify the ...
Gesture Recognition In the Gesture Recognition Task, the examiner produced six actions, which patients had to recognize: three transitive (using a cup, using a key, using a lighter) and three intransitive ...
doi:10.3389/fnhum.2019.00422
pmid:31920586
pmcid:PMC6933001
fatcat:oc3ip5cd2nc5hn7r5lnghhreqq
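The VLSM method named in this entry compares, voxel by voxel, the behavioral scores of patients whose lesion covers that voxel against those of patients whose lesion spares it. A minimal sketch on synthetic data (the toy numbers, the Welch t statistic, and all function names are illustrative assumptions, not taken from the study):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def vlsm_t_map(lesions, scores, min_n=3):
    """Voxelwise lesion-symptom map: at each voxel, compare scores of
    patients with a lesion there against those without one."""
    n_vox = lesions.shape[1]
    t_map = np.full(n_vox, np.nan)        # NaN where too few patients
    for v in range(n_vox):
        lesioned = scores[lesions[:, v]]
        spared = scores[~lesions[:, v]]
        if len(lesioned) >= min_n and len(spared) >= min_n:
            t_map[v] = welch_t(spared, lesioned)  # >0: lesion lowers score
    return t_map

# Hypothetical toy data: 20 patients, 500 voxels (flattened lesion maps).
rng = np.random.default_rng(0)
lesions = rng.random((20, 500)) < 0.2     # True where a voxel is lesioned
scores = rng.normal(50, 10, size=20)      # one behavioral score per patient
t_map = vlsm_t_map(lesions, scores)
```

Real VLSM pipelines add permutation-based correction for the many voxelwise tests; this sketch only shows the core comparison.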
Thematic knowledge, artifact concepts, and the left posterior temporal lobe: Where action and object semantics converge
2016
Cortex
To test this hypothesis, we evaluated processing of taxonomic and thematic relations for artifact and natural objects as well as artifact action knowledge (gesture recognition) abilities in a large sample ...
Moreover, response times for identifying thematic relations for artifacts selectively predicted performance in gesture recognition. ...
Branch Coslett with lesion identification and Alexis Kington with testing patients. This research was funded by NIH grant R01-NS065049 to Laurel Buxbaum. ...
doi:10.1016/j.cortex.2016.06.008
pmid:27389801
pmcid:PMC4969110
fatcat:f54hi5rjqjhu7j6xadd5bncfjy
From single cells to social perception
2011
Philosophical Transactions of the Royal Society of London. Biological Sciences
Principally, we describe cells recorded from the non-human primate, although a limited number of cells have been recorded in humans, and are included in order to appraise the validity of non-human physiological ...
Research describing the cellular coding of faces in non-human primates often provides the underlying physiological framework for our understanding of face processing in humans. ...
OPPONENT AND POPULATION CODING A recent study of cells responsive to faces in the middle temporal patch confirms that cells are tuned to several facial features and to their configuration (as had been ...
doi:10.1098/rstb.2010.0352
pmid:21536557
pmcid:PMC3130376
fatcat:nvjhz6g4jfeo7gcyio24inja3y
A Platform for Building New Human-Computer Interface Systems that Support Online Automatic Recognition of Audio-Gestural Commands
2016
Proceedings of the 2016 ACM on Multimedia Conference - MM '16
It includes a component for acquiring multimodal user data which is used as input to a module responsible for training audio-gestural models. ...
We introduce a new framework to build human-computer interfaces that provide online automatic audio-gestural command recognition. ...
ACKNOWLEDGMENTS This research work was supported by the EU under the projects MOBOT with grant FP7-ICT-2011.2.1-600796 and I-SUPPORT with grant H2020-643666. ...
doi:10.1145/2964284.2973794
dblp:conf/mm/KardarisRPAM16
fatcat:pglaabzz2bhide6qebbnaofgp4
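The snippet does not show how the platform combines its audio and gestural models; one common approach in such systems is late fusion of per-command posteriors. A hypothetical sketch (the command set, weights, and scores are invented for illustration and are not from the paper):

```python
import numpy as np

def fuse_scores(audio_probs, gesture_probs, w_audio=0.5):
    """Late fusion: weighted average of per-command class posteriors
    from an audio model and a gesture model; returns the winning index."""
    fused = (w_audio * np.asarray(audio_probs)
             + (1 - w_audio) * np.asarray(gesture_probs))
    return int(np.argmax(fused))

commands = ["come", "stop", "turn"]
audio = [0.2, 0.7, 0.1]      # audio model favors "stop"
gesture = [0.6, 0.3, 0.1]    # gesture model favors "come"
print(commands[fuse_scores(audio, gesture)])  # prints "stop": fused = [0.4, 0.5, 0.1]
```

With equal weights the audio model's stronger margin wins; shifting `w_audio` lets a deployment favor the more reliable modality.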
Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras
2022
arXiv (preprint)
In particular, we focus on head and eye gestures, and adopt an egocentric (first-person) perspective using eyewear cameras. ...
We argue that this egocentric view offers a number of conceptual and technical benefits over scene- or robot-centric perspectives. ...
Network architecture and training For gesture recognition, we propose a motion-based approach which processes image sequences at two temporal levels (Figure 1 ). ...
arXiv:2201.11500v1
fatcat:fki7y7crrvbbvfvxyjd7rbfpem
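The snippet only names the authors' motion-based, two-temporal-level design; the general idea of summarizing an image sequence at two timescales can be sketched as motion statistics over a short and a long window (window sizes and the frame-difference feature are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute frame-to-frame intensity difference per step."""
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def two_level_features(frames, short=5, long=25):
    """Summarize recent motion at two temporal scales:
    a short window (fast head/eye movements) and a long one (context)."""
    e = motion_energy(frames)
    return np.array([e[-short:].mean(), e[-long:].mean()])

# Toy clip: 30 grayscale 32x32 frames, with motion starting at frame 20.
rng = np.random.default_rng(1)
frames = np.zeros((30, 32, 32))
frames[20:] = rng.random((10, 32, 32))
feat = two_level_features(frames)   # short-window energy exceeds long-window
```

A classifier over such multi-scale features can separate brief gestures (high short-window energy) from sustained motion.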
Conceptual and lexical effects on gestures: the case of vertical spatial metaphors for time in Chinese
2017
Language, Cognition and Neuroscience
Additionally, Chinese-English bilinguals prefer vertical gestures to lateral gestures when perceiving Chinese time references with vertical spatial metaphors and the corresponding English translations, ...
In conclusion, the vertical gesturing about time by Chinese-English bilinguals is shaped both by stable language-specific conceptualisations and by online changes in linguistic choices. ...
We thank Yeqiu Zheng for ...
doi:10.1080/23273798.2017.1283425
fatcat:h2b4iejjbjbs3anibnp2bvnzry
Dyadic brain modelling, mirror systems and the ontogenetic ritualization of ape gesture
2014
Philosophical Transactions of the Royal Society of London. Biological Sciences
We earlier offered a conceptual model of the brain mechanisms that could support this process, using an example of a child's initial pulling on a mother eventually yielding a 'beckoning' gesture [10] . ...
However, a few group-specific gestures have been observed in ape populations, suggesting a role for social learning [3] . ...
'gestural space'. ...
doi:10.1098/rstb.2013.0414
pmid:24778382
pmcid:PMC4006188
fatcat:a6zj3wj5o5eezpiru53jcre46m
Towards an interactive multimedia experience for club music and dance
2009
Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia - MoMM '09
We describe our work in the application of three types of successful movement recognition applied in the field of Tai Chi with the objective being to identify gestural primitives of club dance associated ...
In this approach, dance movements are first recognized and classified and then mapped, using multiple levels of complexity, to higher level algorithms that can modify multimedia content in real time. ...
If regions are sparsely populated, then fewer codes are used to specify that space. ...
doi:10.1145/1821748.1821773
dblp:conf/momm/MajoeKS09
fatcat:btagq7xlpbhkvdx5mpxp3ubd34
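The remark that sparsely populated regions get fewer codes describes density-adaptive quantization of the gesture space. A minimal sketch of that allocation idea (the proportional rule and all numbers are hypothetical, not the paper's scheme):

```python
import numpy as np

def allocate_codes(samples_per_region, total_codes):
    """Give each region a code budget proportional to how densely it is
    populated, with at least one code per non-empty region."""
    counts = np.asarray(samples_per_region, dtype=float)
    raw = total_codes * counts / counts.sum()
    codes = np.maximum(np.floor(raw), (counts > 0).astype(int)).astype(int)
    return codes

# Toy example: three regions of gesture space with very different densities.
print(allocate_codes([100, 10, 2], total_codes=16))  # codes: [14, 1, 1]
```

Densely used parts of the movement space thus get fine-grained codes, while rarely visited regions are represented coarsely.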
Identifying Students' Conceptions of Basic Principles in Sequence Stratigraphy
2013
Journal of Geoscience education
Sequence stratigraphy is a major research subject in geoscience academia and the oil industry. ...
Using constant comparative analysis, we documented students' conceptions about eustasy, relative sea level, base level, and accommodation. ...
We thank colleagues who helped to independently code the data, Lana Zimmer, Laura Weber, and Jiyoung Yi. Our recognition also goes to all the student participants who volunteered for this study. ...
doi:10.5408/12-290.1
fatcat:xdemqnikzjgblic2dr6vyzceca
The openinterface framework
2008
Proceeding of the twenty-sixth annual CHI conference extended abstracts on Human factors in computing systems - CHI '08
In addition, to enable the rapid exploration of the multimodal design space for a given system, we need to capitalize on past experiences and include a large set of multimodal interaction techniques, their ...
The OI underlying conceptual component model includes both generic and tailored components. ...
The user can navigate and zoom using several interaction techniques, such as (speech and gesture) or (pressure and gesture). Pressure is achieved using the interface-Z sensor. ...
doi:10.1145/1358628.1358881
dblp:conf/chi/SerranoNLRMD08
fatcat:f3oczozxgjbp3kybk3ax7fcdc4
Multimodal comprehension in left hemisphere stroke patients
2020
Cortex
Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video-clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only ...
These conclusions are further supported by associations between performance in the experimental tasks and performance in tests assessing lexical-semantic processing and gesture recognition. ...
Patients also completed a control task to ensure they understood the verbs used in the gesture recognition tasks. ...
doi:10.1016/j.cortex.2020.09.025
pmid:33161278
pmcid:PMC8105917
fatcat:os67altqfvchvjnux4xipzwe4u
"It/I": A Theater Play Featuring an Autonomous Computer Character
2002
Presence - Teleoperators and Virtual Environments
with a complex temporal structure or a strong underlying narrative. ...
In particular we describe the interval script paradigm used to program the computer character and the ACTSCRIPT language for communication of actions and goals. ...
of some specific gestures (using [9] ). ...
doi:10.1162/105474602320935865
fatcat:464ghbn7t5gnppywyuboz7sjn4
Limb apraxias
2000
Brain
... through which these errors are elicited, based on a two-system model for the organization of action: a conceptual system and a production system. Dysfunction of the former would cause ideational (or conceptual) apraxia ...
Imitation of gesture (meaningful versus meaningless) ... postures and movements also seems to be subserved by dedicated neural systems ... produce praxic errors, such as abnormal limb orientation and configuration, resembling those observed in patients ...
... helpful comments on the previous draft of the review, and both reviewers for detailed ...
doi:10.1093/brain/123.5.860
pmid:10775533
fatcat:n46nynlumrgtdmrvz6a3l5gouq
The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking
2019
Behavior Research Methods
We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance ...
In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come ...
Author note All data and analyses code used for this article are available at the Open Science Framework: https://osf.io/rgfv3/. ...
doi:10.3758/s13428-019-01271-9
pmid:31659689
fatcat:jvik6kriqbbbffzisahbuzhfh4
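The pixel-change method that the tutorial benchmarks against deep-learning tracking can be sketched as a frame-differencing motion series, from which gesture timing is read off. A minimal illustration on synthetic frames (the threshold and onset rule are assumptions, not the tutorial's exact procedure):

```python
import numpy as np

def pixel_change_series(frames, threshold=0.05):
    """Per-step motion estimate: fraction of pixels whose intensity
    changes by more than `threshold` relative to the previous frame."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return (diffs > threshold).mean(axis=(1, 2))

def gesture_onset(motion, rel=0.5):
    """Index of the first step whose motion exceeds `rel` * peak motion."""
    idx = np.flatnonzero(motion > rel * motion.max())
    return int(idx[0]) if idx.size else -1

# Toy clip: 40 still 24x24 frames, then movement from frame 25 onward.
rng = np.random.default_rng(2)
frames = np.zeros((40, 24, 24))
frames[25:] = rng.random((15, 24, 24))
motion = pixel_change_series(frames)
print(gesture_onset(motion))  # prints 24: the step from frame 24 to 25
```

Aligning such motion onsets with speech timestamps gives a simple gesture-speech synchrony measure without markers or a tracking device.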
Showing results 1 — 15 out of 12,140 results