292 Hits in 5.8 sec

SOMA: Solving Optical Marker-Based MoCap Automatically [article]

Nima Ghorbani, Michael J. Black
2021 arXiv   pre-print
Marker-based optical motion capture (mocap) is the "gold standard" method for acquiring accurate 3D human motion in computer vision, medicine, and graphics.  ...  To enable learning, we generate massive training sets of simulated noisy and ground truth mocap markers animated by 3D bodies from AMASS.  ...  Introduction Marker-based optical motion capture (mocap) systems record 2D infrared images of light reflected or emitted by a set of markers placed at key locations on the surface of a subject's body.  ... 
arXiv:2110.04431v1 fatcat:ow5af3etmjcxvh2l2qt26uvgaq
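
The SOMA abstract above mentions generating large training sets of simulated noisy markers paired with ground-truth markers. As a purely illustrative sketch of that kind of data corruption (not the authors' actual pipeline), the snippet below adds Gaussian jitter, random occlusions, and spurious "ghost" markers to clean marker trajectories; all function names, noise levels, and probabilities are assumptions.

```python
# Illustrative sketch (not SOMA's pipeline): corrupt clean mocap markers with
# jitter, occlusions, and ghost points to build (noisy, ground-truth) pairs.
import numpy as np

def corrupt_markers(clean, jitter_std=0.005, p_occlude=0.05, n_ghost=3, rng=None):
    """clean: (T, M, 3) array of ground-truth marker positions in metres."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = clean + rng.normal(0.0, jitter_std, size=clean.shape)  # sensor jitter
    # Randomly occlude markers by setting them to NaN.
    occluded = rng.random(clean.shape[:2]) < p_occlude
    noisy[occluded] = np.nan
    # Append spurious "ghost" markers drawn inside the body's bounding box.
    lo, hi = clean.min(axis=(0, 1)), clean.max(axis=(0, 1))
    ghosts = rng.uniform(lo, hi, size=(clean.shape[0], n_ghost, 3))
    return np.concatenate([noisy, ghosts], axis=1)

clean = np.random.rand(120, 53, 3)   # 120 frames, 53 markers (toy data)
noisy = corrupt_markers(clean)       # training input; `clean` is the target
```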

Development and Validation of a Deep Learning Algorithm and Open-Source Platform for the Automatic Labelling of Motion Capture Markers

Allison L. Clouthier, Gwyneth B. Ross, Matthew P. Mavor, Isabel Coll, Alistair Boyle, Ryan B. Graham
2021 IEEE Access  
The purpose of this work was to develop an open-source deep learning-based algorithm for motion capture marker labelling that can be trained on measured or simulated marker trajectories.  ...  INDEX TERMS Optical motion capture, marker labelling, machine learning, biomechanics.  ...  INTRODUCTION Optical motion capture has been widely used for entertainment, clinical, and research applications to quantify human motion.  ... 
doi:10.1109/access.2021.3062748 fatcat:kg46za644zf5jlfmezdjmm36sa

Development and validation of a deep learning algorithm and open-source platform for the automatic labelling of motion capture markers [article]

Allison L Clouthier, Gwyneth B Ross, Matthew P Mavor, Isabel Coll, Alistair Boyle, Ryan B Graham
2021 bioRxiv   pre-print
The purpose of this work was to develop an open-source deep learning-based algorithm for motion capture marker labelling that can be trained on measured or simulated marker trajectories.  ...  The proposed labelling algorithm can be used to accurately label motion capture data in the presence of missing and extraneous markers and accuracy can be improved as data are collected, labelled, and  ...  Our aim was to develop an open-source algorithm that can automatically label motion capture markers using machine learning in the presence of occluded and extraneous markers.  ... 
doi:10.1101/2021.02.08.429993 fatcat:fysqcnmbczb7xlfjmqwzwl4xje
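
The two Clouthier et al. entries above frame marker labelling as assigning anatomical labels to unlabelled trajectories despite missing and extraneous markers. A generic way to finish such a pipeline, once any classifier has scored label/trajectory pairs, is to solve a one-to-one assignment; the sketch below uses the Hungarian algorithm and is an illustration under assumed names, not the published method.

```python
# Generic label-to-trajectory assignment (not the paper's algorithm): given a
# score matrix from any classifier, pick the best one-to-one labelling.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_labels(score):
    """score[i, j]: classifier confidence that trajectory i carries label j."""
    traj_idx, label_idx = linear_sum_assignment(-score)  # maximise total score
    return dict(zip(traj_idx.tolist(), label_idx.tolist()))

score = np.random.rand(10, 8)   # 10 observed trajectories, 8 marker labels
print(assign_labels(score))     # trajectories left unmatched are extraneous
```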

What Does a Hand-Over Tell?—Individuality of Short Motion Sequences

Holger H. Bekemeier, Jonathan W. Maycock, Helge J. Ritter
2019 Biomimetics  
We study this question by selecting a set of individual participant characteristics and analysing motion captured trajectories of an exemplary class of familiar movements, namely handover of an object to  ...  How much information with regard to identity and further individual participant characteristics is revealed by relatively short spatio-temporal motion trajectories of a person?  ...  Abbreviations The following abbreviations are used in this manuscript: MANOVA, multivariate analyses of variance; RANOVA, repeated measures analyses of variance  ... 
doi:10.3390/biomimetics4030055 pmid:31394826 pmcid:PMC6784304 fatcat:r6k5ge6agbfpvnxelbrhdxbese
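
The abbreviations in the entry above point to MANOVA/RANOVA analyses of per-trial motion features across participants. As a hedged sketch of that style of test (the feature names and data here are invented for illustration, not taken from the paper), one could run a MANOVA with statsmodels:

```python
# Sketch: do per-trial motion features differ between participants? (MANOVA)
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(["p1", "p2", "p3"], 20),
    "peak_speed": rng.normal(1.0, 0.1, 60),   # illustrative trajectory features
    "duration":   rng.normal(1.5, 0.2, 60),
    "path_len":   rng.normal(0.8, 0.05, 60),
})
fit = MANOVA.from_formula("peak_speed + duration + path_len ~ participant", data=df)
print(fit.mv_test())
```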

Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking [article]

Anqi Wu, Estefany Kelly Buchanan, Matthew R Whiteway, Michael Schartner, Guido T Meijer, Jean-Paul Noel, Erica Rodriguez, Claire Everett, Amy Norovich, Evan S Schaffer, Neeli Mishra, C. Daniel Salzman (+4 others)
2020 bioRxiv   pre-print
In this work, we improve on these methods (particularly in the regime of few training labels) by leveraging the rich spatiotemporal structures pervasive in behavioral video --- specifically, the spatial  ...  Noninvasive behavioral tracking of animals is crucial for many scientific investigations. Recent transfer learning approaches for behavioral tracking have considerably advanced the state of the art.  ...  Acknowledgments and Disclosure of Funding We thank the authors of DeepLabCut [4] for generously sharing their code and data. This work was supported by grants from the Wellcome Trust  ... 
doi:10.1101/2020.08.20.259705 fatcat:ytw7f4thq5cfrbv2ia6glccvu4
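
The Deep Graph Pose abstract above emphasises exploiting spatiotemporal structure in behavioral video when labels are scarce. One simple way to encode temporal structure, shown below, is a smoothness penalty on predicted keypoints across frames; this is an illustrative loss term, not the paper's semi-supervised graphical model.

```python
# Minimal sketch of a temporal-smoothness penalty on predicted keypoints.
# NOT Deep Graph Pose's model, just one way to use temporal structure.
import torch

def temporal_smoothness(keypoints, weight=1.0):
    """keypoints: (T, K, 2) predicted x/y coordinates over T frames."""
    diff = keypoints[1:] - keypoints[:-1]           # frame-to-frame displacement
    return weight * (diff ** 2).sum(dim=-1).mean()  # penalise large jumps

preds = torch.rand(100, 8, 2, requires_grad=True)   # toy predictions
loss = temporal_smoothness(preds)
loss.backward()                    # would be added to the supervised loss in practice
```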

The MAHNOB Mimicry Database: A database of naturalistic human interactions

Sanjay Bilakhia, Stavros Petridis, Anton Nijholt, Maja Pantic
2015 Pattern Recognition Letters  
The best reported results are session-dependent, and affected by the sparsity of positive examples in the data.  ...  To provide a benchmark for efforts in machine understanding of mimicry behaviour, we report a number of baseline experiments based on visual data only.  ...  They used motion energy, optical flow, and prosodic features, and calculated cross-correlation and magnitude coherence between all pairs of features, in one-second windows.  ... 
doi:10.1016/j.patrec.2015.03.005 fatcat:wd2vg6acyffkhcn2w3duftxngu
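
The baseline described in the entry above computes cross-correlation between feature pairs in one-second windows. A minimal sketch of that computation is below; the window length, sampling rate, zero-lag correlation, and normalisation are assumptions rather than the paper's exact settings.

```python
# Sketch of windowed (zero-lag) correlation between two feature streams,
# in the spirit of the one-second-window mimicry baselines mentioned above.
import numpy as np

def windowed_xcorr(a, b, fs=25, win_s=1.0):
    """Pearson correlation of features a and b in non-overlapping windows."""
    w = int(fs * win_s)
    n = min(len(a), len(b)) // w
    out = []
    for i in range(n):
        x, y = a[i * w:(i + 1) * w], b[i * w:(i + 1) * w]
        out.append(np.corrcoef(x, y)[0, 1])
    return np.array(out)

a, b = np.random.rand(500), np.random.rand(500)
print(windowed_xcorr(a, b)[:5])
```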

Neural Scene Decomposition for Multi-Person Motion Capture [article]

Helge Rhodin, Victor Constantin, Isinsu Katircioglu, Mathieu Salzmann, Pascal Fua
2019 arXiv   pre-print
However, when it comes to 3D motion capture of multiple people, these features are only of limited use.  ...  limited labeled data.  ...  This work was supported by the Swiss National Science Foundation and a Microsoft JRC Project.  ... 
arXiv:1903.05684v1 fatcat:emq6hahrarbhtdqkeu5gyrvhrm

Neural Scene Decomposition for Multi-Person Motion Capture

Helge Rhodin, Victor Constantin, Isinsu Katircioglu, Mathieu Salzmann, Pascal Fua
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
However, when it comes to 3D motion capture of multiple people, these features are only of limited use.  ...  limited labeled data.  ...  This work was supported by the Swiss Innovation Agency and by a Microsoft Joint Research Project.  ... 
doi:10.1109/cvpr.2019.00789 dblp:conf/cvpr/RhodinCKSF19 fatcat:i6fuz4c5jnartoe6dp5zzkdbum

Unsupervised Behaviour Analysis and Magnification (uBAM) using Deep Learning [article]

Biagio Brattoli, Uta Buechler, Michael Dorkenwald, Philipp Reiser, Linard Filli, Fritjof Helmchen, Anna-Sophia Wahl, Bjoern Ommer
2021 arXiv   pre-print
A central aspect is unsupervised learning of posture and behaviour representations to enable an objective comparison of movement.  ...  State-of-the-art instrumented movement analysis is time- and cost-intensive, since it requires placing physical or virtual markers.  ...  in every frame of a video in order to capture the fine-grained details of the behavior.  ... 
arXiv:2012.09237v3 fatcat:yiz7fmblefdrzlm2k7orjr3oga

Representation, Analysis, and Recognition of 3D Humans

Stefano Berretti, Mohamed Daoudi, Pavan Turaga, Anup Basu
2018 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
Learning-based solutions are also of increasing importance, with hand-crafted descriptors replaced by deep features that are learned directly from the data.  ...  way; • volumetric: the volume delimited by the shape surface is accounted for by the representation; • topological: topological variations of the shape are captured by the representation; • landmarks  ...  Applications were shown in modalities as diverse as marker-based motion capture, RGBD sensors, and activity quality assessment for applications in stroke rehabilitation. Covariance Matrix (CM).  ... 
doi:10.1145/3182179 fatcat:ds55t4md2na2tibtyg4llerf3q
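
The survey entry above mentions covariance matrix (CM) descriptors among the surveyed representations. The sketch below computes a covariance descriptor of a set of 3D landmarks as a compact, order-invariant shape summary; exactly which features enter the covariance varies by method, and plain x/y/z coordinates are assumed here for illustration.

```python
# Illustrative covariance-matrix (CM) descriptor of 3D landmarks.
import numpy as np

def covariance_descriptor(landmarks):
    """landmarks: (N, 3) array of 3D points; returns a 3x3 covariance matrix."""
    centred = landmarks - landmarks.mean(axis=0)
    return centred.T @ centred / (len(landmarks) - 1)

points = np.random.rand(68, 3)
print(covariance_descriptor(points))
```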

Latent Variable Algorithms for Multimodal Learning and Sensor Fusion [article]

Lijiang Guo
2019 arXiv   pre-print
A gating modular neural network dynamically generates a set of mixing weights for outputs from sensor networks by balancing the utility of all sensors' information.  ...  Multimodal learning has been lacking principled ways of combining information from different modalities and learning a low-dimensional manifold of meaningful representations.  ...  Acknowledgement Part of this work is based on Lijiang Guo's PhD qualifying exam paper. We would like to thank Dr. Geoffrey Fox, Dr. Minje Kim, Dr. Francesco Nesta, Dr. Michael Ryoo and Dr.  ... 
arXiv:1904.10450v1 fatcat:6634ghs74fcd3fz3l4nov4rb3m
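
The abstract above describes a gating network that produces mixing weights for the outputs of per-sensor networks. A minimal sketch of that idea is below, using a single linear layer with a softmax gate; the sizes and architecture are assumptions for illustration, not the thesis' actual model.

```python
# Hedged sketch of a softmax gating module that mixes per-sensor outputs.
import torch
import torch.nn as nn

class SoftmaxGate(nn.Module):
    def __init__(self, in_dim, n_sensors):
        super().__init__()
        self.gate = nn.Linear(in_dim, n_sensors)  # one weight per sensor

    def forward(self, fused_input, sensor_outputs):
        # sensor_outputs: (batch, n_sensors, out_dim)
        w = torch.softmax(self.gate(fused_input), dim=-1)     # (batch, n_sensors)
        return (w.unsqueeze(-1) * sensor_outputs).sum(dim=1)  # weighted mixture

gate = SoftmaxGate(in_dim=16, n_sensors=3)
mixed = gate(torch.rand(4, 16), torch.rand(4, 3, 8))
print(mixed.shape)   # torch.Size([4, 8])
```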

Cellular Level Brain Imaging in Behaving Mammals: An Engineering Approach

Elizabeth J.O. Hamel, Benjamin F. Grewe, Jones G. Parker, Mark J. Schnitzer
2015 Neuron  
Fluorescence imaging offers expanding capabilities for recording neural dynamics in behaving mammals, including the means to monitor hundreds of cells targeted by genetic type or connectivity, track cells  ...  We discuss recent progress and future directions for imaging in behaving mammals from a systems engineering perspective, which seeks holistic consideration of fluorescent indicators, optical instrumentation  ...  Notably, image analysis by ICA is well known in multiple contexts to be robust to modest levels of motion artifact and violations of the assumption of independence (McKeown et al., 1998; Mukamel et al  ... 
doi:10.1016/j.neuron.2015.03.055 pmid:25856491 pmcid:PMC5758309 fatcat:4mllguw6fzbl3okx4y5lxdiiqe
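
The review entry above notes that ICA-based image analysis is robust to modest motion artifacts. As a hedged sketch of that general approach (the reshaping of a movie into a time-by-pixels matrix and the component count are assumptions, not the review's protocol), ICA can be applied to imaging time series to separate temporal traces and spatial footprints:

```python
# Sketch: ICA on an imaging movie to extract temporal traces and spatial footprints.
import numpy as np
from sklearn.decomposition import FastICA

movie = np.random.rand(500, 32, 32)             # toy movie: 500 frames of 32x32 pixels
X = movie.reshape(500, -1)                      # (time, pixels)
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(X)                  # (time, components): temporal traces
footprints = ica.mixing_.T.reshape(10, 32, 32)  # spatial footprint per component
```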

Machine Learning on Human Connectome Data from MRI [article]

Colin J Brown, Ghassan Hamarneh
2016 arXiv   pre-print
Recently, researchers have been exploring the application of machine learning models to connectome data in order to predict clinical outcomes and analyze the importance of subnetworks in the brain.  ...  Finally, we conclude by summarizing the current state of the art and by outlining what we believe are strategic directions for future research.  ...  First, a null distribution of the frequency selection of each feature is created by training a random forest on data with permuted class labels.  ... 
arXiv:1611.08699v1 fatcat:opmtmr3eejbjjm4swfmg54g4q4
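
The snippet above describes building a null distribution of feature-selection frequency by training a random forest on data with permuted class labels. The sketch below implements that generic recipe; the number of permutations and the top-k selection rule are assumed values, not those of the review.

```python
# Sketch of a permutation-based null distribution for feature selection:
# repeatedly shuffle labels, fit a random forest, and count how often each
# feature ranks among the top-k by importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def null_selection_frequency(X, y, n_perm=100, top_k=10, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_perm):
        y_perm = rng.permutation(y)   # break the label/feature link
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_perm)
        top = np.argsort(rf.feature_importances_)[-top_k:]
        counts[top] += 1
    return counts / n_perm            # null selection frequency per feature

X, y = np.random.rand(80, 50), np.random.randint(0, 2, 80)
print(null_selection_frequency(X, y, n_perm=10))
```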

Chaos in magnetic flux ropes

Walter Gekelman, Bart Van Compernolle, Tim DeHaas, Stephen Vincena
2014 Plasma Physics and Controlled Fusion  
Conditional averaging is possible for only a number of rotation cycles as the field line motion becomes chaotic.  ...  Each collision results in magnetic field line reconnection and the generation of a quasi-separatrix layer.  ...  The algorithm splits the data up into vectors of length n, wherein each sample of the vector is separated by a lag J, and J is chosen as the lag at which the auto-correlation of the time trace drops below  ... 
doi:10.1088/0741-3335/56/6/064002 fatcat:dz6wywr4s5aorlt2fmu3sqf2m4
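
The snippet above describes a time-delay embedding: the data are split into length-n vectors whose samples are separated by a lag J, with J chosen where the autocorrelation drops below a threshold. A minimal sketch of that step follows; the 1/e threshold is an assumed convention, not necessarily the value used in the paper.

```python
# Sketch of the delay-embedding step described above: pick the lag J where the
# autocorrelation first falls below a threshold, then build n-dim delay vectors.
import numpy as np

def choose_lag(x, threshold=np.exp(-1)):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]                         # normalised autocorrelation
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else 1

def delay_embed(x, n, J):
    """Return delay vectors (x[t], x[t+J], ..., x[t+(n-1)J])."""
    T = len(x) - (n - 1) * J
    return np.stack([x[i * J:i * J + T] for i in range(n)], axis=1)

t = np.linspace(0, 50, 5000)
x = np.sin(t) + 0.1 * np.random.randn(5000)
vectors = delay_embed(x, n=3, J=choose_lag(x))  # (T, 3) reconstructed state points
```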

In-Ear Accelerometer-Based Sensor for Gait Classification

Clara Piris, Lea Gartner, Miguel A. Gonzalez, Jerome Noailly, Fabian Stocker, Martin Schonfelder, Tim Adams, Simone Tassani
2020 IEEE Sensors Journal  
For several years, gait detection has commonly been implemented using wearable sensors, especially in the sports and medical areas.  ...  The purpose of this paper is to demonstrate the accuracy and reliability of an in-ear accelerometer sensor for gait classification between the activities of walking and running.  ...  ACKNOWLEDGMENT The authors would like to thank all the students who participated in this study.  ... 
doi:10.1109/jsen.2020.3002589 fatcat:oid4iyvxzvar5kremi72y7vbby
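
The entry above addresses classifying walking versus running from an in-ear accelerometer. As a purely illustrative sketch (sampling rate, window size, and the 2 g threshold are assumptions, not values from the paper), one could window the acceleration magnitude and threshold its peak per window:

```python
# Purely illustrative walking-vs-running classification from 3-axis acceleration.
import numpy as np

def classify_windows(acc, fs=100, win_s=2.0, thresh_g=2.0):
    """acc: (T, 3) acceleration in g; returns 'run'/'walk' per window."""
    mag = np.linalg.norm(acc, axis=1)
    w = int(fs * win_s)
    labels = []
    for i in range(len(mag) // w):
        peak = mag[i * w:(i + 1) * w].max()
        labels.append("run" if peak > thresh_g else "walk")
    return labels

acc = np.random.rand(1000, 3)   # toy data in place of in-ear recordings
print(classify_windows(acc))
```
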
Showing results 1 — 15 out of 292 results