Online Cross-Modal Adaptation for Audio–Visual Person Identification With Wearable Cameras
2016
IEEE Transactions on Human-Machine Systems
We propose an audio-visual target identification approach for egocentric data with cross-modal model adaptation. The proposed approach blindly and iteratively adapts the time-dependent models of each modality to varying target appearance and environmental conditions using the posterior of the other modality. The adaptation is unsupervised and performed online, so the models improve as new unlabelled data become available. In particular, accurate models do not deteriorate when a modality […]
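The paper's own models and update rules are not given in this abstract; the following is only a minimal sketch of the general idea it describes, under assumed simplifications: each modality is a hypothetical per-class 1-D Gaussian model, and each model is adapted online using the posterior of the other modality as a soft, unsupervised label (with a confidence gate so a weak posterior does not degrade an accurate model).

```python
import numpy as np


class GaussianModality:
    """Hypothetical per-class 1-D Gaussian model for one modality."""

    def __init__(self, means, var=1.0):
        self.means = np.asarray(means, dtype=float)  # one mean per target class
        self.var = var

    def posterior(self, x):
        # Class likelihoods under each Gaussian, normalised to a posterior.
        lik = np.exp(-(x - self.means) ** 2 / (2 * self.var))
        return lik / lik.sum()

    def adapt(self, x, cross_posterior, lr=0.2, conf=0.7):
        # Unsupervised update driven by the OTHER modality's posterior:
        # move the most probable class mean toward the observation,
        # but only when the cross-modal posterior is confident enough.
        k = int(np.argmax(cross_posterior))
        if cross_posterior[k] >= conf:
            self.means[k] += lr * cross_posterior[k] * (x - self.means[k])


def cross_modal_step(audio, visual, xa, xv):
    """One online step: exchange posteriors, adapt both models."""
    pa, pv = audio.posterior(xa), visual.posterior(xv)  # before adaptation
    audio.adapt(xa, pv)    # audio model adapted with the visual posterior
    visual.adapt(xv, pa)   # visual model adapted with the audio posterior
    fused = pa * pv
    return fused / fused.sum()  # fused posterior used for identification


# Toy stream: two targets; the audio model starts mis-calibrated for
# target 0 (true audio feature ~0.5, model mean 0.0).
audio = GaussianModality([0.0, 2.0])
visual = GaussianModality([0.0, 4.0])
for _ in range(50):
    cross_modal_step(audio, visual, xa=0.5, xv=0.0)
print(audio.means)  # class-0 audio mean has drifted toward the observed 0.5
```

The confidence gate is the sketch's stand-in for the abstract's claim that accurate models are not allowed to deteriorate: an unconfident cross-modal posterior simply produces no update.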
doi:10.1109/thms.2016.2620110
fatcat:goygftl3dfdcbmpvk2vuzbu5mi