Humans categorize objects by integrating information obtained through sensory modalities such as vision, audition, and touch. In this regard, Hagiwara et al. proposed a model in which two agents manipulating objects form categories and signs between them. However, they considered only the case where each agent forms categories from a single modality. In this paper, we extend this previous model using multimodal latent Dirichlet allocation (MLDA) so that each agent forms categories by integrating multimodal information.

doi:10.11517/pjsai.jsai2020.0_1q3gs1102
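To make the MLDA idea concrete, the following is a minimal sketch of a collapsed Gibbs sampler for a multimodal LDA: each object has observations in several modalities (e.g. visual and haptic), all modalities share one object-level topic (category) distribution, and each modality has its own topic-feature distribution. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mlda_gibbs(docs, K, V, alpha=1.0, beta=1.0, iters=50, seed=0):
    """Sketch of collapsed Gibbs sampling for multimodal LDA.

    docs : list of objects; each object is a list of M token lists,
           one per modality (tokens are integer feature indices).
    K    : number of latent categories (topics).
    V    : list of vocabulary sizes, one per modality.
    Returns a (D, K) array of per-object category proportions.
    """
    rng = np.random.default_rng(seed)
    D, M = len(docs), len(V)
    ndk = np.zeros((D, K))                           # object-topic counts (shared across modalities)
    nkv = [np.zeros((K, V[m])) for m in range(M)]    # topic-feature counts, per modality
    nk = [np.zeros(K) for m in range(M)]             # topic totals, per modality

    # Random initialization of topic assignments.
    z = []
    for d, obj in enumerate(docs):
        zd = []
        for m, tokens in enumerate(obj):
            zm = rng.integers(K, size=len(tokens))
            for t, k in zip(tokens, zm):
                ndk[d, k] += 1; nkv[m][k, t] += 1; nk[m][k] += 1
            zd.append(zm)
        z.append(zd)

    # Collapsed Gibbs sweeps: resample each token's topic given all others.
    for _ in range(iters):
        for d, obj in enumerate(docs):
            for m, tokens in enumerate(obj):
                for i, t in enumerate(tokens):
                    k = z[d][m][i]
                    ndk[d, k] -= 1; nkv[m][k, t] -= 1; nk[m][k] -= 1
                    # The shared ndk term is what couples the modalities.
                    p = (ndk[d] + alpha) * (nkv[m][:, t] + beta) / (nk[m] + beta * V[m])
                    k = rng.choice(K, p=p / p.sum())
                    z[d][m][i] = k
                    ndk[d, k] += 1; nkv[m][k, t] += 1; nk[m][k] += 1

    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    return theta

# Usage: two objects, each observed through a "visual" and a "haptic" modality.
docs = [
    [[0, 0, 0], [0, 0]],   # object 1: visual tokens, haptic tokens
    [[4, 4, 4], [3, 3]],   # object 2
]
theta = mlda_gibbs(docs, K=2, V=[5, 4], iters=30)
print(theta.shape)  # (2, 2): one category distribution per object
```

Because every modality's tokens update the same `ndk` counts, the inferred category for an object reflects all modalities jointly, which is the extension over the single-modality setting described above.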