Novel Speech Motion Generation by Modeling Dynamics of Human Speech Production

Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro
2017 Frontiers in Robotics and AI  
We developed a method to automatically generate humanlike trunk motions (i.e., the neck and waist motions involved in speech) for a conversational android from its speech in real time. To generate humanlike movements, the android's mechanical limitations (i.e., its limited number of joints) need to be compensated for; the method achieves this by enforcing the synchronization of speech and motion in the android. Moreover, the motion can be modulated to express emotions by tuning the parameters of the dynamical model. The method is based on a spring-damper dynamical model driven by voice features to simulate the human trunk movements involved in speech. In contrast to existing methods based on machine learning, our system can easily modulate the generated motions for given speech patterns because the model's parameters correspond to muscle stiffness. The experimental results show that the android motions generated by our model are perceived as more natural and thus motivate users to talk longer with the android than a system that simply copies human motions. In addition, our model generates emotional speech motions by tuning its parameters.
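The core idea — a spring-damper system whose driving force comes from voice features — can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: the parameter names (stiffness `k`, damping `c`, mass `m`), the integration scheme, and the synthetic "voice energy" input are all assumptions made for the sketch.

```python
def simulate_spring_damper(drive, k=4.0, c=1.0, m=1.0, dt=0.01):
    """Integrate m*x'' = -k*x - c*x' + u(t) with semi-implicit Euler.

    drive: sequence of per-frame input forces (e.g., voice loudness).
    Returns the trajectory of x (e.g., a trunk-joint angle offset).
    """
    x, v = 0.0, 0.0
    traj = []
    for u in drive:
        a = (-k * x - c * v + u) / m  # spring, damper, and voice-driven force
        v += a * dt                   # update velocity first (semi-implicit)
        x += v * dt                   # then position
        traj.append(x)
    return traj

# Illustrative input: a brief burst of voice energy followed by silence.
# The simulated trunk leans toward the driven equilibrium, overshoots
# slightly, then settles back as the utterance ends.
drive = [1.0] * 100 + [0.0] * 300
traj = simulate_spring_damper(drive)
```

Because the motion is governed by `k` and `c` rather than learned weights, tuning these parameters (e.g., raising stiffness for tense, agitated motion or lowering damping for looser, relaxed sway) directly modulates the expressive character of the generated movement, which is the flexibility the abstract contrasts with machine-learning approaches.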
doi:10.3389/frobt.2017.00049