A copy of this work was available on the public web and has been preserved in the Wayback Machine (capture from 2005). The file type is application/pdf.
2003: Designing, Playing, and Performing with a Vision-Based Mouth Interface
[chapter]
2017
Current Research in Systematic Musicology
The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the …
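The abstract describes mapping extracted mouth-shape parameters to MIDI control changes. As a minimal sketch of that idea (the paper's actual controller assignments and scaling are not given here, so the function name, controller number, and linear mapping below are assumptions), a normalized mouth-opening value can be scaled to a 7-bit MIDI Control Change message:

```python
def mouth_to_cc(openness: float, control: int = 1, channel: int = 0) -> bytes:
    """Map a normalized mouth-opening parameter (0.0-1.0) to a raw
    3-byte MIDI Control Change message: (status, controller, value).

    Hypothetical illustration: the Mouthesizer's real mapping and
    controller numbers may differ.
    """
    # Clamp and scale to the 7-bit MIDI value range 0-127.
    value = max(0, min(127, round(openness * 127)))
    # 0xB0 is the Control Change status nibble; low nibble is the channel.
    status = 0xB0 | (channel & 0x0F)
    return bytes([status, control & 0x7F, value])

# Example: a half-open mouth sent to the mod wheel (CC 1) on channel 1.
msg = mouth_to_cc(0.5)
```

In a live setting such a message would be written to a MIDI output port each frame, so the synthesizer parameter tracks the mouth opening continuously.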
doi:10.1007/978-3-319-47214-0_8
fatcat:jyzkc3lzi5av3lk4hckew4ybje