The aim of this project is to model a Spanish rap singer, applying knowledge of emotional speech prosody to a musical style that is close to speech. Pitch and duration are modelled using Hidden Markov Models (HMMs). The context features used in HMM training are adapted to the new scenario, taking into account the rhythmic constraints of rap. From the trained model, performance trajectories are generated for the Vocaloid synthesizer, allowing it to synthesize rap.

doi:10.5281/zenodo.3786476
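As a rough illustration of the pitch-modelling idea described above, the sketch below walks a tiny Gaussian HMM and draws one F0 (pitch) value per frame from the current state's Gaussian, producing a pitch trajectory. All states, means, variances, and transition probabilities here are invented for illustration; they are not taken from the project, which would train such parameters on real rap recordings with rap-specific context features.

```python
import random

# Hypothetical Gaussian HMM sketch: parameters below are illustrative only.
STATES = ["low", "mid", "high"]
MEAN_F0 = {"low": 110.0, "mid": 160.0, "high": 220.0}  # state mean pitch, Hz
STD_F0 = {"low": 8.0, "mid": 10.0, "high": 12.0}       # state std dev, Hz
TRANS = {  # P(next state | current state)
    "low":  {"low": 0.70, "mid": 0.25, "high": 0.05},
    "mid":  {"low": 0.20, "mid": 0.60, "high": 0.20},
    "high": {"low": 0.05, "mid": 0.35, "high": 0.60},
}

def sample_next(state, rng):
    """Draw the next hidden state from the transition distribution."""
    r, acc = rng.random(), 0.0
    for s, p in TRANS[state].items():
        acc += p
        if r < acc:
            return s
    return STATES[-1]

def generate_f0(n_frames, rng=None):
    """Generate an F0 trajectory: at each frame, emit a pitch sampled
    from the current state's Gaussian, then transition."""
    rng = rng or random.Random(0)
    state, traj = "mid", []
    for _ in range(n_frames):
        traj.append(rng.gauss(MEAN_F0[state], STD_F0[state]))
        state = sample_next(state, rng)
    return traj

f0 = generate_f0(50)
print(len(f0), round(min(f0)), round(max(f0)))
```

A real system would instead train the state parameters from data (e.g. via Baum-Welch) and generate smoothed trajectories, but the sampling loop captures the generative picture: hidden states govern local pitch statistics, and the trajectory handed to the synthesizer is a frame-by-frame emission.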