6 Hits in 2.4 sec

DeepBach: a Steerable Model for Bach Chorales Generation [article]

Gaëtan Hadjeres and François Pachet and Frank Nielsen
2017 arXiv   pre-print
We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach.  ...  Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score.  ...  Bach while it is only a third for the MLP model.  ... 
arXiv:1612.01010v2 fatcat:e6otgvgx25dmvhaekupb7fhigu
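The steerability described in the snippet above amounts to fixing user-chosen positions (notes, rhythms, cadences) while the rest of the score is repeatedly resampled in context, in a Gibbs-like loop. Below is a minimal, hypothetical Python sketch of that constrained-resampling idea; the toy_note_model, pitch range, and grid size are placeholder assumptions of this sketch, not DeepBach's actual network or data representation.

# Minimal sketch of constrained, Gibbs-like resampling: positions fixed by
# the user are never resampled, every other position is repeatedly redrawn
# from a conditional note model.  toy_note_model is a placeholder.
import random

VOICES, STEPS = 4, 32          # four chorale voices, 32 time steps (toy grid)
PITCHES = list(range(55, 80))  # toy MIDI pitch range

def toy_note_model(score, voice, step):
    """Stand-in conditional model: returns a random pitch.
    A real model would condition on the surrounding notes in `score`."""
    return random.choice(PITCHES)

def generate(constraints, iterations=2000):
    # Start from a random score, then overwrite the constrained positions.
    score = [[random.choice(PITCHES) for _ in range(STEPS)] for _ in range(VOICES)]
    for (voice, step), pitch in constraints.items():
        score[voice][step] = pitch
    free = [(v, t) for v in range(VOICES) for t in range(STEPS)
            if (v, t) not in constraints]
    for _ in range(iterations):
        v, t = random.choice(free)                 # pick an unconstrained position
        score[v][t] = toy_note_model(score, v, t)  # resample it in context
    return score

if __name__ == "__main__":
    # Example user constraint: soprano (voice 0) must start and end on G4 (MIDI 67).
    fixed = {(0, 0): 67, (0, STEPS - 1): 67}
    chorale = generate(fixed)
    print(chorale[0][:8])

In a real system the placeholder model would be a trained network predicting a note distribution conditioned on its neighbourhood, which is the Gibbs-sampling characterization the Jukebox entry below also gives for DeepBach and Coconet.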

Learning to Generate Music with BachProp [article]

Florian Colombo, Johanni Brea, Wulfram Gerstner
2019 arXiv   pre-print
However, most of the successful models are designed for specific musical structures.  ...  We show that BachProp captures important features of the original datasets better than other models and invite the reader to a qualitative comparison on a large collection of generated songs.  ...  Gerstner, “BachProp: Learning to compose music in multiple styles,” arXiv preprint  ...  “DeepBach: a steerable model for Bach chorales generation,” in 34th International Conference on Machine Learning  ... 
arXiv:1812.06669v2 fatcat:a2ucc5w4pvebrj46iy2qdalh7m

Machine learning research that matters for music creation: A case study

Bob L. Sturm, Oded Ben-Tal, Úna Monaghan, Nick Collins, Dorien Herremans, Elaine Chew, Gaëtan Hadjeres, Emmanuel Deruty, François Pachet
2018 Journal of New Music Research  
Research applying machine learning to music modeling and generation typically proposes model architectures, training methods and datasets, and gauges system performance using quantitative  ...  Together with practitioners, we develop and use several applications of machine learning for music creation, and present a public concert of the results.  ...  BLS and OBT gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for training folk-rnn.  ... 
doi:10.1080/09298215.2018.1515233 fatcat:fxhtrzpfazd2dnv7y435dwfi4q

Supervised Symbolic Music Style Translation Using Synthetic Data [article]

Ondřej Cífka, Umut Şimşekli, Gaël Richard
2019 arXiv   pre-print
In view of this data generation scheme, we propose an encoder-decoder model for translating symbolic music accompaniments between a number of different styles.  ...  At the core of our approach lies a synthetic data generation scheme which allows us to produce virtually unlimited amounts of aligned data, and hence avoid the above issue.  ...  DeepBach: a steerable model for Bach chorales generation.  ... 
arXiv:1907.02265v1 fatcat:mlpc54kq45bndml6ds3gibfhvu
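A minimal sketch, assuming a token-based representation of accompaniments, of the kind of encoder-decoder style translator described in the snippet above: the encoder reads the source accompaniment and the decoder, conditioned on a target-style embedding, emits the translated one. The vocabulary size, GRU architecture, and style-conditioning scheme are illustrative assumptions of this sketch, not the paper's reported configuration.

# Hypothetical encoder-decoder with target-style conditioning (PyTorch).
import torch
import torch.nn as nn

class StyleTranslator(nn.Module):
    def __init__(self, vocab_size=128, n_styles=8, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)        # music-token embedding
        self.style_embed = nn.Embedding(n_styles, dim)    # target-style embedding
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src_tokens, tgt_tokens, tgt_style):
        # Encode the source accompaniment.
        _, h = self.encoder(self.embed(src_tokens))
        # Inject the target style into the decoder's initial state.
        h = h + self.style_embed(tgt_style).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_tokens), h)
        return self.out(dec_out)  # per-step logits over the token vocabulary

# Toy usage: a batch of 2 sequences of 16 tokens, translated toward style 3.
model = StyleTranslator()
src = torch.randint(0, 128, (2, 16))
tgt = torch.randint(0, 128, (2, 16))
logits = model(src, tgt, torch.tensor([3, 3]))
print(logits.shape)  # torch.Size([2, 16, 128])

Given aligned source/target pairs from a synthetic data scheme like the one the snippet mentions, such a model can be trained with ordinary teacher-forced cross-entropy on the target accompaniment tokens.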

Jukebox: A Generative Model for Music [article]

Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever
2020 arXiv   pre-print
We introduce Jukebox, a model that generates music with singing in the raw audio domain.  ...  We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes.  ...  More recent data-driven approaches include DeepBach (Hadjeres et al., 2017) and Coconet (Huang et al., 2017), which use Gibbs sampling to produce notes in the style of Bach chorales, MidiNet (Yang et  ... 
arXiv:2005.00341v1 fatcat:drwspmscbjfknhqdlunbp6spkm

Interfaces for Improvising with a Jazz Melody Generation System

Joe Munday, Ángel Faraldo, Perfecto Herrera
2017 Zenodo  
Computational models of melody can provide expert knowledge to novices, allowing for access to musical creativity without the need for extensive formal training.  ...  Focussing on jazz melodies specifically, two prospective interfaces were developed that allow a performer to interact with and direct the melody generation in collaboration with the system, allowing for live performance  ...  Similarly, the FlowMachines project by Sony-CSL has also explored the use of neural networks, in this case in the generation of polyphonic chorales in the style of Bach [34].  ... 
doi:10.5281/zenodo.3770120 fatcat:okxtvyzhfvd2hez4j56nyoweba