Generating Music using an LSTM Network [article]

Nikhil Kotecha, Paul Young
2018 arXiv pre-print
A model of music must be able to recall past details and maintain a clear, coherent understanding of musical structure. This paper details a neural network architecture that predicts and generates polyphonic music in accordance with musical rules. The probabilistic model presented is a Bi-axial LSTM trained with a kernel reminiscent of a convolutional kernel. Analyzed both quantitatively and qualitatively, the approach performs well at composing polyphonic music. A link to the code is provided.
arXiv:1804.07300v1
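The bi-axial idea can be sketched as two LSTM passes over a piano-roll: one along the time axis for each note (with weights shared across all notes, which gives the convolution-like kernel behavior), and one along the note axis at each timestep so a note's prediction can condition on neighboring pitches. Below is a minimal NumPy sketch of that structure; the function names, shapes, and random untrained weights are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; gates stacked as [input, forget, cell-candidate, output].
    z = W @ x + U @ h + b
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

def run_lstm(seq, W, U, b, H):
    # Run an LSTM over a sequence of input vectors; return all hidden states.
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        outs.append(h)
    return np.stack(outs)

def biaxial_forward(piano_roll, params_time, params_note, Ht, Hn):
    # piano_roll: (T, N) binary matrix of note activations.
    T, N = piano_roll.shape
    # Time axis: an LSTM scans each note's history independently,
    # sharing weights across notes (the convolution-like weight sharing).
    time_feats = np.stack(
        [run_lstm(piano_roll[:, n][:, None], *params_time, Ht) for n in range(N)],
        axis=1)                                  # (T, N, Ht)
    # Note axis: a second LSTM scans across pitches at each timestep,
    # so each note's output can depend on the notes below it.
    note_feats = np.stack(
        [run_lstm(time_feats[t], *params_note, Hn) for t in range(T)],
        axis=0)                                  # (T, N, Hn)
    return note_feats

# Tiny demo with random, untrained weights (shapes only; hypothetical sizes).
rng = np.random.default_rng(0)
Ht, Hn, T, N = 8, 8, 4, 5
def init(H, D):
    return (0.1 * rng.standard_normal((4 * H, D)),
            0.1 * rng.standard_normal((4 * H, H)),
            np.zeros(4 * H))
roll = (rng.random((T, N)) > 0.7).astype(float)
feats = biaxial_forward(roll, init(Ht, 1), init(Hn, Ht), Ht, Hn)
w_out = 0.1 * rng.standard_normal(Hn)
probs = 1.0 / (1.0 + np.exp(-(feats @ w_out)))   # per-note "on" probability
```

In a trained model the per-note probabilities would be sampled at each timestep to generate the next column of the piano-roll; here the final sigmoid head is just a stand-in to show where prediction happens.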