Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live
2019
Zenodo
Our suite of plug-ins for Ableton Live, named Magenta Studio, is available for download at http://g.co/magenta/studio along with its open source implementation. ...
The field of Musical Metacreation (MuMe) has produced impressive results for both autonomous and interactive creativity, recently aided by modern deep learning frameworks. ...
Magenta Studio is based on work by members of the Google Brain team's Magenta project along with contributors to the Magenta and Magenta.js libraries. ...
doi:10.5281/zenodo.4285265
fatcat:yjmxojcx4fbyngmofi3hhai6uu
Designing for a Pluralist and User-Friendly Live Code Language Ecosystem with Sema
2020
Zenodo
With live coding, the real-time composition of music and other art becomes a performance art by centering on the language of the composition itself, the code. ...
We provide an overview and design rationale for the early technical implementation of Sema, including technology stack, architecture, user interface, integration of machine learning, and documentation ...
present design and development goals for the next design iteration of Sema. ...
doi:10.5281/zenodo.3939228
fatcat:yjrulfhbo5eovb5h33zpgwxl2y
A Laptop Ensemble Performance System using Recurrent Neural Networks
2020
Proceedings of the International Conference on New Interfaces for Musical Expression
The final implementation of the system offers performers a mixture of high and low-level controls to influence the shape of sequences of notes output by locally run NN models in real time, also allowing ...
The popularity of applying machine learning techniques in musical domains has created an inherent availability of freely accessible pre-trained neural network (NN) models ready for use in creative applications ...
Acknowledgments We wish to thank the ANU Laptop Ensemble for participating in live performances with our system. ...
doi:10.5281/zenodo.4813481
fatcat:aqpsbtqulbckbprtwn2zqp2loy
MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation
[article]
2017
arXiv
pre-print
Most existing neural network models for music generation use recurrent neural networks. ...
We conduct a user study comparing eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody. ...
Ten of them understand basic music theory and have experience as amateur musicians, so we considered them people with musical backgrounds, or professionals for short. ...
arXiv:1703.10847v2
fatcat:3s6bkrupqbd6jb2fg3dzhunn7e
Midinet: A Convolutional Generative Adversarial Network For Symbolic-Domain Music Generation
2017
Zenodo
Ten of them understand basic music theory and have experience as amateur musicians, so we considered them people with musical backgrounds, or professionals for short. ...
models, for people (top row) with musical backgrounds and (bottom) without musical backgrounds. ...
doi:10.5281/zenodo.1415990
fatcat:rkr4w4m2bvdq5jt4le5z26tdiu
A Functional Taxonomy of Music Generation Systems
2017
ACM Computing Surveys
Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. ...
We present a functional taxonomy for music generation systems with reference to existing systems. The taxonomy organizes systems according to the purposes for which they were designed. ...
This could lead to real-life practical applications such as real-time music generation for games, and background music for film and video. ...
doi:10.1145/3108242
fatcat:wcp2p3mu4fgndclqrwlcwzkyoa
A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions
[article]
2020
arXiv
pre-print
This paper attempts to provide an overview of various composition tasks under different music generation levels, covering most of the currently popular music generation tasks using deep learning. ...
Previous surveys have explored the network models employed in the field of automatic music generation. ...
This may promote practical applications in real life, such as real-time music generation for games and automatic generation of background music for movies and videos; more importantly, strengthen the interaction ...
arXiv:2011.06801v1
fatcat:cixou3d2jzertlcpb7kb5x5ery
Algorithmic interactive music generation in videogames
2020
SoundEffects
Some of them are complemented with rules and are assigned to sections with low emotional requirements, but support for real-time interaction in gameplay situations, although desirable, is rarely found. While ...
Finally, I propose a compositional tool design based in modular instances of algorithmic music generation, featuring stylistic interactive control in connection with an audio engine rendering system. ...
possibilities, challenges, limits, and techniques of automatic music composition. ...
doi:10.7146/se.v9i1.118245
fatcat:yoqonmlm5ncvdgxb2obg7iejwm
Learning to Groove with Inverse Sequence Transformations
[article]
2019
arXiv
pre-print
Focusing on the case of drum set players, we create and release a new dataset for this purpose, containing over 13 hours of recordings by professional drummers aligned with fine-grained timing and dynamics ...
We also explore some of the creative potential of these models, including demonstrating improvements on state-of-the-art methods for Humanization (instantiating a performance from a musical score). ...
Because almost anyone can tap a rhythm regardless of their level of musical background or training, this input modality may be more accessible than musical notation for those who would like to express ...
arXiv:1905.06118v2
fatcat:gdn5hv6zbjb3tf5qp4dy4jn53a
MorpheuS: generating structured music with constrained patterns and tension
2017
IEEE Transactions on Affective Computing
Yet, they still face an important challenge, that of long-term structure, which is key to conveying a sense of musical coherence. ...
A mathematical model for tonal tension quantifies the tension profile and state-of-the-art pattern detection algorithms extract repeated patterns in a template piece. ...
Casella and Paiva [52] created MAgentA (not to be confused with Google's music generation project Magenta), an abstract framework for a video game background music generation that aims to create "film-like ...
doi:10.1109/taffc.2017.2737984
fatcat:3ewbbbh6r5elvkmtwr2aqn5uga
Music Generation by Deep Learning - Challenges and Directions
[article]
2017
arXiv
pre-print
The motivation is in using the capacity of deep learning architectures and training techniques to automatically learn musical styles from arbitrary musical corpora and then to generate samples from the ...
In addition to traditional tasks such as prediction, classification and translation, deep learning is receiving growing attention as an approach for music generation, as witnessed by recent research groups ...
Acknowledgements This paper is based on the first half of the "Machine-Learning for Symbolic Music Generation" tutorial given at the 18th International Society for Music Information Retrieval Conference ...
arXiv:1712.04371v1
fatcat:23nug7n6qzfbllcjnfs5nerbp4
This time with feeling: learning expressive musical performance
2018
Neural computing & applications (Print)
timing and dynamics. ...
We consider the significance and qualities of the dataset needed for this. ...
Acknowledgements We gratefully acknowledge the members of the Magenta team at Google Research for numerous discussions. We thank the reviewers for helpful comments. ...
doi:10.1007/s00521-018-3758-9
fatcat:ttyik6g6ubev7oari3dfdkzewy
Machine Learning for Computational Creativity: VST Synthesizer Programming
2021
Zenodo
Learning to create music with an audio production Virtual Studio Technology (VST) synthesizer through sound design and note composition is a time-consuming process, usually obtained through inefficient ...
After this, an expressive and controllable variational autoencoder for generating MIDI notes that can then be rendered by a synthesizer is built and some of its creative and artistic applications are explored ...
Acknowledgments: I'd like to thank my advisor Professor Koike, the students and staff at the Koike Lab, everyone at Qosmo, and my friends and family for their ...
doi:10.5281/zenodo.6351291
fatcat:tqvuepndzjdnzbf2qv23ryweuq
Deep Learning Techniques for Music Generation – A Survey
[article]
2019
arXiv
pre-print
For what destination and for what use? To be performed by a human (in the case of a musical score), or by a machine (in the case of an audio file). ...
This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. ...
Composition Types: We will see that, from an architectural point of view, various types of composition may be used: • Composition - at least two architectures, of the same type or of different types, ...
arXiv:1709.01620v4
fatcat:hma4znleorfpvh62cpupxu4fq4
This Time with Feeling: Learning Expressive Musical Performance
[article]
2018
arXiv
pre-print
We consider the significance and qualities of the data set needed for this. ...
timing and dynamics. ...
We thank members and visitors at Google Brain and specifically the Magenta team for discussions, including Adam Roberts, Anna Huang, Colin Raffel, Curtis Hawthorne, David Ha, David So, Fred Bertch, George ...
arXiv:1808.03715v1
fatcat:63wxx5d5h5hftgcwo6vsb43ueq
Showing results 1 — 15 out of 292 results