
Generating Music with a Self-correcting Non-chronological Autoregressive Model

Wayne Chi, Prachi Kumar, Suri Yaddanapudi, Rahul Suresh, Umut Isik

Keywords: Domain knowledge, Machine learning/Artificial intelligence for music, Applications, Music composition, performance, and production, Representations of music, MIR tasks, Music synthesis and transformation

Abstract: We describe a novel approach for generating music using a self-correcting, non-chronological, autoregressive model. We represent music as a sequence of edit events, each of which denotes either the addition or removal of a note---even a note previously generated by the model. During inference, we generate one edit event at a time using direct ancestral sampling. Our approach allows the model to fix previous mistakes, such as incorrectly sampled notes, and prevents the accumulation of errors to which autoregressive models are prone. Another benefit of our approach is a finer degree of control during human-AI collaboration, since the model operates online at the note level. We show through quantitative metrics and a human survey evaluation that our approach generates better results than orderless NADE and Gibbs sampling approaches.
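
To make the edit-event formulation concrete, below is a minimal Python sketch of the representation and the ancestral-sampling loop described in the abstract. The Note and EditEvent types and the model.sample_edit_event interface are hypothetical illustrations, not the authors' implementation; any trained model that exposes a conditional distribution over edit events given the history would fit this loop.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Note:
        pitch: int     # MIDI pitch number
        onset: int     # start time, e.g. in ticks
        duration: int  # length, e.g. in ticks

    @dataclass(frozen=True)
    class EditEvent:
        kind: str  # "add" or "remove"
        note: Note

    def generate(model, num_events):
        """Generate a piece one edit event at a time via direct ancestral sampling.

        `model.sample_edit_event(history)` is assumed to draw the next edit
        event from the model's conditional distribution given all previous
        edit events.
        """
        history = []
        notes = set()
        for _ in range(num_events):
            event = model.sample_edit_event(history)  # one ancestral sampling step
            if event.kind == "add":
                notes.add(event.note)
            else:  # "remove": the model retracts a previously generated note
                notes.discard(event.note)
            history.append(event)
        return notes

Because a "remove" event may target a note the model itself added earlier, the loop can undo incorrectly sampled notes rather than letting such errors compound as generation proceeds.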

This is a video recording of the talk. The talk and the paper were presented at the ISMIR 2020 virtual conference.
