11/10/2020

Learning to Denoise Historical Music

Yunpeng Li, Marco Tagliasacchi, Beat Gfeller, Dominik Roblek

Keywords: Domain knowledge, Machine learning/Artificial intelligence for music, Applications, Digital libraries and archives, MIR fundamentals and methodology, Music signal processing, MIR tasks, Music synthesis and transformation

Abstract: We propose an audio-to-audio generative model that learns to denoise old music recordings. Our model internally converts its input into a time-frequency representation by means of a short-time Fourier transform (STFT), and processes the resulting complex spectrogram using a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, which is created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples of the synthetic dataset, and qualitatively by human rating on samples of actual historical recordings. Our results show that the proposed method is effective in removing noise, while preserving the quality and details of the original music.
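The data pipeline described in the abstract — converting audio to a complex spectrogram via an STFT, and synthesizing noisy training examples by mixing clean music with extracted noise — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame length, hop size, and SNR value are arbitrary choices, and the toy signals stand in for real music and gramophone noise.

```python
import numpy as np

def stft(x, frame_len=512, hop=128):
    """Short-time Fourier transform: slice the signal into overlapping
    Hann-windowed frames and take the FFT of each frame, yielding the
    complex spectrogram the denoising network would operate on."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=-1)  # shape: (n_frames, frame_len // 2 + 1)

def mix_at_snr(clean, noise, snr_db):
    """Create a synthetic noisy example: scale the noise so the
    clean-to-noise power ratio matches the target SNR, then add it."""
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Toy stand-ins: a pure tone for "clean music", white noise for the
# noise extracted from quiet segments of an old recording.
rng = np.random.default_rng(0)
sr = 16000
clean = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
noise = rng.standard_normal(sr)

noisy = mix_at_snr(clean, noise, snr_db=10.0)
spec = stft(noisy)  # complex spectrogram, the model's internal input
```

In the paper's setup the noise samples come from real historical recordings rather than a random generator, and the network is trained to map the noisy spectrogram back to the clean one under both reconstruction and adversarial objectives.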

The talk and the respective paper are published at the ISMIR 2020 virtual conference.
