Concurrent generation of melody and lyrics by recurrent neural networks

University essay from KTH, School of Electrical Engineering and Computer Science (EECS)

Author: Pietro Bolcato; [2020]


Abstract: This work proposes a conditioned recurrent neural network architecture for concurrent melody and lyrics generation. This is in contrast to methods that first generate music and then lyrics, or vice versa. The system is trained to first sample a pitch from a distribution, then sample a duration conditioned on the sampled pitch, and finally sample a syllable conditioned on the sampled pitch and duration. The evaluation metrics show that the trained system generates music and text sequences exhibiting some sensible musical and linguistic properties. As a further evaluation, the system was applied in a human-AI collaboration to generate a song for the VPRO AI Song Contest. This highlighted the limitations of the system: it can be a useful tool to augment the creative process of musicians, but it cannot replace them. Finally, a shorter version of this dissertation has been submitted as a paper to the ISMIR 2020 conference and is included in Appendix B.
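To make the described sampling chain concrete, the sketch below shows one way such a conditioned output head could look in PyTorch: a pitch is sampled from a categorical distribution over the RNN hidden state, a duration is sampled conditioned on the hidden state and the sampled pitch, and a syllable is sampled conditioned on both. The class name, layer sizes, and embedding scheme are illustrative assumptions, not taken from the thesis.

```python
import torch
import torch.nn as nn


class ConditionedSamplingHead(nn.Module):
    """Hypothetical output head: sample pitch, then duration given pitch,
    then syllable given pitch and duration, from an RNN hidden state."""

    def __init__(self, hidden_size, n_pitches, n_durations, n_syllables, emb_size=32):
        super().__init__()
        self.pitch_out = nn.Linear(hidden_size, n_pitches)
        self.pitch_emb = nn.Embedding(n_pitches, emb_size)
        self.dur_out = nn.Linear(hidden_size + emb_size, n_durations)
        self.dur_emb = nn.Embedding(n_durations, emb_size)
        self.syll_out = nn.Linear(hidden_size + 2 * emb_size, n_syllables)

    def forward(self, h):
        # Sample a pitch from a categorical distribution over the hidden state.
        pitch = torch.distributions.Categorical(logits=self.pitch_out(h)).sample()
        # Sample a duration conditioned on the hidden state and the sampled pitch.
        h_pitch = torch.cat([h, self.pitch_emb(pitch)], dim=-1)
        duration = torch.distributions.Categorical(logits=self.dur_out(h_pitch)).sample()
        # Sample a syllable conditioned on the sampled pitch and duration.
        h_pd = torch.cat([h_pitch, self.dur_emb(duration)], dim=-1)
        syllable = torch.distributions.Categorical(logits=self.syll_out(h_pd)).sample()
        return pitch, duration, syllable


# Example usage with arbitrary vocabulary sizes (illustrative only):
head = ConditionedSamplingHead(hidden_size=128, n_pitches=64, n_durations=16, n_syllables=5000)
h = torch.zeros(1, 128)  # stand-in for an RNN hidden state at one time step
pitch, duration, syllable = head(h)
```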
