Few-Shot Distribution Learning for Music Generation

Learn a generative model in the few-shot learning regime to generate MIDI sequences or lyrics.

Start date: November 2017
Category: Fundamental Research
Contact points: Hugo Larochelle (hugolarochelle@google.com), Chelsea Finn (cbfinn@eecs.berkeley.edu), Sachin Ravi (sachinr@princeton.edu)

Abstract

Few-shot distribution learning is the problem of learning a generative model from only a handful of examples. We propose to investigate this problem in the context of generating music data, such as lyrics or MIDI sequences, drawing on recent developments in adaptive language models, few-shot learning, and meta-learning. We plan to collect datasets, construct benchmarks for this problem, and evaluate candidate solutions.
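To make the setting concrete, here is a minimal toy sketch of one meta-learning approach mentioned above: first-order MAML applied to few-shot density estimation. It is not the project's proposed method, just an illustration of the regime. Each "task" is a random categorical distribution over a small token vocabulary (a stand-in for notes or words); the model adapts its logits on a few support samples, and the meta-update trains an initialization that adapts well. All names, sizes, and hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5        # toy vocabulary size (hypothetical)
ALPHA = 0.5  # inner-loop (adaptation) step size
BETA = 0.1   # outer-loop (meta) step size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nll(theta, counts):
    # average negative log-likelihood of observed token counts
    p = softmax(theta)
    return -(counts * np.log(p)).sum() / counts.sum()

def grad_nll(theta, counts):
    # gradient of the NLL w.r.t. the logits: softmax(theta) - empirical dist.
    return softmax(theta) - counts / counts.sum()

def adapt(theta, counts, steps=3):
    # inner loop: a few gradient steps on the task's support set
    for _ in range(steps):
        theta = theta - ALPHA * grad_nll(theta, counts)
    return theta

def sample_task(k=8):
    # a "task" is a random categorical distribution over tokens;
    # support/query sets are count vectors of k samples each
    p = rng.dirichlet(np.ones(V))
    support = np.bincount(rng.choice(V, k, p=p), minlength=V)
    query = np.bincount(rng.choice(V, k, p=p), minlength=V)
    return support, query

theta = np.zeros(V)  # meta-learned initialization of the logits
for _ in range(500):
    support, query = sample_task()
    adapted = adapt(theta, support)
    # first-order MAML: meta-gradient is taken at the adapted parameters
    theta -= BETA * grad_nll(adapted, query)

# few-shot evaluation on an unseen task
support, query = sample_task()
before = nll(theta, query)               # query NLL with no adaptation
after = nll(adapt(theta, support), query)  # query NLL after adapting on support
```

In the actual project the per-task model would be a sequence model over MIDI events or lyric tokens rather than a bag of token counts, but the two-loop structure (adapt on a few examples, meta-train the initialization across tasks) carries over directly.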

Access the full proposal

Resources