If you’ve been wondering how an open-source AI-based music-generation system works, you’re not alone. OpenAI’s research has paved the way for AI-powered music tools, and Jukebox is one of the most ambitious: it generates raw audio, singing included, after training on songs paired with lyrics and metadata, with the lyrics sourced from LyricWiki. Its current output isn’t flawless, but it’s certainly a step in the right direction.
The model has produced thousands of songs in a wide variety of styles, ranging from classical to hip-hop. Jukebox can create new music samples from scratch, re-render existing songs in different styles, and even ‘complete’ a song when primed with a 12-second audio sample. It can also produce deepfake-style goofy covers. The training set behind all this contains over 1.2 million songs, paired with lyrics from LyricWiki and accompanying metadata.
Under the hood, OpenAI trained a neural net to compress raw audio into a much shorter sequence of discrete codes, trained further neural networks to model and generate those codes, and trained decoders to convert generated codes back into realistic audio. The resulting system is Jukebox, and with this AI-powered music generator the company hopes to shake up the music industry.
The technology behind Jukebox was recently released by OpenAI. Its goal is to automatically generate music samples that include singing. The software can be conditioned on a genre, an artist, and lyrics, and then generate a new music sample from scratch. One of the biggest challenges for this kind of system is that CD-quality music involves millions of timesteps per song, far too many to model directly. OpenAI’s earlier model, MuseNet, sidestepped this problem by working with symbolic MIDI notes rather than audio; Jukebox tackles raw audio head-on by compressing it first, which is what lets it capture the manner and tone of actual singers.
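To see why “millions of timesteps” is not an exaggeration, here is the back-of-the-envelope arithmetic for a single CD-quality track (the four-minute song length is just an illustrative assumption):

```python
# Rough arithmetic for why raw CD-quality audio is hard to model directly.
sample_rate = 44_100           # CD-quality audio: 44.1 kHz sampling
song_seconds = 4 * 60          # an illustrative 4-minute song
timesteps = sample_rate * song_seconds
print(timesteps)               # 10,584,000 samples per channel
```

Over ten million values per channel for one song is orders of magnitude beyond what sequence models comfortably handle, which is why Jukebox compresses audio before modelling it.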
That compression is handled by autoencoders, which use a multi-stage approach to shrink long inputs while discarding perceptually irrelevant information. Jukebox’s autoencoder is a vector-quantised variational autoencoder (VQ-VAE): it maps long stretches of audio onto sequences of discrete codes drawn from a learned codebook, with a loss function designed to preserve as much musical information as possible.
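The core move in a VQ-VAE is replacing each continuous encoder output with its nearest vector from a learned codebook, so the decoder only ever sees a short sequence of discrete tokens. A minimal sketch, with toy dimensions (Jukebox’s real codebooks and encoders are far larger):

```python
# Toy vector-quantisation step, the heart of a VQ-VAE.
codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]   # 3 learned code vectors
latents = [(0.9, 1.1), (-0.8, 0.9), (0.1, -0.2)]   # continuous encoder outputs

def nearest_code(vec):
    """Index of the codebook vector closest to vec (squared L2 distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

codes = [nearest_code(v) for v in latents]     # discrete tokens to model
quantised = [codebook[c] for c in codes]       # what the decoder actually sees
print(codes)  # → [1, 2, 0]
```

The training loss then pulls the encoder outputs and their chosen codebook vectors toward each other, which is how the model learns to keep the musically relevant information in so few tokens.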
The OpenAI team trained their model on raw audio and got raw audio back out. The VQ-VAE’s encoder and decoder are convolutional neural networks; a transformer then generates sequences of compressed codes, which are upsampled level by level and finally decoded back into raw audio. Because it works at the audio level, Jukebox can also render lyrics as sung vocals, which would have been impossible with purely symbolic approaches.
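The generate-then-upsample flow can be sketched as a pipeline: a top-level prior samples coarse codes, upsamplers expand them into progressively finer codes, and the VQ-VAE decoder turns the finest level back into audio. All of the function names and the 4x factor below are illustrative stand-ins, not the real OpenAI API:

```python
import random

def sample_top_level_codes(n):
    """Stand-in for the top-level transformer prior over coarse codes."""
    return [random.randrange(16) for _ in range(n)]

def upsample(codes, factor):
    """Stand-in for an upsampler prior: expand each coarse code into
    `factor` finer codes conditioned on it (here, by simple repetition)."""
    return [c for code in codes for c in [code] * factor]

coarse = sample_top_level_codes(4)         # shortest, most compressed level
middle = upsample(coarse, 4)               # 4x more codes
fine = upsample(middle, 4)                 # finest level, fed to the decoder
print(len(coarse), len(middle), len(fine))  # → 4 16 64
```

The design point is that long-range structure (melody, song form) is cheap to model at the coarse level, while the upsamplers only need to fill in local texture.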
For lyric conditioning, OpenAI added an encoder over the lyric text, then inserted attention layers into the music model whose queries come from the music decoder and whose keys and values come from the lyric encoder, letting Jukebox align the audio it generates with the right point in the lyrics. The top-level prior has five billion parameters, and the model was trained on 512 NVIDIA V100 GPUs for four weeks.
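The queries-attend-to-keys-and-values mechanism is ordinary scaled dot-product attention; the only twist is where the vectors come from. A toy sketch (all vectors here are made up for illustration):

```python
import math

def attend(query, keys, values):
    """Single-query scaled dot-product attention."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

lyric_keys = [[1.0, 0.0], [0.0, 1.0]]      # from the lyric encoder
lyric_values = [[5.0, 0.0], [0.0, 5.0]]    # also from the lyric encoder
music_query = [2.0, 0.0]                   # from one music-decoder position

context = attend(music_query, lyric_keys, lyric_values)
print(context)  # weighted mostly toward the first lyric token
```

Each music-token position pulls in a blend of lyric information this way, which is how the model learns which syllable it should be singing at any given moment.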