Meta Unveils MusicGen AI Tool

MENLO PARK, CA – Meta has unveiled its latest artificial intelligence (AI) innovation: MusicGen. Felix Kreuk, an AI research engineer at Meta, announced this via a Twitter thread. A sort of ChatGPT for tunes, the MusicGen AI tool is designed to transform text prompts into audio recordings using advanced machine-learning techniques. Yet, as Meta takes music to a whole new level, copyright holders are concerned about the effects AI could have on licensing. 

MusicGen AI Tool

According to Meta, MusicGen has been trained on an impressive dataset of more than 20,000 hours of music. The training data includes 10,000 hours of licensed music, which the company describes as high quality, along with 390,000 instrument-only tracks. 

MusicGen was released as open-source software on GitHub, and a demo is available on Hugging Face, where users can try its four model versions (small, medium, melody, and large). Currently, MusicGen requires a GPU to run, and according to the project README, Meta “[recommends] 16GB of memory, but smaller GPUs will be able to generate short sequences, or longer sequences with the small model.”
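
For readers who want to try it locally, the snippet below is a minimal sketch based on the audiocraft README at the time of release. The checkpoint name ('small'), clip duration, and output filenames are illustrative, and the exact API or model identifiers may differ in later versions of the library.

```python
# Minimal text-to-music sketch using Meta's audiocraft library, following the
# release-time MusicGen README. Assumes a CUDA-capable GPU and `pip install audiocraft`.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load one of the released checkpoints: 'small', 'medium', 'melody', or 'large'.
# The 'small' model fits in less GPU memory, in line with Meta's 16GB recommendation.
model = MusicGen.get_pretrained('small')

# Generate roughly 8 seconds of audio per prompt.
model.set_generation_params(duration=8)

descriptions = [
    'pop dance track with catchy melodies, tropical percussion, and upbeat rhythms',
    'acoustic folk song to play during road trips',
]
wav = model.generate(descriptions)  # returns one waveform per prompt

# Save each clip as an audio file at the model's native sample rate,
# with loudness normalization applied.
for idx, one_wav in enumerate(wav):
    audio_write(f'clip_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```

Swapping 'small' for 'medium' or 'large' trades generation speed and memory for quality, which is why Meta points users with smaller GPUs toward the small model.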

With MusicGen, users can provide specific prompts such as “pop dance track with catchy melodies, tropical percussion, and upbeat rhythms, perfect for the beach” or “acoustic folk song to play during road trips.” 

Based on these prompts, MusicGen generates short clips of music that align with the given descriptions. It can even be guided by references to particular eras or songs, allowing for a more tailored music generation experience. Moreover, Meta has confirmed that MusicGen is not limited to creating short clips; it can generate longer tracks as well. 
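
The melody guidance described above is exposed through the dedicated 'melody' checkpoint. The sketch below, again based on the release-time audiocraft README, shows the idea; the reference audio filename is illustrative, and any short clip whose melody should steer the output can be supplied.

```python
# Hedged sketch of melody-conditioned generation with the 'melody' checkpoint.
# The reference file path is illustrative, not part of the library.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('melody')
model.set_generation_params(duration=8)

descriptions = ['acoustic folk song to play during road trips']
melody, sr = torchaudio.load('reference_melody.wav')  # illustrative filename

# Condition the text prompt on the melodic contour (chroma) of the reference clip;
# melody[None] adds the batch dimension expected by the model.
wav = model.generate_with_chroma(descriptions, melody[None], sr)

audio_write('melody_clip', wav[0].cpu(), model.sample_rate, strategy='loudness')
```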

Music Industry Concerns

Meta assures that legal agreements with rights holders cover all of the music used to train MusicGen. However, the data does not come from industry giants like Universal Music Group or from popular artists. Instead, the training set is drawn from stock-media libraries such as Shutterstock and Pond5. 

Researchers at Meta emphasize the importance of open research and equal access to these models. They acknowledge that generative models could create unfair competition for artists, but they hope that more advanced controls, such as the melody conditioning feature, will make these models useful to both music enthusiasts and professional musicians and songwriters.

AI in Music is Growing

MusicGen’s open-source release further contributes to the growing number of AI-based music models available to the public. Alphabet, the parent company of Google, recently introduced its MusicLM, trained on approximately 280,000 hours of material from the Free Music Archive, while OpenAI has previously developed MuseNet and Jukebox.

Although these AI music models are currently just research projects rather than commercial products, the fact that companies like Alphabet and Meta are actively working on them demonstrates the technology’s progress. 

As the music industry grapples with the implications for rights holders, artists, and copyright laws, it will be crucial to monitor the development and application of these systems by established companies and emerging startups.

Jason Rowlett

Jason is a Web3 writer and podcaster. He hosts the BCCN3 Talk podcast and YouTube channel and has interviewed several industry leaders at global Web3 events. An active crypto investor, Jason is a HODLer and advocate for the DeFi industry. He lives in Austin, Texas, where he rows competitively.
