> We present MusicGen: A simple and controllable music generation model. MusicGen can be prompted by both text and melody. We release code (MIT) and models (CC-BY NC) for open research, reproducibility, and for the music community: /h1l4LGzYgf

Meta says that MusicGen was trained on 20,000 hours of music, including 10,000 “high-quality” licensed music tracks and 390,000 instrument-only tracks from ShutterStock and Pond5, a large stock media library. The company hasn’t provided the code it used to train the model, but it has made available pre-trained models that anyone with the right hardware - chiefly a GPU with around 16GB of memory - can run.

So how does MusicGen perform? Well, I’d say - though certainly not well enough to put human musicians out of a job. Its songs are reasonably melodic, at least for basic prompts like “ambient chiptunes music,” and - to my ears - on par with (if not slightly better than) the results from Google’s AI music generator, MusicLM. Here’s the output from MusicGen for “jazzy elevator music”:

Generative music is clearly improving (see Riffusion, Dance Diffusion and OpenAI’s Jukebox). But major ethical and legal issues have yet to be ironed out. AI like MusicGen “learns” from existing music to produce similar effects, a fact with which not all artists - or generative AI users - are comfortable.
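For readers with the hardware to try the released checkpoints, Meta’s `audiocraft` repository exposes a small Python API. The sketch below shows text-prompted generation; the model name `facebook/musicgen-small` and the output filename are assumptions based on the project’s published examples, and actually running it requires downloading the model weights and, for the larger checkpoints, a suitably large GPU.

```python
# Sketch: text-to-music generation with a pre-trained MusicGen checkpoint
# via Meta's audiocraft package (model name and calls assumed from the
# project's published examples; weights are downloaded on first use).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')  # smallest checkpoint
model.set_generation_params(duration=8)  # seconds of audio per sample

# One waveform is generated per text description in the batch.
wavs = model.generate(['jazzy elevator music'])

for i, wav in enumerate(wavs):
    # Writes e.g. jazzy_0.wav, loudness-normalized, at the model's sample rate.
    audio_write(f'jazzy_{i}', wav.cpu(), model.sample_rate, strategy='loudness')
```

Melody-conditioned generation follows the same pattern, with the melody-capable checkpoint and a reference audio clip passed alongside the text prompt.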