SUBMOTIVE AI – Guide + files

Viewing 16 reply threads
  • Author
    Posts
    • #386689
      GENIE HQ
      Keymaster

      You can download examples of the AI-generated bass sounds Submotive made in this month's tutorial.
      Get them along with the guide here —> https://www.dropbox.com/scl/fi/x7zvh3ymwf8uk6lu50ols/AI-GEN-BASS.zip?rlkey=m6nqq158exxf1vagxkjefsnu1&dl=0

      For ease, the links are all here:

      SUBMOTIVE’S QUICK REFERENCE
      “CREATING NEW SOUNDS WITH AI”

      1. Select your samples. Make sure they are all .wav files, either 16-bit 44.1 kHz or 16-bit 48 kHz; it doesn't matter which, provided they all share the same format. Use 75 samples minimum for good results, 200+ for even better.

      2. Upload all samples to Google Drive

      3. Chunk samples using the chunker sheet https://colab.research.google.com/drive/1wCjjFir4vkWSTU3DwVgCvi5qGtaNnGb_

      4. Save chunked samples to Google Drive

      5. Download this checkpoint as your starting model, then upload it to your Google Drive.
      https://model-server.zqevans2.workers.dev/jmann-small-190k.ckpt

      6. Run the Finetune Dance Diffusion sheet. This tells the AI model what to train on and where to save the models/checkpoints. https://colab.research.google.com/github/Harmonai-org/sample-generator/blob/main/Finetune_Dance_Diffusion.ipynb#scrollTo=Q0XrSOHEmch

      7. Run the Dance Diffusion sheet to start creating sounds: https://colab.research.google.com/github/Harmonai-org/sample-generator/blob/main/Dance_Diffusion.ipynb#scrollTo=JHsHQcc6rHu7
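      Before uploading (steps 1–2), a quick sanity check that every sample matches the required format can save a failed training run later. A minimal sketch using Python's stdlib wave module; the function name and folder argument are placeholders:

      ```python
      import wave
      from pathlib import Path

      def check_samples(folder):
          """Verify every .wav is 16-bit, all share one sample rate, and there are enough of them."""
          rates, problems = set(), []
          files = sorted(Path(folder).glob("*.wav"))
          for f in files:
              with wave.open(str(f), "rb") as w:
                  if w.getsampwidth() != 2:          # 2 bytes per sample = 16-bit
                      problems.append(f"{f.name}: not 16-bit")
                  if w.getframerate() not in (44100, 48000):
                      problems.append(f"{f.name}: unusual rate {w.getframerate()}")
                  rates.add(w.getframerate())
          if len(rates) > 1:                          # the guide says all files must match
              problems.append(f"mixed sample rates: {sorted(rates)}")
          if len(files) < 75:
              problems.append(f"only {len(files)} samples; 75+ recommended")
          return problems
      ```

      An empty list back means the folder is ready to upload to Google Drive for chunking.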

    • #386834
      Harry
      Moderator

      Hi all 👋

      Resubbed for the month to check this out, and I've been experimenting a bit the past few days. I'm sure I'll have questions as I experiment more, but I wanted to point out a couple of things I ran into already, in case anyone else hits something that stops them. I'm not a programmer, so I'm coming into this very inexperienced, but determined to get it to work.

      In general, for the Install Dependencies step on the sheets, where you get a Restart Session prompt, yes you'll have to hit Restart Session, but I found I need to wait a few seconds after the prompt appears before clicking it so it does what it's supposed to do. At first I was hitting it almost immediately after it popped up, and it would fail. Found that if I just took a breath and waited a few seconds, it worked. There is also a Restart Session button in the code; I usually hit that too.

      For step 6, the Finetune Dance Diffusion sheet, there is a point in the Train section where you have to manually input some of the paths in the code. I was running into an issue where I could not get it to run, even after manually entering the paths. Then I started trying things, and noticed there is a line that says:

      $save_wandb_str \

      Take that line out, then run the code again and you should be fine, from what I've seen.

      I ended up buying 100 compute units for the training process, and with those a new checkpoint (.ckpt) was generated roughly every 45 minutes. Also, before the chunking process, I would build up a folder on my computer and run all the samples through Adobe Audition's batch process to get them to 44.1 kHz 16-bit, in case any sounds were outside those rates, i.e. lots of 24-bit files and the occasional 48 kHz.
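      The bit-depth half of that Audition batch step can be approximated in stdlib Python (sample-rate conversion still needs a proper resampler such as ffmpeg or Audition itself). A minimal sketch, assuming 24-bit little-endian PCM input; the function name is mine:

      ```python
      import wave
      from pathlib import Path

      def to_16bit(src_path, dst_path):
          """Convert a 24-bit PCM .wav to 16-bit by keeping the top 2 bytes of each sample."""
          with wave.open(str(src_path), "rb") as w:
              params, frames = w.getparams(), w.readframes(w.getnframes())
          if params.sampwidth == 2:                 # already 16-bit: pass through unchanged
              out = frames
          elif params.sampwidth == 3:               # 24-bit LE: drop the low byte of each sample
              out = b"".join(frames[i + 1:i + 3] for i in range(0, len(frames), 3))
          else:
              raise ValueError(f"unsupported sample width: {params.sampwidth}")
          with wave.open(str(dst_path), "wb") as w:
              w.setnchannels(params.nchannels)
              w.setsampwidth(2)                     # force 16-bit output
              w.setframerate(params.framerate)
              w.writeframes(out)
      ```

      Run it over a folder with `Path(...).glob("*.wav")` before chunking, so the training set never mixes bit depths.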

      And thanks to James Submotive for going through all this, found it very interesting.

      "Knowledge kept is knowledge lost." - Bobbito Garcia

    • #387001
      JOPPA
      Participant

      It’s great to see Harry back here for a bit, hope all is well with you

      • #388194
        C8CBE
        Participant

        Where did you go to buy the 100 compute units, mate? Been looking but can’t seem to find where that is

      • #388205
        C8CBE
        Participant

        Haha, ignore this. Replied to the wrong person, and I have now worked it out anyway… It’s been a LONG day haha

    • #387065
      Harry
      Moderator

      It’s great to see Harry back here for a bit, hope all is well with you

      Thanks! Working on a lot of music projects, some solo stuff and some collabs with various friends. Have had some cool releases over the past year, and more lined up.

      "Knowledge kept is knowledge lost." - Bobbito Garcia

    • #387210
      GENIE HQ
      Keymaster

      Agreed, Joppa! Nice to see a new Harry post!
      Thanks for sharing your experiences with it, H.
      Yes, it’s a much more demanding tutorial this month, but we hope people stick it out and get their heads around it, as the potential is incredible.

    • #387232
      Harry
      Moderator

      Agreed, Joppa! Nice to see a new Harry post!
      Thanks for sharing your experiences with it, H.
      Yes, it’s a much more demanding tutorial this month, but we hope people stick it out and get their heads around it, as the potential is incredible.

      Right now it feels like a new approach to resampling, not just giving a prompt to an existing AI platform. The training is interesting to me; I’m already thinking about things to try, like creating a model based on samples I like, then creating a chunked folder of my own sounds and running the model .ckpt on those sounds to hear what happens. I’ve already done training on one set of sounds I like, and then used that model on a chunked folder of other sounds, to hear what it creates.

      "Knowledge kept is knowledge lost." - Bobbito Garcia

      • #387254
        GENIE HQ
        Keymaster

        Yes, the options are so exciting. And it’s cool how it feels somehow very clunky and primitive to operate at this stage, while what’s actually happening is extremely complex. Feels very cutting edge at the moment, that’s for sure.
        We are just on the upward shoulder of the bell curve in terms of how mind-blowing sound is going to be.

    • #387357
      Jarrod
      Participant

      When downloading and saving the model, do I extract the zip and upload the files to Drive, or upload the zip itself? Thanks for any advice

    • #387436
      Harry
      Moderator

      When downloading and saving the model, do I extract the zip and upload the files to Drive, or upload the zip itself? Thanks for any advice

      The jmann-small-190k.ckpt should download just as that, a roughly 3 GB file. Then upload the .ckpt file to your Google Drive. When you run Finetune_Dance_Diffusion.ipynb, on the form for training you have to point to a .ckpt file. I have mine set up like this:

      "Knowledge kept is knowledge lost." - Bobbito Garcia

    • #387437
      Harry
      Moderator

      Also, not sure how I’m a moderator again, maybe an old role came into play? LOL

      "Knowledge kept is knowledge lost." - Bobbito Garcia

      • #387439
        Jarrod
        Participant

        Cheers mate
        Yeah, I managed to upload the file to Drive and set up the path to the file.
        The only concern was that the file was showing up as Jmann-small-190k.ckpt.zip
        So I tried extracting the zip file, however the data files inside are not combined. I then uploaded the entire zip file and renamed it without the .zip on the end.
        Now this may all be because I was being lazy and trying to run all this on an iPad 😂 and not my studio Mac, which entails walking lol.
        So, I’ll try again in the studio and hopefully have more joy.
        Thanks anyway, love the thought of the experiments to be had
        J

    • #388028
      C8CBE
      Participant

      Easy lads,

      First time posting! Hope everyone is good

      I am proper stuck on this… on the training part. I have followed everything Submotive said to do, yet I am still getting an error that reads:

      --name "dd-SG-Machine" \
      --training-dir "$TRAININ/content/drive/MyDrive/TRAIN/SGchunkedAIBasses" \
      --sample-size $65536 \
      --accum-batches $4 \
      --sample-rate $44100 \
      --batch-size $2 \
      --demo-every $250 \
      --checkpoint-every $500 \
      --num-workers 2 \
      --num-gpus 1 \
      $random_crop_str \
      --save-path "/content/drive/MyDrive/TRAIN/SG-Sample-Machine"

      /content/sample-generator
      usage: train_uncond.py [-h] [--config-file CONFIG_FILE] [--wandb-config WANDB_CONFIG]
      [--name NAME] [--training-dir TRAINING_DIR] [--batch-size BATCH_SIZE]
      [--num-gpus NUM_GPUS] [--num-nodes NUM_NODES] [--num-workers NUM_WORKERS]
      [--sample-size SAMPLE_SIZE] [--demo-every DEMO_EVERY]
      [--demo-steps DEMO_STEPS] [--num-demos NUM_DEMOS] [--ema-decay EMA_DECAY]
      [--seed SEED] [--accum-batches ACCUM_BATCHES] [--sample-rate SAMPLE_RATE]
      [--checkpoint-every CHECKPOINT_EVERY] [--latent-dim LATENT_DIM]
      [--cache-training-data [CACHE_TRAINING_DATA]] [--random-crop [RANDOM_CROP]]
      [--ckpt-path CKPT_PATH] [--save-path SAVE_PATH]
      [--start-method START_METHOD] [--save-wandb SAVE_WANDB]
      train_uncond.py: error: argument --accum-batches: expected one argument

      So the issue is accum-batches. I have tried changing that value both lower (2) and higher (6), but still get the same problem.

      Anyone else having this issue or know a fix?

      • #388029
        Harry
        Moderator

        I think your issue might be this line:

        --training-dir "$TRAININ/content/drive/MyDrive/TRAIN/SGchunkedAIBasses" \

        Specifically the $TRAININ

        Here’s the code for the model I’m starting tonight:

        --name "dd-2024-03 MAR_chunked" \
        --training-dir "/content/drive/MyDrive/Train/chunkedbass" \
        --sample-size $65536 \
        --accum-batches $4 \
        --sample-rate $44100 \
        --batch-size $2 \
        --demo-every $250 \
        --checkpoint-every $500 \
        --num-workers 2 \
        --num-gpus 1 \
        $random_crop_str \
        --save-path "/content/drive/MyDrive/+ AI – Audio – March 2024/AI/models/2024-03 MAR_chunked"

        "Knowledge kept is knowledge lost." - Bobbito Garcia

      • #388086
        saltbeef
        Participant

        Try changing any $ symbols to quotes instead; this worked for me. For example, instead of $4 use "4".
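        That fix makes sense if the form's `$` placeholders aren't being substituted: in a bash cell, `$4` means the shell's fourth positional parameter, which is empty in a notebook, so `--accum-batches` ends up with no value at all. That reproduces the exact argparse error above, as this minimal sketch shows:

        ```python
        import argparse

        parser = argparse.ArgumentParser()
        parser.add_argument("--accum-batches", type=int)   # expects exactly one value

        # With a literal value (saltbeef's "4") the flag parses fine:
        ok = parser.parse_args(["--accum-batches", "4"])
        assert ok.accum_batches == 4

        # An empty shell expansion like $4 leaves the flag with no value, which
        # triggers: "error: argument --accum-batches: expected one argument"
        try:
            parser.parse_args(["--accum-batches"])
        except SystemExit:
            pass  # argparse prints the usage message and exits
        ```

        So the value you choose (2, 4, 6) never mattered; the shell was eating it before argparse saw it.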

    • #388084
      marek@newbreakscom
      Participant

      Hello, it seems like everyone is running into different problems, but we’re all stuck at the training section. Seems like Dance Diffusion is still suffering problems… My problem is this, and I have no idea what to do:

      /content/sample-generator
      Using device: cpu
      Random crop: False
      Traceback (most recent call last):
      File "/content/sample-generator/train_uncond.py", line 228, in <module>
      main()
      File "/content/sample-generator/train_uncond.py", line 199, in main
      train_dl = data.DataLoader(train_set, args.batch_size, shuffle=True,
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 350, in __init__
      sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/sampler.py", line 143, in __init__
      raise ValueError(f"num_samples should be a positive integer value, but got num_samples={self.num_samples}")
      ValueError: num_samples should be a positive integer value, but got num_samples=0
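      Two clues in that output, for what it's worth: `Using device: cpu` means no GPU runtime was attached, and `num_samples=0` means the DataLoader's dataset is empty, i.e. the `--training-dir` path is wrong or contains no audio files (e.g. Drive not mounted, or a typo like the `$TRAININ` prefix above). A quick pre-flight check, sketched in stdlib Python (the extension list is my assumption; check what your chunker actually outputs):

      ```python
      from pathlib import Path

      def count_training_files(training_dir):
          """Fail fast if the training directory would give the DataLoader zero samples."""
          exts = {".wav", ".flac", ".ogg", ".aiff"}   # assumed audio extensions
          n = sum(1 for f in Path(training_dir).rglob("*")
                  if f.suffix.lower() in exts)
          if n == 0:
              raise ValueError(f"no audio files found under {training_dir}; "
                               "check the --training-dir path and that Drive is mounted")
          return n
      ```

      Run it in a cell with your chunked-samples path before starting the train cell; a nonzero count rules out this particular ValueError.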

    • #388085
      saltbeef
      Participant

      I’m also having a problem here. Last night I got it to run and made one file of basses; this morning I’m having trouble with the GPU (step 1 in Dance Diffusion Fine Tune), saying: FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi'
      
      Then when I try to run the train module it says: lightning_fabric.utilities.exceptions.MisconfigurationException: No supported gpu backend found!

      I’ve not changed anything so not sure why it’s not working now. Ideas anyone?

      • #388105
        marek@newbreakscom
        Participant

        Hello @saltbeef, it seems like this GPU error is a global issue. I’m sure it was OK when I ran the GPU test for the first time yesterday. After that I had the “nvidia-smi” problem. Seems like we have to wait until they fix it? Does anyone else have the same problems?

      • #391986
        Kosmical
        Participant

        To all people who are still having the No such file or directory: ‘nvidia-smi’ issue at the Check GPU Status step:

        Here’s what I did to fix this.

        1. First I bought the Colab Pro subscription (not sure if that’s necessary, so maybe try skipping this step first)

        2. In the Finetune_Dance_Diffusion sheet go to “Additional connection options”. That’s a dropdown menu underneath your Google Account Button on the top right. It has a small arrow icon-button facing down.

        3. Select “Disconnect and delete runtime”.

        4. Reload the page.

        5. Run the “Check GPU Status” step again and you should not get errors anymore.

    • #388116
      marek@newbreakscom
      Participant

      OK, I give up on this… too many errors and no idea what to do! Let’s hope for a new version of this AI soon! I’m done for now; it’s not even as fun as tweaking a synth!

    • #388193
      C8CBE
      Participant

      Right guys, I have successfully managed to get it going. I didn’t do anything different settings-wise, apart from changing browsers from Safari to Edge (you can’t use Chrome, or I should say I couldn’t, because it was trying to log me into something really random that I’m guessing is possibly for some sort of moderator or something!)

      I’m going to leave it going for a few hours and repost back if I run into any errors.

      Thanks to @Harry and @saltbeef for your replies.

      I do hear you though, Marek. It is very clunky and annoying haha. If it’s causing too much procrastination, it’s definitely best to get in the DAW and crack on!

      • #388267
        saltbeef
        Participant

        I tried a different browser, still get the GPU error :/

      • #388268
        C8CBE
        Participant

        Easy mate,

        I just had a quick search on Google and found lots of people with the same issues. One forum seems to think that if you upgrade to Pro (it’s £10 a month) you won’t have that issue.

        I also upgraded and I’ve had my model running for over 9 hours now

        Might be worth a try, and it’s only £10, but I do get that it could be a waste if it doesn’t work. Submotive mentioned a Discord with a community of people who are doing this, so it might be worth re-listening to the video (it’s in part 2), joining that, and asking the question.

        Hope that helps g

      • #388304
        saltbeef
        Participant

        Upgrade what to pro?

      • #388451
        marek@newbreakscom
        Participant

        Same here, the GPU issue comes up when there is a high load on Colab. It doesn’t guarantee that your process will go through! You need a bit of luck. I started it yesterday and it produced at least two checkpoints before it shut down. Today I bought resources for 10 euros and it’s up and running without problems! So please upgrade Colab (top right). Good luck!

      • #388465
        C8CBE
        Participant

        Yeah, mine has been running for over 24 hours now without any issues. Hopefully the upgrade to Pro should sort out a lot of the issues people are having.

        Side note: not sure if there is a particular reason why Submotive did it the way he suggests in the tutorial, but you don’t actually have to generate the samples via the last link. If you go to your account on Weights and Biases, not only can you preview what’s being created, you can also download it as 44,100 Hz 16-bit WAV from there. I found that not only easier, but I also had much better results compared to generating a 20/30-second loop or one-shots.

      • #388469
        marek@newbreakscom
        Participant

        Alright, yeah, you can download it at W&B, but if you just want to use a trained model without doing the whole chunk-and-finetune part, the last step should be necessary — for example, if you want to give the trained model to a friend. In the Dance Diffusion generator new errors are coming up that I can’t fix right now. I’m connected and I have enough resources, but at this point I cannot execute without running into errors:

        Imports:
        ---------------------------------------------------------------------------

        ModuleNotFoundError Traceback (most recent call last)

        <ipython-input-1-f50d6bef0516> in <cell line: 12>()
        10 import gc
        11
        ---> 12 from diffusion import sampling
        13 import torch
        14 from torch import optim, nn

        ModuleNotFoundError: No module named 'diffusion'

        ---------------------------------------------------------------------------
        NOTE: If your import is failing due to a missing package, you can
        manually install dependencies using either !pip or !apt.

        To view examples of installing some common dependencies, click the
        "Open Examples" button below.
        ---------------------------------------------------------------------------

        And the actual Sample Generator:

        ---------------------------------------------------------------------------

        ModuleNotFoundError Traceback (most recent call last)

        <ipython-input-4-0d14f6aab1b4> in <cell line: 3>()
        1 from urllib.parse import urlparse
        2 import hashlib
        ----> 3 import k_diffusion as K
        4
        5 #@title Create the model

        ModuleNotFoundError: No module named 'k_diffusion'

        ---------------------------------------------------------------------------
        NOTE: If your import is failing due to a missing package, you can
        manually install dependencies using either !pip or !apt.

        To view examples of installing some common dependencies, click the
        "Open Examples" button below.
        ---------------------------------------------------------------------------
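        Both of those ModuleNotFoundErrors typically just mean the sheet's install-dependencies cell hasn't been run in the current runtime (a runtime restart or disconnect wipes installed packages). A small stdlib sketch for checking before hitting the generate cell; the module list is my assumption from the tracebacks above:

        ```python
        import importlib.util

        def missing_modules(modules):
            """Return the modules that are not importable in the current runtime."""
            return [m for m in modules if importlib.util.find_spec(m) is None]

        # In the Colab notebook: if this returns anything, rerun the sheet's
        # "Install Dependencies" cell (and its Restart Session step) first.
        # missing_modules(["diffusion", "k_diffusion", "torch"])
        ```

        It only detects the problem; the actual fix is rerunning the notebook's own setup cell rather than hand-installing packages.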

      • #390165
        JAMIE RUTHERFORD
        Participant

        Having exactly the same issues as discussed here by yourselves. No mention of this in the AI Bass Part II video, so I don’t know where to go from here. Any help from you guys would be much appreciated

      • #388658
        C8CBE
        Participant

        Google Colab Pro mate. If you take out a subscription (£10 a month one is fine) it should stop the GPU errors you’re getting

    • #389781
      Yo
      Participant

      Submotive should’ve spent the 45 minutes going more in depth into bass design rather than wasting time on that AI crap.

    • #390090
      marek@newbreakscom
      Participant

      I have created a pad machine, and it’s worth taking the time to train the model. The other thing I found out about pads is that it makes sense to use Dance Diffusion (the last step), because you might otherwise end up with pad sounds that are too short; there you can adjust the length. For this month I ran out of resources. Next month I’m going to train the bass and pads a little more! Good luck everyone 🙂

    • #390407
      marek@newbreakscom
      Participant

      Today I trained the AI on snares. It only took about 4 checkpoints to get a proper result; it went faster than with the bass, I guess. I’ll use those results in the next tune.
