1. Open and right-click > save as: https://files.catbox.moe/ee1xot.json
2. Load this json in kohya sd scripts. It should copy over all the settings.
3. Adjust your batch size and training steps around your hardware. They should multiply out to 4000 total. Default is batch size 5 with 800 steps (800 x 5 = 4000). I use an RTX 3090, so cards with less than 24GB VRAM may need to lower the batch size and increase the step count accordingly. Lower batch sizes WILL mean longer training times. Sadly this is an inescapable limitation for people with older hardware.
4. Adjust the repeats on your dataset based on how many epochs you want (usually 10-20). You can calculate this from the number of images you have. Say you want ~10 epochs with 37 images; the formula is: total steps / (images x epochs) = repeats. So 4000 / 370 = 10.81, rounded down to 10 repeats, which may leave a small 11th epoch at the end. Not a big deal. The number of epochs only affects how often progress is saved: with 10 epochs it saves every 10% of training, and with 20 epochs it saves every 5%. More epochs can give better results since you have extra "checkpoints" to fall back on if the dataset is being overtrained.
5. You can toggle "flip augmentation" and "debiased estimation loss" as you see fit. If you're training an asymmetrical character, turn off "flip augmentation" for sure. "Debiased estimation loss" may give better results on Vpred models, but may possibly be worse for training styles. I have only anecdotal evidence and schizophrenia to back up this information.

Now Kohya is set up for training.
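The step/batch/repeat arithmetic above can be sketched in a few lines. This is just a rough calculator for the example numbers in this guide (the 4000-step target, 37 images), not anything from kohya itself:

```python
TOTAL_STEPS = 4000  # target from this guide: batch_size * steps should equal this

def steps_for_batch_size(batch_size):
    """Steps needed so that batch_size * steps = TOTAL_STEPS."""
    return TOTAL_STEPS // batch_size

def repeats_for(images, epochs):
    """Dataset repeats: total steps / (images * epochs), rounded down."""
    return TOTAL_STEPS // (images * epochs)

# Default from the guide (RTX 3090): batch size 5 -> 800 steps
print(steps_for_batch_size(5))   # 800

# Lower-VRAM card: batch size 2 -> 2000 steps (same total work, longer run)
print(steps_for_batch_size(2))   # 2000

# 37 images aiming for ~10 epochs: 4000 / 370 = 10.81, floored to 10 repeats
print(repeats_for(37, 10))       # 10
```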
There are 10,000 guides on how to collect and tag a dataset blablabla. I just use: https://github.com/Bionus/imgbrd-grabber and https://github.com/toshiaki1729/dataset-tag-editor-standalone (there is an extension version for a1111/REforge as well).

[Tip for the grabber program]: tools > options > save > separate log files > add: no name, location type: path and filename, folder: same as images, filename: %md5%.txt , text file content: %all:separator=^, %
This will save a text file with the same filename as the image, containing all of the tags on the image from whatever site (gelbooru, e621, etc.) in the standard [1girl, standing, close-up] format. It saves with underscores, which depending on the training model can be good or bad, but those can be batch-removed using a dataset editor.

Information on training settings and different schedulers is very scarce as to how they function for lora training. It's either baby-talk sub-80-IQ explanations or Star Trek technobabble tier. If this guide sucks and every professional lora producer in the thread comes out of the woodwork to critique my method, then I'll steal their ideas and repost muhahaha.
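If you'd rather not open a dataset editor just for the underscore cleanup, it's trivial to script. A minimal sketch, assuming the .txt caption files sit next to the images; the "dataset" folder name is hypothetical, point it wherever the grabber saved your files:

```python
from pathlib import Path

def strip_underscores(dataset_dir):
    """Replace underscores with spaces in every .txt caption file in the folder."""
    for txt in Path(dataset_dir).glob("*.txt"):
        tags = txt.read_text(encoding="utf-8")
        # e.g. "1girl, long_hair, blue_eyes" -> "1girl, long hair, blue eyes"
        txt.write_text(tags.replace("_", " "), encoding="utf-8")

# Hypothetical folder name; use your actual dataset path
strip_underscores("dataset")
```

Note this rewrites the caption files in place, so keep a backup of the originals if you might want the underscored tags back.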