Artist: Bahnbahn
#1

About artist:
You may already know this artist, as they have been active for quite some time, going back to the days when Tumblr was blooming and FA was awesome. Bahnvoyage, Bonbonpony, Nsfwbonbon, BourbonNSFW: this artist has gone through many names over their career. The oldest pieces date back as far as 2010. Even at the beginning of this path, the focus was primarily on ponies, and later on the pregnancy kink. Over time the style has been refined somewhat, but it has not changed all that much: soft lines with nice gradient coloring.

Bahn is, like many artists, somewhat depressed about life. Goldfish keeping is the main hobby, and pretty much nothing beyond that: an empty room, nothing else to treasure. Wearing mask after mask, blinded by their own flock. But one thing I know for sure: this artist treasures troubles very much, as only troubled characters have strong will and desires. I see no reason to deny those wishes.

TLDR:
A pretty good artist when it comes to pregnancy-themed pictures. Usually prefers anthro, but from time to time you can find some feral work.

How to Invoke?
Any model trained on e621 data is likely to have Bahn as an active artist tag. Look at Fluffusion or FluffyRock for examples.

Datasets:
LoRAs:
#2

One of the images that was made is this one:
https://nl.ib.metapix.net/files/full/445...f94408.png

But it was made way back in May 2023. I knew nothing about LoCONs then, and the models in question were quite undertrained, not to mention largely absent.

There was, actually, an image that at least partly belongs to BB, too.
https://nl.ib.metapix.net/files/full/465...ler_t1.png

This one is more recent, but it was made by mixing BB with someone else.
#3

So, Bahn will be the experimental soul I sharpen my soulstealing skills on.

I will go step by step. The thing is, I already did some tests off-camera, so this run is tweaked, but these are still genuine first steps.

1. Let us determine how we measure the success of training
We will build a two-sided comparison row. The middle will correspond to 'bahnbahn style as the base network thinks it should be'. The left side will correspond to 'style from SKS-less training with only 10% tag dropout'. The right side will be 'style from SKS-style training with a deeper tag dropout rate (20%)'.
Then we will select the best model on each side to be compared in the 'finals'.

To keep things fair, we will ask the model to draw 3 things that were taught to it and 3 things that were not. Every single prompt will be given unchanged to all networks.

The middle prompt will be as follows:
Positive prompt: <lora:STYLE_BAHNBAHN_V1_(type)-000012:1>, solo, zoroark, feral, female, pregnant, lying, on back, looking at viewer, hands on belly, belly blush, navel, outie navel, pussy, pussy juice, fur, detailed fur, smile, open mouth, red eyes, detailed background, cave, sun, mountain, moss, plant
Negative prompt: text, comic, sequence, guide lines, patreon logo, username, signature, watermark
Size: 768x640
Sampler: DPM++ 2S Karras
Steps: 48
CFG: 7
Seed: 42
---
XYZ grid
X parameter - Prompt S/R - <lora:STYLE_BAHNBAHN_V1_(type)-000012:1>,<lora:STYLE_BAHNBAHN_V1_(type)-000011:1>,...,<lora:STYLE_BAHNBAHN_V1_(type)-000001:1>,by bahnbahn,<lora:STYLE_BAHNBAHN_V1SKS_(type)-000001:1> sks,<lora:STYLE_BAHNBAHN_V1SKS_(type)-000002:1> sks,...,<lora:STYLE_BAHNBAHN_V1SKS_(type)-000012:1> sks
Y parameter - Prompt S/R - "zoroark, feral","mew, feral","princess luna \(mlp\), feral","typhlosion, feral","espeon, feral","owl, anthro"

Bahn has already drawn Princess Luna, Mew, and Zoroark, but has never even touched Typhlosion or Espeon, and has never made anthro owls, so the test should be pretty good.
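As a side note, the X-axis substitution list above follows a regular pattern, so it can be generated instead of typed by hand. A minimal sketch, using LOCON16 in place of the (type) placeholder (the helper name is mine):

```python
# Build the Prompt S/R X axis: SKS-less epochs 12..1 on the left,
# the plain artist tag in the middle, SKS epochs 1..12 on the right.
def build_x_axis(kind: str, epochs: int = 12) -> list[str]:
    left = [f"<lora:STYLE_BAHNBAHN_V1_{kind}-{e:06d}:1>" for e in range(epochs, 0, -1)]
    middle = ["by bahnbahn"]
    right = [f"<lora:STYLE_BAHNBAHN_V1SKS_{kind}-{e:06d}:1> sks" for e in range(1, epochs + 1)]
    return left + middle + right

axis = build_x_axis("LOCON16")
print(len(axis))  # 25 cells: 12 + 1 + 12
```

Joining the result with commas gives exactly the string the X parameter field expects.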

2. Start comparing different runs and find the best trainings on the dataset
I use the same 2000-image dataset for each training, so there will be next to no difference there; the difference will come from the various training methods.

Comparison A: Standardized LoCON
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
If we look at the SKS-less side (left), then at high enough epoch counts the images become much rougher and the colors shift too far. The best one on the left is probably epoch 5; at least it kept the best color variation. It still differs from the 'middle' a lot, as most of the image is glossy/shiny for some reason.
The SKS variation (right side) kept a much more interesting style even at the 12th epoch! However, it also shows some overtraining (notice the poses of Mew and Luna). One of the best epochs in terms of balance between saturation and style is epoch 8, with everything down to epoch 4 being a viable option!
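For the record, a Comparison-A-style run maps onto the kohya-ss sd-scripts trainer roughly as below. This is a sketch from memory, not the exact command I used: the flag names are standard sd-scripts/LyCORIS options as far as I recall, and the dropout value shown is the 20% SKS-side setting.

```python
# Sketch: train_network.py arguments matching Comparison A
# (LoCon, dim/alpha 16/8, conv 8/1, LR 1e-4 / 5e-5, AdamW8bit).
locon_args = [
    "--network_module=lycoris.kohya",
    "--network_args", "algo=locon", "conv_dim=8", "conv_alpha=1",
    "--network_dim=16",
    "--network_alpha=8",
    "--unet_lr=1e-4",
    "--text_encoder_lr=5e-5",
    "--optimizer_type=AdamW8bit",
    "--caption_tag_dropout_rate=0.2",  # 20% on the SKS side, 10% on the SKS-less side
    "--max_train_epochs=12",
    "--save_every_n_epochs=1",         # one checkpoint per epoch, for the XYZ grid
]
print(" ".join(locon_args))
```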

Comparison B: Standardized LoHA
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
And again, the further left the images go, the more damaged they look: lots of oversaturated colors and ripple effects. Even down at the 4th epoch it still looks horrible. Let's just say the 4th epoch is the best looking of them all; the best of the worst, so to say.
Going right, we again see a tendency to keep better quality, though some images are heavily oversaturated. I am going to say the 4th epoch is a go, and probably down to the 3rd as well.

Comparison C: Expanded LoHA
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
Well, after giving the LoHA more capacity, we got the expected result: on the left side, Mew and Luna snap into the same poses they were trained on, and the image itself is extremely rough. If I had to pick one, it would be the 3rd epoch.
The right side suffers the same problem: the extra dimensions literally gave it the ability to copy its training data, so even here we land somewhere around the 4th to 2nd epoch. I guess the 4th is really the way to go.

Comparison D: Standardized LoCON / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
Compared to A, the slower learning had some effect: at least the images were not so quick to crash and burn, though we can still see the colors separating at the high epochs. This time the 6th epoch looks more doable, and down to epoch 5 is still reasonably good.
The right side, though, is a masterpiece. Even the 12th epoch looks well trained, everything down to the 4th works, and we could probably train further without fear. We still need to find out what exactly caused this: the SKS tag or the higher dropout rate.

Comparison E: Standardized LoHA / Slower training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
The left side crashes and burns again, even with less training. Epoch 4 seems the most balanced of them all, but only barely.
The right side, however, surprises us again: perfect details at the 12th epoch, and it starts being excellent from the 4th.

Comparison F: Expanded LoHA / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
The left side, as always, breaks down and overlearns. The 2nd epoch is the best one.
The right side gives more interesting results, keeping its cool even at the 10th epoch and all the way down to the 3rd.

Comparison G: Standardized LoCON / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-6 / 5e-7
* Optimizer = AdamW8bit
Comparison Image
I gave this tier 100-times-slower learning. The good news: it has not really broken by the 12th epoch. The bad news: it has not really learned the style either.
The right side is essentially the same. Not broken, but not learned. 12th epoch.

Comparison H: Standardized LoHA / Slowest training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-6 / 5e-7
* Optimizer = AdamW8bit
Comparison Image
Pretty much the same as G. The 12th is fine, but neither side got far at all, so the pick is the 12th epoch for both.

Comparison I: Expanded LoHA / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-6 / 5e-7
* Optimizer = AdamW8bit
Comparison Image
And again, pretty much the same as G. The 12th is fine, but neither side got far at all, so the pick is the 12th epoch for both.

Comparison J: Native Full Training / AdamW / Small LR
* Learning Rate = 1e-6 / 5e-7
* Optimizer = AdamW8bit
Comparison Image
High epochs produce a lot of rough dots, which is not really what the style looks like. Something like the 5th epoch seems to be the best the SKS-less side can do.
The SKS side is somewhat interesting. I will leave my choice at the 7th epoch.

Comparison K: Native Full Training / Lion / Small LR
* Learning Rate = 1e-6 / 5e-7
* Optimizer = Lion8bit
Comparison Image
Even with a learning rate as low as 0.000001, it still managed to start copying and destroying itself by the 6th epoch. I can carefully say the 4th epoch is the best of them all.
The SKS side behaves exactly the same, though there I would sooner say the 2nd epoch. This is the first time SKS lost to non-SKS.

Comparison L: Native Full Training / AdamW / Smallest LR
* Learning Rate = 1e-7 / 5e-8
* Optimizer = AdamW8bit
Comparison Image
Almost the same as J, but very undertrained. The 8th epoch is fine enough...
The same goes for SKS, though the images there look somewhat better trained; the 8th epoch is fine...

Comparison M: Native Full Training / Lion / Smallest LR
* Learning Rate = 1e-7 / 5e-8
* Optimizer = Lion8bit
Comparison Image
I could not say it trained at all. The 8th epoch is reasonable for both.
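Stepping back, comparisons A through M form a grid over network type and learning rate, plus the native full-training runs. Enumerating the grid makes the bookkeeping clearer; a sketch (the labels are mine):

```python
from itertools import product

networks = ["LoCON 16/8", "LoHA 8/4", "LoHA 16/8"]
unet_lrs = [1e-4, 1e-5, 1e-6]  # the normal, slower, and slowest tiers

# Comparisons A-I: every LyCORIS network at every LR tier with AdamW8bit.
lycoris_runs = list(product(networks, unet_lrs))
# Comparisons J-M: native full training, two optimizers at the two low LR tiers.
native_runs = list(product(["AdamW8bit", "Lion8bit"], [1e-6, 1e-7]))

print(len(lycoris_runs) + len(native_runs))  # 13 runs, each trained in SKS and SKS-less form
```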

2.1 Compare SKS-less with a high dropout rate AND SKS-full with a small dropout rate
Because I am curious how SKS and the dropout rate interact, I started another batch. What I hope to find is that SKS means nothing and the dropout rate means everything.
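To make the crossover explicit: section 2 and this section together cover all four cells of the SKS × tag-dropout factorial. A sketch of the design, with the dropout pairing being my reading of the text:

```python
from itertools import product

# The four cells of the SKS x tag-dropout factorial, initially unassigned.
cells = dict.fromkeys(product([False, True], [0.10, 0.20]))

# Section 2 trained the original pairing: SKS-less/10% and SKS/20%.
cells[(False, 0.10)] = "section 2, left side"
cells[(True, 0.20)] = "section 2, right side"
# Section 2.1 trains the crossover: SKS-less/20% and SKS/10%.
cells[(False, 0.20)] = "section 2.1, left side"
cells[(True, 0.10)] = "section 2.1, right side"

assert all(v is not None for v in cells.values())  # full factorial covered
```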

Comparison N: Standardized LoCON
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
The left side is good up to approximately the 5th epoch. Everything after descends into rough, dotty textures.
The SKS side, however, is much more stable. Even the 12th epoch looks stable enough (though it seems to suffer from overtraining), so I think the 7th epoch is the better one.
So it does seem that SKS tagging, sadly, IS crucial.

Comparison O: Standardized LoHA
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
No matter how high the dropout is, 1e-4 is enough to completely burn the images at high epochs. The 4th epoch is the best of them all, yet it still suffers from overtraining...
The same goes for the SKS side, with the 4th being more or less the best epoch.

Comparison P: Expanded LoHA
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = AdamW8bit
Comparison Image
The 12th epoch is definitely overtrained, and the others are oversaturated, so I was forced to stop at the 3rd epoch on the left.
The same applies to the right side, actually; there the pick is the 4th epoch.

Comparison Q: Standardized LoCON / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
Who knows whether it is due to the high dropout or not, but on the left the 12th epoch seems pretty fine.
The same applies to the right side, so its 12th epoch is fine too.

Comparison R: Standardized LoHA / Slower training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
The left side is good enough at the 12th epoch.
The same goes for the right side: 12th epoch.

Comparison S: Expanded LoHA / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = AdamW8bit
Comparison Image
The left side is overtrained down to, probably, the 7th epoch.
It seems reasonably safe to assume the right side is fine at the 12th epoch.

Comparison T: Native Full Training / AdamW / Small LR
* Learning Rate = 1e-6 / 5e-7
* Optimizer = AdamW8bit
Comparison Image
On the left, the roughness forces a stop at the 4th epoch.
The right side made it as far as the 6th epoch.

Comparison U: Native Full Training / Lion / Small LR
* Learning Rate = 1e-6 / 5e-7
* Optimizer = Lion8bit
Comparison Image
The left side became hyperoptimized after the 2nd epoch, but I will still choose the 4th epoch as the main one.
Much the same goes for the right side; I will stop it at the 4th epoch too.

Comparison V: Native Full Training / AdamW / Smallest LR
* Learning Rate = 1e-7 / 5e-8
* Optimizer = AdamW8bit
Comparison Image
Both sides make it to the 8th epoch with no problem.

Comparison W: Native Full Training / Lion / Smallest LR
* Learning Rate = 1e-7 / 5e-8
* Optimizer = Lion8bit
Comparison Image
Both sides make it to the 8th epoch with no problem.
#4

2.2. Test the LION optimizer as well.
Comparison A-2: Standardized LoCON
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
LION got quite eager and started to overoptimize by the 3rd epoch on the left.
Surprisingly, the 3rd epoch on the right gives pretty much the same result.

Comparison B-2: Standardized LoHA
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
LoHA+LION was, in fact, SO successful that it started to outright copy the source pictures from the 3rd epoch onward! By all means, I am forced to pick the 2nd epoch!
And pretty much the same goes for SKS as well: the 2nd epoch is all we can allow ourselves.

Comparison C-2: Expanded LoHA
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
The LoRA got so powerful that it started turning every single image into its dataset! I cannot select anything but the 1st epoch on both sides.

Comparison D-2: Standardized LoCON / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
On the left, the 4th epoch seems the best one.
The same can be said about the right side: the 4th epoch.

Comparison E-2: Standardized LoHA / Slower training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
From the left side, epoch 3.
From the right, epoch 4.

Comparison F-2: Expanded LoHA / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 2nd epoch from the left.
The 2nd from the right.

Comparison G-2: Standardized LoCON / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-6 / 5e-7
* Optimizer = Lion8bit
Comparison Image
The 4th epoch for both sides.

Comparison H-2: Standardized LoHA / Slowest training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 4th from both sides.

Comparison I-2: Expanded LoHA / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
Exactly the 1st epoch on both.

And, of course, we again compare SKS-less with a high dropout rate AND SKS-full with a small dropout rate.

Comparison J-2: Standardized LoCON
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
Both sides stop at the 3rd epoch.

Comparison K-2: Standardized LoHA
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
Exactly the 1st epoch on both.

Comparison L-2: Expanded LoHA
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-4 / 5e-5
* Optimizer = Lion8bit
Comparison Image
Exactly the 1st epoch on both.

Comparison M-2: Standardized LoCON / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
Both at the 5th epoch.

Comparison N-2: Standardized LoHA / Slower training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 4th epoch on both sides.

Comparison O-2: Expanded LoHA / Slower training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 1st epoch on both sides.

Comparison P-2: Standardized LoCON / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-6 / 5e-7
* Optimizer = Lion8bit
Comparison Image
The 5th epoch on both sides.

Comparison Q-2: Standardized LoHA / Slowest training
* Network Dim/Alpha = 8/4
* Conv Dim/Alpha = 4/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 4th epoch on both sides.

Comparison R-2: Expanded LoHA / Slowest training
* Network Dim/Alpha = 16/8
* Conv Dim/Alpha = 8/1
* Learning Rate = 1e-5 / 5e-6
* Optimizer = Lion8bit
Comparison Image
The 1st epoch on both sides.
#5

3. Find the best model by making them participate in a tournament.
'The Tournament' is simple. I will generate 12 topics: 6 from the learned subjects and 6 completely new ones, and repeat each with 6 seeds.
Then I compare every LoRA only against its own type: only LoCONs, only LoHA8s, only LoHA16s...
In each comparison I judge both how close the LoRA got to the style and how many broken images it generates.
From each comparison, up to two best matches receive a 60pt lime-colored circle in the corner.
The winning model is the one with the most circles.
If the winning model is the Baseline (the built-in model), then every LoRA in that bracket is disqualified from the tournament.
In the end I will reach a situation where I have exactly one LoCON, LoHA8, LoHA16, and Full model. They will then fight each other with 12 different seeds. The tournament winner is the winner overall, and its LoRA will be used as the example for training better ones.
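The circle-counting and disqualification rules are easy to mechanize; a minimal sketch (the function name is mine, and the example scores are the Tournament A/1 numbers from below):

```python
def tournament_winner(scores: dict, baseline: str = "Baseline"):
    """Return the model with the most circles, or None when the
    Baseline ties or beats every LoRA (whole bracket disqualified)."""
    contenders = {name: pts for name, pts in scores.items() if name != baseline}
    best = max(contenders, key=contenders.get)
    if scores.get(baseline, 0) >= contenders[best]:
        return None  # Baseline won or tied: no victors
    return best

print(tournament_winner({
    "STYLE_BAHNBAHN_V1SKS_LOCON16-000005": 60,
    "STYLE_BAHNBAHN_V1SKS_LOCON16L-000007": 37,
    "Baseline": 27,
}))  # -> STYLE_BAHNBAHN_V1SKS_LOCON16-000005
```

A tie with the Baseline, as in Tournament B/1, also returns None under this rule.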

Tournament will be composed of:
XYZ grid
X parameter - Prompt S/R - (first half of items),by bahnbahn,(second half of items)
Y parameter - Prompt S/R - "zoroark, feral","mew, feral","princess luna \(mlp\), feral","floatzel, feral","dragon, feral","fidget, feral","typhlosion, feral","espeon, feral","owl, anthro","lapras, feral","orca, anthro","quaquaval, feral"
Z parameter - Seed - 142, 242, 342 | 442, 542, 642 (there will be 2 images, since one image would be 350mb)

Of course, Floatzel, Dragon, and Fidget were already drawn by Bahn, while no Laprases, Orcas, or Quaquavals were to be found.


3.1. Quarter-finals.
Tournament A: Standard LoCONs
Tournament A/1: Normal LR LoCONs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. STYLE_BAHNBAHN_V1SKS_LOCON16-000005: 60
2. STYLE_BAHNBAHN_V1SKS_LOCON16L-000007: 37
3. Baseline: 27
Result: while basic Bahn is awesome by itself, the winner is: SKS-style, High Dropout Rate, ADAM, 5th epoch.

Tournament A/2: Slow LR LoCONs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. STYLE_BAHNBAHN_V1SKS_LOCON16S-000012: 69
2. STYLE_BAHNBAHN_V1SKS_LOCON16LS-000012: 55
3. Baseline: 13
Result: slowing down helped a lot; while the others are oversaturated or overtrained, the winner is: Slow Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch.

Tournament A/3: Slowest LR LoCONs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. Baseline: 62
... we can skip the others.
Result: the Baseline having greater quality means A/3 ends with no victors.

Tournament B: Standard LoHAs
Tournament B/1: Normal LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. STYLE_BAHNBAHN_V1SKS_LOHA8-000004: 46
1. Baseline: 46
2. STYLE_BAHNBAHN_V1SKS_LOHA8L-000004: 36
Result: the Baseline matching the LoRAs' quality means B/1 ends with no victors.

Tournament B/2: Slow LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. STYLE_BAHNBAHN_V1SKS_LOHA8S-000012: 61
2. STYLE_BAHNBAHN_V1SKS_LOHA8LS-000012: 22
3. Baseline: 11
Result: Close call (both S and LS were good!), but Slow Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch - wins.

Tournament B/3: Slowest LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
1. STYLE_BAHNBAHN_V1SKS_LOHA8SS-000012: 64
2. Baseline: 52
Result: barely escaping the foul line, Slowest Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch wins.

Tournament C: Expanded LoHAs
Tournament C/1: Normal LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
Result: everyone disqualified. Everything is so obviously different from the baseline that no further proof is needed.

Tournament C/2: Slow LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
Result: ...again, everyone disqualified for the same reason!

Tournament C/3: Slowest LR LoHAs
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
Result: this bracket is won by default by Slowest Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch.

Tournament D: Native Training - Slow Learners
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
Result: I honestly started checking these, but after the first seed I realized everything would simply fail. Everyone disqualified.

Tournament E: Native Training - Slowest Learners
First part (seeds 142, 242, 342) + Second part (seeds 442, 542, 642)
Result: same as D. One glance was enough to see the imperfections, and therefore everyone was disqualified.


3.2. Semi-finals.
I changed the SEED parameter to compare different situations; LoRAs must be versatile.
I also tweaked the prompts to match the themes better. Fidgets/Nimbats are for some reason not really recognized by the network, so I used 'anthro wyvern' instead. I also changed Espeon to Vaporeon, Owl to Braixen, and the absent Quaquaval to Blaziken; as far as I can remember, none of these were drawn by Bahn, and none were in the dataset.
Finally, this time only 1 item can be selected per comparison.
Y parameter - Prompt S/R - "zoroark, feral","mew, feral","princess luna \(mlp\), feral","floatzel, feral","dragon, feral","wyvern, anthro","typhlosion, feral","vaporeon, feral","braixen, feral","lapras, feral","orca, anthro","blaziken, feral"
Z parameter - Seed - 1042, 2042, 3042, 4042, 5042, 6042

Tournament A/T: Best of LoCONs
The Tournament A/T
Winner of A/1 - STYLE_BAHNBAHN_V1SKS_LOCON16-000005: 6
Winner of A/2 - STYLE_BAHNBAHN_V1SKS_LOCON16S-000012: 59
Baseline: 7
Result: Best of LOCONs - Slow Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch.

Tournament B/T: Best of LoHAs (standard)
The Tournament B/T
Winner of B/2 - STYLE_BAHNBAHN_V1SKS_LOHA8S-000012: 66
Winner of B/3 - STYLE_BAHNBAHN_V1SKS_LOHA8SS-000012: 4
Baseline: 2
Result: Best of LOHAs - Slow Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch.

Tournament C/T: Best of LoHAs (expanded)
Winner by default (no competitors!) - STYLE_BAHNBAHN_V1SKS_LOHA16SS-000012


3.3. Finals.
I have changed SEED parameter again and tweaked Y parameter once more.
Changed Wyvern to Dragon; added Deer (Deerling is a deer, after all) and Lucario, plus Goodra (feral, not in the dataset) and Night Fury (also not in the dataset, or anywhere else for that matter).
Y parameter - Prompt S/R - "zoroark, feral","mew, feral","princess luna \(mlp\), feral","floatzel, feral","dragon, feral","dragon, anthro","deer, feral","lucario, feral","typhlosion, feral","vaporeon, feral","braixen, feral","lapras, feral","orca, anthro","blaziken, feral","goodra, feral","night fury, toothless, anthro"
Z parameter - Seed - 10042, 20042, 30042, 40042, 50042, 60042, 70042, 80042, 90042, 100042, 110042, 120042

Tournament Ω: One True LoRA
The Tournament Ω part 1
The Tournament Ω part 2
1st place: STYLE_BAHNBAHN_V1SKS_LOHA8S-000012: 66
2nd place: STYLE_BAHNBAHN_V1SKS_LOCON16S-000005: 21
3rd place: STYLE_BAHNBAHN_V1SKS_LOHA16SS-000012: 9
And so the Ultimate Leader, and therefore the First Version, and therefore the First Available Soul, is: a LoHA of 8 dimensions, Slow Learning, SKS-style, High Dropout Rate, ADAM, 12th epoch.

