Learn Dampening
The goal of [[training]] is (generally) to fit in as many Steps as possible without [[Overfitting|Overcooking]]. Certain settings, by design or by coincidence, "dampen" learning, allowing us to train more steps before the [[Low-Rank Adaptation|LoRA]] appears [[Overfitting|Overcooked]]. Some settings which affect Dampening include Network Alpha and Noise Offset.
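A minimal sketch of how Network Alpha dampens learning, assuming the common LoRA convention where the low-rank update is scaled by network_alpha / network_rank (as used in kohya-ss sd-scripts and PEFT); the class and variable names here are illustrative, not taken from any particular trainer.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: the low-rank update is scaled by
    alpha / rank, so an alpha smaller than the rank reduces how strongly
    the trained weights alter the output, i.e. it dampens learning."""

    def __init__(self, base: nn.Linear, rank: int = 16, network_alpha: float = 8.0):
        super().__init__()
        self.base = base                     # frozen pretrained layer
        self.base.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)   # LoRA "A"
        self.up = nn.Linear(rank, base.out_features, bias=False)    # LoRA "B"
        nn.init.zeros_(self.up.weight)       # start with no change to the base model
        self.scale = network_alpha / rank    # e.g. 8 / 16 = 0.5 -> half-strength updates

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.up(self.down(x)) * self.scale

# alpha == rank gives scale 1.0 (no dampening); alpha = rank / 2 halves the
# effective contribution, letting you train more steps before Overcooking.
layer = LoRALinear(nn.Linear(768, 768), rank=16, network_alpha=8.0)
y = layer(torch.randn(1, 768))
</syntaxhighlight>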
[[Category:Training]] |