LoRA

From Civitai Wiki
'''LoRA (Low-Rank Adaptation)''' is a training technique for fine-tuning Stable Diffusion models. It addresses the trade-off between model file size and training power, which makes it especially useful for AI art creation: users can customize models without placing a heavy burden on local storage. LoRA works by applying small updates to the cross-attention layers of a Stable Diffusion model, the part of the model where the image and the prompt interact. Instead of storing full-size weight updates, LoRA decomposes them into pairs of much smaller low-rank matrices, which greatly reduces file size while retaining most of the training power. This makes LoRA a practical choice for individuals and organizations exploring stylistic adaptations and creative variations of a base model.
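
The low-rank idea can be sketched in a few lines of Python. This is a minimal illustration only, not the implementation used by Stable Diffusion trainers or by Civitai; the dimensions, rank <code>r</code>, and scaling <code>alpha</code> below are arbitrary example values.

<syntaxhighlight lang="python">
import torch

# Illustrative sketch of a LoRA-style low-rank update.
# The frozen weight W stays untouched; only the two small
# factors A and B are trained and saved in the LoRA file.
d, k, r = 768, 768, 8            # layer dimensions and LoRA rank (r << d, k)
W = torch.randn(d, k)            # frozen pretrained weight of one layer
A = torch.randn(r, k) * 0.01     # small trainable factor
B = torch.zeros(d, r)            # small trainable factor, initialized to zero
alpha = 8.0                      # scaling applied to the low-rank update

def lora_forward(x: torch.Tensor) -> torch.Tensor:
    # Original projection plus the scaled low-rank update (B @ A).
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# Only A and B need to be stored: d*r + r*k values
# instead of the full d*k weight matrix.
print(W.numel(), A.numel() + B.numel())
</syntaxhighlight>

Because the rank <code>r</code> is much smaller than the layer dimensions, the saved file holds only a small fraction of the parameters of the full model, which is why LoRA files are compact compared with full fine-tuned checkpoints.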
See also [[Low-Rank Adaptation]].
[[Category:Generative AI]]
