Stable Diffusion XL

Stability AI’s latest Stable Diffusion model, first proposed in the paper "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis" by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach.


The abstract from the paper is:

We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators.
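
As a concrete illustration of the base-plus-refiner workflow the abstract describes, the following minimal sketch uses the Hugging Face diffusers library with Stability AI's publicly released SDXL checkpoints. This example is an addition for illustration, not part of the paper or this page; the prompt and output filename are placeholders.

    # Minimal sketch (assumes diffusers, torch, and a CUDA GPU are available).
    import torch
    from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

    # Base model: generates the initial image from the text prompt.
    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # Refiner model: the post-hoc image-to-image stage mentioned in the abstract,
    # used to improve the visual fidelity of the base model's output.
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "an astronaut riding a horse, detailed photograph"  # placeholder prompt

    # Run the base model, then pass its output through the refiner.
    image = base(prompt=prompt).images[0]
    image = refiner(prompt=prompt, image=image).images[0]
    image.save("sdxl_output.png")

The refiner pass is optional: the base pipeline alone produces complete images, and the refinement step corresponds to the post-hoc image-to-image technique described above.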


External Links

Please note that the content of external links is not endorsed or verified by us and can change without notice. Use at your own risk.

https://huggingface.co/papers/2307.01952