
SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis: Acknowledgements


Authors:

(1) Dustin Podell, Stability AI, Applied Research;

(2) Zion English, Stability AI, Applied Research;

(3) Kyle Lacey, Stability AI, Applied Research;

(4) Andreas Blattmann, Stability AI, Applied Research;

(5) Tim Dockhorn, Stability AI, Applied Research;

(6) Jonas Müller, Stability AI, Applied Research;

(7) Joe Penna, Stability AI, Applied Research;

(8) Robin Rombach, Stability AI, Applied Research.

Table of Links

Abstract and 1 Introduction

2 Improving Stable Diffusion

2.1 Architecture & Scale

2.2 Micro-Conditioning

2.3 Multi-Aspect Training

2.4 Improved Autoencoder and 2.5 Putting Everything Together

3 Future Work


Appendix

A Acknowledgements

B Limitations

C Diffusion Models

D Comparison to the State of the Art

E Comparison to Midjourney v5.1

F On FID Assessment of Generative Text-Image Foundation Models

G Additional Comparison between Single- and Two-Stage SDXL pipeline

H Comparison between SD 1.5 vs. SD 2.1 vs. SDXL

I Multi-Aspect Training Hyperparameters

J Pseudo-code for Conditioning Concatenation along the Channel Axis

References

A Acknowledgements

We thank all the folks at Stability AI who worked on comparisons, code, etc., in particular: Alex Goodwin, Benjamin Aubin, Bill Cusick, Dennis Nitrosocke Niedworok, Dominik Lorenz, Harry Saini, Ian Johnson, Ju Huo, Katie May, Mohamad Diab, Peter Baylies, Rahim Entezari, Yam Levi, Yannik Marek, and Yizhou Zheng. We also thank ChatGPT for providing writing assistance.


This paper is available under the CC BY 4.0 DEED license.
