> Look at Stable Diffusion 1.5 and LLaMA: They are thriving, but the original implementations are ancient history, and Meta/StabilityAI/RunwayML have done precisely nothing.
I mean, that’s true of SD 1.5 in the sense that what the original creators have done since is new versions (SD 2.0, 2.1, and currently SDXL, which is apparently another SD2-architecture model, and DeepFloyd). 2.1 has also seen some community uptake, and XL likely will once it is released, unless there’s something inhibiting that. DF seems to be slowed by its different architecture and high resource cost, but I’ve seen posts about people integrating the DeepFloyd early-stage models with other models from the SD ecosystem for the last-stage upscaling and final rendering, so I wouldn’t be surprised to see it integrated in some of the community UIs, both as an integrated workflow and with access to the individual models for mix-and-match workflows.
I dunno. There’s some experimentation with 2.1, but the consensus seems to be that it produces inferior output to 1.5 outside of some niches, and that’s before taking the 768x768 1.5 finetunes into account.
Deepfloyd is niche.
SDXL is indeed interesting, especially if it’s happy with 4/8-bit quantization... we will see about that.
Nevertheless, StabilityAI seems kinda disconnected from all the innovation going on in the community compared to, say, Hugging Face.
It's a faulty consensus that came from people comparing outputs from the SD2.1 base model with finetunes of the SD1.5 model. SD2.1 is actually a far superior model once fine-tuned.
I agree that it's a shame that StabilityAI seems to struggle so much to actually leverage its community (ideally with much more open development)... One could say they're a little too "full of themselves" and think they know better than everyone else.