A powerful open-source CLIP-Guided Diffusion model that creates detailed, realistic images
As flexible as the VQGAN ImageNet and WikiArt models, the Disco Diffusion model creates striking images – especially abstract imagery – using colors that range from deep to vibrant, along with a characteristic grain, to produce masterpieces.
Original notebook by Katherine Crowson (https://github.com/crowsonkb, https://twitter.com/RiversHaveWings). It uses her fine-tuned 512x512 diffusion model (https://github.com/openai/guided-diffusion), together with CLIP (https://github.com/openai/CLIP) to connect text prompts with images.
Modified by Daniel Russell (https://github.com/russelldc, https://twitter.com/danielrussruss) for faster generations as well as more robust augmentations.
Further improvements from Dango233 and nsheppard raised the quality of diffusion in general, especially for the shorter runs this notebook aims to achieve.
Vark added code to load multiple CLIP models at once; every prompt is evaluated against each of them, which can greatly improve accuracy.
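The mechanism described above – using CLIP to connect text prompts with images during diffusion sampling – can be illustrated with a toy sketch. This is not Disco Diffusion's actual code: the real notebook backpropagates through CLIP to get a gradient of image-text similarity and uses it to steer each denoising step. Here a simple quadratic similarity score stands in for CLIP, and `guided_step` is a hypothetical helper showing the general idea of gradient-guided sampling.

```python
import numpy as np

def toy_clip_grad(x, text_embed):
    # Gradient of a toy similarity score -||x - text_embed||^2,
    # standing in for the gradient of CLIP's image-text similarity.
    return -2.0 * (x - text_embed)

def guided_step(x, text_embed, noise_scale=0.1, guidance_scale=0.5, rng=None):
    """One toy guided update: nudge the sample along the similarity
    gradient (toward the prompt), then add diffusion noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x + guidance_scale * toy_clip_grad(x, text_embed)   # steer toward the prompt
    x = x + noise_scale * rng.standard_normal(x.shape)      # diffusion noise
    return x

# Run a few guided steps from random noise toward a fixed "prompt" embedding.
rng = np.random.default_rng(42)
text_embed = np.ones(8)
x = rng.standard_normal(8)
start_dist = np.linalg.norm(x - text_embed)
for _ in range(20):
    x = guided_step(x, text_embed, rng=rng)
end_dist = np.linalg.norm(x - text_embed)
print(start_dist > end_dist)  # guidance pulls the sample toward the prompt
```

Evaluating against multiple CLIP models, as Vark's addition does, amounts to averaging several such guidance gradients, which tends to give a more reliable steering signal.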
This model creates 1 image per credit
Browse images created using Disco Diffusion
Sign up today and get 5 free credits to use on this and any of the other models Accomplice has to offer
Start creating with this model for free