Latest updates: 4 new models, big improvements to drafts, and ThisIllustrationDoesNotExist.com
📣 Accomplice product updates going into Thanksgiving
4️⃣ Huge enhancement: 4 new models!
The three most common pieces of feedback I’ve gotten since Accomplice launched:
“I wish Accomplice was…
- better at faces”
- more photorealistic”
- better at animals”
And I knew getting better at these things meant supporting more models. So I got to work! But first, what is a model? At its simplest, a model is a program that has been trained on a set of data – AI isn't possible without lots and lots of data – and its job is usually either to recognize patterns in that data or to do something with it. In Accomplice's case, the thing a model does with its data is create images.
To do that, an image-creation model is trained on thousands and thousands of images. It then uses what it has learned about those images – combined with what it knows about the words and sentences you type – to create something new.
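To make that loop a little more concrete, here's a toy sketch of the kind of "guess, score, refine" cycle described above. Every name and number here is illustrative only – these are stand-ins, not Accomplice's actual internals:

```python
import random

def init_noise(size=4):
    # Start from random pixels (a stand-in for a latent image).
    return [random.random() for _ in range(size)]

def similarity(image, prompt):
    # Stand-in for a "how well does this image match the text?" score;
    # a real model compares learned image and text features.
    return sum(image) / len(image)

def refine(image, score):
    # Nudge the image toward a higher score (a stand-in for the
    # optimization step a real model takes on each iteration).
    return [min(1.0, px + 0.01) for px in image]

def generate(prompt, iterations=200):
    image = init_noise()
    for _ in range(iterations):
        image = refine(image, similarity(image, prompt))
    return image
```

The important idea is the shape of the process: the model starts from noise and repeatedly improves the image, which is also why iteration counts (more on those below) matter so much.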
The goal with Accomplice has always been to support multiple models that would then allow for more types of images.
But, in the beginning, Accomplice only used the ImageNet model, both because it's widely considered one of the most versatile VQGAN image models and because the goal in the early days was simply to get something out the door that worked!
Today, however, I'm very happy to announce Accomplice’s support for 4 more image models, officially quintupling the number of unique designs and illustrations you can create!
🎨 Wikiart: painting and illustration
Wikiart is an open source model built only on works of art that are in the public domain. You'll get a lot of wonderful broad brush strokes and the unmistakable look of paint on canvas with Wikiart.
📸 Sflckr: oftentimes more photorealistic
Sflckr references only Creative Commons photos from Flickr, and you can get some truly amazing, oftentimes much more photorealistic results with it.
🤪 FacesHQ: better at faces
Faces! Nothing but faces is what FacesHQ has been trained on. It might take a handful of tries – and, as always, if you're looking for something specific, an image prompt helps – but the results you can get with FacesHQ are worlds better than what ImageNet was previously able to do.
🐶 D-RIN: oftentimes better at animals
AI still isn't amazing at animals – they're hard! – but D-RIN is a more animal-heavy model that often does better than the default ImageNet model, especially when you use an image prompt as a starting point, set quality to fast, and maybe try a few times until you get something you like.
So, now, when you create a prompt, you'll see a new Model dropdown.
The models aren't a silver bullet! They are still just doing the best they can with what they have and the direction you provide them. But they are a great first glimpse at where Accomplice is going: more models means more chances at getting what you're looking for!
And who knows, one day soon, maybe you'll be able to use Accomplice to quickly and easily create your own models that create exactly what you and your team are looking for using a simple drag-and-drop interface… 🤔
⚡️ Drafts -> “Fast” quality prompts
So, you know how when you use just a text prompt, you’re almost always happiest with iteration #180 or #200? Sure, there are those times when an earlier iteration is better, but more often than not it's the last couple that are your best bet.
But have you also noticed, when using a text prompt combined with the new image prompt feature that was released a few weeks ago, how often you end up using iteration #20 or #40 or #60 instead? (especially when you’re looking for a “style transfer” look)
In fact, I started using "draft" quality more often and – I'm sure like a lot of you – wishing it acted just like a normal prompt so that I could catalog, archive, and share it.
Well now you can because “drafts” are now just “fast” prompts.
From now on there are just fast (60 iterations) and normal (200 iterations) prompts, since – especially when using an image prompt as a reference – a draft was often better than normal for style transfer, and therefore not really a draft at all anymore.
And now fast prompts will be treated the same as normal prompts as far as collections, archiving, and everything else goes!
🔌 The Accomplice API is in beta (and so is ThisIllustrationDoesNotExist.com)
The Accomplice API is officially in beta, and the – also in beta – documentation is now available here.
The possible applications of being able to create images on the fly with just a piece of code are endless!
And to show off a little bit of what you can do with the Accomplice API I created ThisIllustrationDoesNotExist.com
It pulls a random design or illustration from the Accomplice community on every refresh.
It's just a small, fun, proof of concept of what the Accomplice read-only API can do, but if you and your team have something bigger in mind, please reach out to learn more about the Accomplice API.
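For the curious, fetching a random community illustration the way ThisIllustrationDoesNotExist.com does might look something like the sketch below. Note that the endpoint path, base URL, and response field are my placeholder assumptions for illustration – check the beta documentation for the real shapes:

```python
import json
from urllib.request import urlopen

# Hypothetical base URL and endpoint - placeholders, not the documented API.
API_BASE = "https://accomplice.ai/api/v1"

def random_illustration_url(fetch=None):
    """Return the image URL of a random community illustration.

    `fetch` lets you inject a stub for testing; by default it performs
    a real HTTP GET against the (assumed) read-only endpoint.
    """
    if fetch is None:
        def fetch(url):
            with urlopen(url) as resp:
                return resp.read()
    raw = fetch(f"{API_BASE}/illustrations/random")
    return json.loads(raw)["image_url"]
```

From there, a page like ThisIllustrationDoesNotExist.com just drops that URL into an `<img>` tag on every refresh.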
🔅 Community highlights
As always, there was so much new to announce that I didn't get to everything. But you might also notice it now takes fewer steps to move prompts to and from the Archive, and there's a brand new marketing homepage that does a much better job of talking about and showing off the product, as well as all the amazing work everyone is creating with it. 💪🏼
Thanks for reading and supporting Accomplice! Have a great Thanksgiving and never hesitate to reach out and let me know what you think about any of the new features or really anything else at all! 🦃🍂