

Qwen Edit can take an image as a sample and work from that: a prompt like “The character in a victorious pose” takes whatever character you give it and reproduces it in a victorious pose. Couple of examples:


And a little janky because it IS generative AI after all…

Edit: and a bonus screenshot showing how little effort I had to put into this lol
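If you want to poke at the same trick outside an app, here’s a rough Python sketch using Hugging Face diffusers. The QwenImageEditPipeline class, model id, and call arguments are my best guess at the current diffusers API rather than anything from this thread, so double-check the docs before running.

```python
# Minimal sketch of prompt-guided image editing with a reference image.
# Assumes the Hugging Face diffusers QwenImageEditPipeline; verify the
# exact class name and arguments against the current diffusers docs.
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",          # model id is an assumption
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# The reference image supplies the character; the prompt describes the edit.
character = load_image("my_character.png")
result = pipe(
    image=character,
    prompt="The character in a victorious pose",
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
result.save("victorious_pose.png")
```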

Yes-ish. The base is Draw Things, and the relevant bits are at https://github.com/drawthingsai/draw-things-community?tab=readme-ov-file#cuda-capable-linux, which isn’t too difficult to set up. The app with the pretty interface is Apple only (the developer one day decided to cram the full Stable Diffusion 1.5 onto his iPhone, and that was the start of all this; the app has feature parity across the iOS, iPad, and Mac versions, and the gRPC server is “just” the generation parts decoupled from the app), but there’s a Comfy plugin to use the server.
BTW, on Apple hardware Comfy is poorly optimized, while Draw Things is optimized for it. The iPhone XR is the oldest hardware capable of on-device generation, and (with the right settings) it could do an SDXL 1024x1024 generation. 13 minutes, mind you, for 8 steps, but on a phone with only 3 GB of total system memory. On the other hand, the iPhone 17 Pro runs at about a third of the speed of my RTX 3060. There’s also a friendly Discord, and the dev clearly enjoys adding support for new, cool models (he’s quick at it) but doesn’t share roadmaps of any kind.
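To make the “decoupled gRPC server” bit concrete: once the server is running on the Linux box, anything that speaks gRPC (the Comfy plugin included) just connects to it over the network. Here’s a tiny Python connectivity check; the address/port is my assumption, use whatever you launched the server with, and the real generation RPCs come from the protos in the draw-things-community repo.

```python
# Tiny connectivity check against a locally running Draw Things gRPC server.
# The address below is a placeholder: use the host/port you started the
# server with. Actual generation calls need stubs generated from the protos
# in the draw-things-community repo; this only verifies the channel is up.
import grpc

def server_is_up(address: str = "localhost:7859", timeout_s: float = 5.0) -> bool:
    """Return True if a gRPC channel to the server becomes ready in time."""
    channel = grpc.insecure_channel(address)
    try:
        grpc.channel_ready_future(channel).result(timeout=timeout_s)
        return True
    except grpc.FutureTimeoutError:
        return False
    finally:
        channel.close()

if __name__ == "__main__":
    print("Draw Things gRPC server reachable:", server_is_up())
```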
Yeah. I really, really like that thing.