Show HN: InvokeAI, an open source Stable Diffusion toolkit and WebUI https://ift.tt/TJ89Zix

Hey everyone! Excited to share the release of `InvokeAI 2.0 - A Stable Diffusion Toolkit`, an open source project that aims to provide both enthusiasts and professionals with a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows, Linux, and Mac (M1 & M2).

InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and has recently evolved into a full-fledged, community-driven, open source Stable Diffusion toolkit. The new version introduces an entirely new WebUI front-end with a Desktop mode, and an optimized back-end server that can be driven from the CLI or extended with your own fork.

This version also improves in-app workflows, leveraging GFPGAN and CodeFormer for face restoration and Real-ESRGAN for upscaling. In addition, the CLI supports a large variety of features:

- Inpainting
- Outpainting
- Prompt unconditioning
- Textual inversion
- Improved quality for high-resolution images (Embiggen, hi-res fixes, etc.)
- And more...

Planned future updates include UI-driven outpainting/inpainting, robust cross-attention support, and an advanced node workflow for automating and sharing your workflows with the community.

We're excited by the release, and about the future of democratizing the ability to create. Check out the repo ( https://ift.tt/bLaIAWq ) to get started, and join us on Discord ( https://ift.tt/AlnJwrv )!

October 11, 2022 at 12:48AM
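To make the ~3.5GB VRAM figure above more concrete, here is a minimal sketch of the general low-memory approach most Stable Diffusion frontends rely on (half-precision weights plus attention slicing), written against the Hugging Face diffusers library. This is not InvokeAI's own code; the model ID, prompt, and settings are illustrative assumptions.

```python
# Minimal low-VRAM Stable Diffusion sketch using diffusers (not InvokeAI code).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD 1.x model works
    torch_dtype=torch.float16,         # fp16 weights roughly halve VRAM use
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()        # compute attention in chunks to lower peak memory

image = pipe(
    "a lighthouse on a rocky coast, dramatic lighting",  # example prompt
    width=512,
    height=768,                        # the 512x768 size mentioned in the post
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```

Attention slicing trades a little speed for a smaller peak memory footprint; that same trade-off underlies most low-VRAM Stable Diffusion setups.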
