SDXL uses a double pass: there's a base model and a refiner model, and you shouldn't use the refiner model alone. You need a proper workflow, or a finetuned model that doesn't use the refiner.

It would require many specific image-manipulation nodes to cut out the image region, pass it through the model, and paste it back. What is the best workflow you know of?

Outpainting is the same thing as inpainting. They are done with inpaint nodes that apply Navier-Stokes infill to the masked areas. A default value of 6 is good in most cases. Creating such a workflow with only the default core nodes of ComfyUI is not possible at the moment.

Created by Prompting Pixels: Basic Outpainting Workflow. Outpainting shares similarities with inpainting, primarily in that it benefits from utilizing an inpainting model trained on partial-image datasets for the task. So the input image changes quite a bit. There is a "Pad Image for Outpainting" node to automatically pad the image for outpainting while creating the proper mask. In this example this image will be outpainted, using the v2 inpainting model and the "Pad Image for Outpainting" node (load it in ComfyUI to see the workflow).

I've noticed that after about 8 or so keyframes a vignette appears around the edge, and this gradually turns each keyframe after that progressively darker, even though I'm specifying "daytime", so by 20 frames it looks like a fake night/day image.

The 1.5-inpainting model is still the best for outpainting, and the prompt and other settings can drastically change the quality.

In 1111, using image-to-image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same names as the main video frames they will be associated with those frames during batch processing.

Next, install RGThree's custom node pack from the Manager.

If necessary, updates of the workflow will be made available on GitHub. Here is my workflow. There is also an UltimateSDUpscale node suite (as an extension).

A good place to start if you have no idea how any of this works is the ComfyUI examples repo. Because some bits of ComfyUI automatically update (when launched, for example), you will sometimes find it in a broken state.

Nobody answered, and I got downvoted just for asking.

This pack includes a node called "power prompt". The power prompt node replaces your positive and negative prompts in a Comfy workflow.

I really like the drag-and-drop PNG workflow method; you can have LoRA, inpainting, outpainting, and ControlNet workflows open in different browser tabs. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
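That drag-and-drop trick works because ComfyUI embeds the graph as JSON text chunks inside its output PNGs. A minimal sketch of reading it back outside the UI, assuming Pillow is installed and the file came from a stock SaveImage node:

```python
import json
from PIL import Image  # pip install Pillow

def load_embedded_workflow(png_path: str) -> dict:
    """Read the workflow JSON that ComfyUI embeds in its output PNGs."""
    info = Image.open(png_path).info
    # ComfyUI typically writes two text chunks: "workflow" (the editor graph)
    # and "prompt" (the API-format graph); files re-saved by an image editor
    # may have lost both.
    raw = info.get("workflow") or info.get("prompt")
    if raw is None:
        raise ValueError("no ComfyUI metadata found in this PNG")
    return json.loads(raw)
```

This is also why screenshots or re-encoded JPEGs can't be dragged onto the window: the text chunks don't survive.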
Reference-only and LCM for promptless outpainting, img2img, style transfer, and image blending (interesting results at fast speeds).

It's conflicting with Comfyui-Inference-core-nodes, which looks like it's just copying a bunch of nodes from all over the place into a single repo. As long as you don't have that installed, you should be fine.

GitHub - gokayfem/ComfyUI-Texture-Simple: Visualize your textures inside ComfyUI.

Don't mind the lines going down; you don't need them. They just connect to another KSampler for some experimental stuff. Is this what it takes to use the workflow? No.

The alpha version is now online for limited early access.

This workflow allows you to change clothes or objects in an existing image. If you know the required style, you can work with the IP-Adapter and upload a reference image. And if you want to get new ideas or directions for a design, you can create a large number of variations in a process that is mostly automatic.

I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively. Update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints folder to your Comfy models SVD folder, and just delete the ComfyUI-SVD custom nodes.

The Clipdrop "uncrop" gave really good results. This repo contains examples of what is achievable with ComfyUI. If anybody has tips or resources to share, it would be really helpful! I haven't seen any working posts on outpainting anime scenes or outpainting vid2vid, just research papers I don't understand.

This workflow shows you how, and it also adds a final pass with the SDXL refiner to fix any possible seam line generated by the inpainting process. I found I could reduce the breaks by tweaking the values and schedules for the refiner. You're using a 1.5 model.

Currently I have the Lora Stacker from Efficiency Nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now.

Update 8/28/2023: thanks to u/wawawa64 I was able to get a working, functional workflow that looks like this! If you have any examples or workflows for it, I would love to take a look!

I want to create a parametric outpainting mask for AI image generation. I am stuck, since with certain combinations of parameters I get white borders around the image (the mask stretches to cover the final image) because the mask ratio differs from the ratio of the latent space (partially because each side of the latent space must be a multiple of m, since at the start the latent space is 1/m of the output size).
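For that parametric mask, the core is just a padded canvas with a feathered ramp where new pixels meet old, with the canvas kept on the same multiple-of-m grid the latent expects. A minimal NumPy sketch (the parameter names are mine, not from any particular node):

```python
import numpy as np

def outpaint_mask(h, w, left=0, top=0, right=0, bottom=0, feather=40):
    """Float mask for a padded canvas: 1 = generate, 0 = keep original.

    The original h x w image sits inside the padded canvas, and the mask
    ramps from 1 to 0 over `feather` pixels inside each padded edge.
    Keep (h + top + bottom) and (w + left + right) multiples of 8 (the SD
    VAE stride), or the mask ratio drifts from the latent ratio as above.
    """
    H, W = h + top + bottom, w + left + right
    mask = np.ones((H, W), dtype=np.float32)
    inner = np.zeros((h, w), dtype=np.float32)
    if feather > 0:
        ramp = np.linspace(1.0, 0.0, feather, dtype=np.float32)
        if left:
            inner[:, :feather] = np.maximum(inner[:, :feather], ramp)
        if right:
            inner[:, -feather:] = np.maximum(inner[:, -feather:], ramp[::-1])
        if top:
            inner[:feather, :] = np.maximum(inner[:feather, :], ramp[:, None])
        if bottom:
            inner[-feather:, :] = np.maximum(inner[-feather:, :], ramp[::-1][:, None])
    mask[top:top + h, left:left + w] = inner
    return mask
```

Because every dimension is an explicit argument, the same function covers any pad direction or aspect ratio without hand-editing a mask image.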
Product photo relighting workflow: start from an existing picture or generate a product, segment the subject via SAM, generate a new background, relight the picture, and keep the finer details.

The BEST ComfyUI Workflow for Inpainting & Outpainting. We call it Combix.

Searching on the internet, I saw the LAION face ControlNet model that is built for this purpose, but it works on SD2.1 (as far as I saw). I asked about video outpainting some days ago.

[RESOLVED! - corrected nodes] Hi, I've been learning how to do infinite zoom videos using outpainting. Obviously the outpainting at the top has a harsh break in continuity, but the outpainting at her hips is ok-ish. For instance, if I pad the left side of a 768x768 image by an extra 1000 pixels, I get a very different result than doing it in five 200 px increments, but the graph becomes very spaghetti very fast doing it that way. Also, it can be very difficult to get the position and prompt right for the conditions.

It would be nice to enhance the linked-ID-based Sender & Receiver to also have a filename- and folder-based Sender & Receiver.

First, it is not pasting back the original image (like with your product workflow). Second, it is important to note that for now it works only with white-background image inputs.

The inpaint_only+lama ControlNet in A1111 produces some amazing results. Is there anything similar available in ComfyUI? I'm specifically looking for an outpainting workflow that can match the existing style and subject matter of the base image, similar to what LaMa is capable of.

Pretty much the title.

Here is my take on a regional prompting workflow with the following features: 3 adjustable zones, set by 2 position ratios, and a vertical/horizontal switch. Area Composition Examples | ComfyUI_examples (comfyanonymous.github.io)

The experimental LCM workflow "The Ravens" for Würstchen v3, aka Stable Cascade, is up and ready for download. Explore new ways of using the Würstchen v3 architecture and gain a unique experience that sets it apart from SDXL and SD1.5.

I just published a YouTube tutorial showing how to leverage the new SDXL Turbo model inside ComfyUI for creative workflows.

I made a workflow for animated outpainting with static cameras and low resolutions; it works well but has these limitations. It can create coherent animated outpaintings from the initial video.

Outpainting with ControlNet and the Photopea extension (fast, with low resources, and easy) - Tutorial | Guide. I see many people talking about this, so I'm going to give you my easy workflow.

There are other branches available on git; you can check out specific commits. I have also experienced ComfyUI losing individual cable connections for no comprehensible reason, or nodes not working until they were replaced by the same node with the same wiring.

These are the only nodes you need in my workflow. When outpainting in ComfyUI, you'll pass your source image through the Pad Image for Outpainting node. This node can be found in the Add Node > Image > Pad Image for Outpainting menu. The node allows you to expand a photo in any direction, along with specifying the amount of feathering to apply to the edge.
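In the API ("prompt") format that ComfyUI saves alongside the editor graph, that pad node is a single entry. A sketch of what it looks like as a Python dict; the node id "10" for the upstream image loader is hypothetical:

```python
# One node from an API-format ComfyUI prompt, written as a Python dict.
pad_node = {
    "class_type": "ImagePadForOutpaint",  # core node behind "Pad Image for Outpainting"
    "inputs": {
        "image": ["10", 0],  # [source node id, output slot] - "10" is a hypothetical LoadImage
        "left": 256,         # pixels added on each side
        "top": 0,
        "right": 256,
        "bottom": 0,
        "feathering": 40,    # softens the seam between old and new pixels
    },
}
# The node outputs (IMAGE, MASK); the mask typically feeds a
# VAEEncodeForInpaint node downstream.
```

Scripting the graph this way is also the easiest route to the incremental approach above: queue the same pad-plus-sample step several times with small `left` values instead of one huge pad.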
I'm especially interested if there's anything that is good at humans: given someone's torso, generate legs through outpainting that look natural and match their body. Just looking for a ComfyUI workflow for outpainting using reference-only for the prompt, or promptless outpainting for SDXL.

I've been working really hard to make LCM work with KSampler, but the math and code are too complex for me, I guess. Basically the author of LCM (simianluo) used a diffusers model format, and that can be loaded with the deprecated UnetLoader node.

In my workflow I can recreate the same loop 5 times to get a nicer outpaint in smaller segments rather than in big jumps. So I tried to create the outpainting workflow from the ComfyUI example site.

Any suggestions? Outpainting: works great, but it is basically a rerun of the whole thing, so it takes twice as much time.

I just installed SDXL 0.9 and ran it through ComfyUI.

Using text has its limitations in conveying your intentions to the AI model. ControlNet, on the other hand, conveys them in the form of images. However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the interpretation of the AI model and ControlNet's enforcement can lead to a degradation in quality.

What's a good workflow for outpainting? The few ComfyUI outpainting workflows I've found don't perform well.

I'll make this more clear in the documentation.

Is there a way to copy normal webUI parameters (the usual PNG info) into ComfyUI directly with a simple Ctrl+C/Ctrl+V? Dragging and dropping 1111 PNGs into ComfyUI works most of the time.

You can do infinite zoom animations using an outpainting workflow and the Impact custom nodes (ImageSender & ImageReceiver).

TL;DR question: I want to take a 512x512 image that I generate in txt2img and then, in the same workflow, send it to ControlNet inpaint to make it 740x512 by extending the left and right sides of it.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. This workflow chains together multiple IPAdapters, which allows you to change one piece of the AI Avatar's clothing individually. You can see the underlying code here.

Dear fellow ComfyUI enthusiasts, a couple of friends of mine and I have been working on a hassle-free online version of ComfyUI. We aim to take care of the complicated setup work for you and let you spend your time only on creating fun stuff.

Details on how to use the workflow are in the workflow link. I didn't say my workflow was flawless, but it showed that outpainting generally is possible. However, this can be clarified by reloading the workflow or by asking questions. I demonstrate this process in a video if you want to follow along.

Hey Reddit community! I'm currently working on a fashion-niche website where clients can upload their clothing items, and we provide them with a set of high-quality photos in various poses to showcase their products. However, I'm facing a bit of a challenge when it comes to generating…

The grow_mask_by setting adds padding to the mask to give the model more room to work with, and it provides better results. This is because the outpainting process essentially treats the image as a partial image by adding a mask to it.
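Under the hood, growing the mask is essentially a morphological dilation applied before the VAE encode. A rough stand-alone equivalent of what grow_mask_by does, assuming SciPy is available (ComfyUI's own implementation differs in detail):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_mask(mask: np.ndarray, grow_by: int = 6) -> np.ndarray:
    """Expand a binary inpaint mask outward by `grow_by` pixels.

    The extra ring of masked pixels gives the model room to blend the new
    content into its surroundings, which is why the default of 6 works
    well in most cases.
    """
    return binary_dilation(mask > 0.5, iterations=grow_by).astype(np.float32)
```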
(Possibly for Automatic1111, but I only use ComfyUI now.) I had seen a tutorial method a while back that would allow you to upscale your image by grid areas, potentially letting you specify the "desired grid size" on the output of an upscale and how many grids (rows and columns) you wanted. There are a lot of upscale variants in ComfyUI.

Being the control freak that I am, I took the base+refiner image into Automatic1111 and inpainted the eyes and lips. Then I ported it into Photoshop for further finishing: a slight gradient layer to enhance the warm-to-cool lighting, and the Camera Raw Filter to add just a little sharpening.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a LoRA stacker node.

GitHub - Suzie1/ComfyUI_Comfyroll_CustomNodes: Custom nodes for SDXL and SD1.5, including Multi-ControlNet, LoRA, Aspect Ratio, Process Switches, and many more nodes.

Release: AP Workflow 9.0 for ComfyUI - Now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand-new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, 2 types of automatic image selectors, and the capability to automatically generate captions for an image directory.

Hello! I am currently trying to figure out how to build a crude video-inpainting workflow that will allow me to create rips or tears in the surface of a video, so that I can make a video that looks similar to a paper collage: in the hole of the "torn" video you can see an animation peeking through. I have included an example of the type of masking I am imagining.

This workflow allows you to load images of an AI Avatar's face, shirt, pants, and shoes, plus a pose, and generates a fashion image based on your prompt.

Install ComfyUI. Install ComfyUI Manager. Now you can manage custom nodes within the app.

Does anyone have any links to tutorials for "outpainting" or "stretch and fill" - expanding a photo by generating noise via a prompt while matching the photo? I've done it in Automatic1111, but it hasn't given the best results; I could spend more time and get better, but I've been trying to switch to ComfyUI.

I have a vid2vid workflow that runs SD1.5 using 2 ControlNets for the body, but I need another ControlNet for the facial expressions. It animates 16 frames and uses the looping context options to make a video that loops. There's an SD1.5 and an SDXL version.

Outpainting | Expand Image: the image-outpainting workflow presents a comprehensive process for extending the boundaries of an image through four main steps, starting with the preparation for outpainting, using a ControlNet inpainting model for the outpainting process itself, evaluating the initial output, and concluding with edge repair to ensure a seamless result.

Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5. Any help or guidance would be greatly appreciated! Thanks!!

If I start from a cleared workflow and drag the preferred workflow onto the stage, it takes about 30 seconds. If I generate consecutive renders (1 image per queue prompt) after that, it takes about 11 seconds. I just did a screenshot, if you don't mind.

Has anyone found a good outpainting solution in ComfyUI? I've tried three or four workflows I found online (following tutorials, mostly) and found the generations have nothing to do with the image being outpainted, meaning I haven't found a solution for continuity.

ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models. So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your versions of the models that you see in the picture.
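On the "stretch and fill" question above: one low-tech prefill trick is to extend the canvas with noise centered on the photo's mean color before the img2img pass, so the sampler starts from something tonally close to the source. A sketch with Pillow and NumPy (a generic recipe, not any specific node's behavior):

```python
import numpy as np
from PIL import Image

def pad_with_noise(img: Image.Image, right: int = 256, sigma: float = 25.0) -> Image.Image:
    """Extend the canvas to the right, prefilling the new region with
    noise centered on the photo's mean color."""
    src = np.asarray(img.convert("RGB")).astype(np.float32)
    h, w, _ = src.shape
    canvas = np.zeros((h, w + right, 3), dtype=np.float32)
    canvas[:, :w] = src                    # keep the original pixels
    mean = src.mean(axis=(0, 1))           # average color of the source photo
    canvas[:, w:] = np.clip(np.random.normal(mean, sigma, (h, right, 3)), 0, 255)
    return Image.fromarray(canvas.astype(np.uint8))
```

The prefilled strip then gets masked and denoised at high strength, which tends to blend better than outpainting over flat gray or white.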
I've spent countless hours testing and refining ComfyUI nodes. There are dozens of parameters for SD outpainting, and the biggest factor is the checkpoint used.

In the video, I go over how to set up three workflows: text-to-image, image-to-image, and high-res image upscaling.

Release: AP Workflow 7.0 for ComfyUI - Now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (with inpainting/outpainting masks), a new Watermarker, support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masks.

Updated: Inpainting only on the masked area, outpainting, and seamless blending (includes custom nodes, workflow, and video tutorial). Get ready to take your image editing to the next level!

Welcome to open-source "bleeding edge" software. This usually gets fixed in a short while.

I think DALL-E 3 does a good job of following prompts to create images, but Microsoft Image Creator only supports 1024x1024 sizes, so I thought it would be nice to outpaint with ComfyUI.
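Driving that kind of job (outpainting a folder of 1024x1024 DALL-E images) is easiest against ComfyUI's HTTP API: every queued render is one POST to the local server. A minimal sketch, assuming a default install listening on 127.0.0.1:8188:

```python
import json
import urllib.request

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Submit an API-format workflow (the "prompt" JSON) to a running ComfyUI."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes a prompt_id you can poll via /history/<id>.
        return json.loads(resp.read())
```

To get a dict this endpoint accepts, export your graph with "Save (API Format)" in the ComfyUI menu (visible once dev-mode options are enabled), then loop over your image folder, patching the LoadImage node's input each iteration.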