ComfyUI: loading workflow examples (a Reddit roundup)

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. I might do an issue in ComfyUI about that. You can then load or drag the following image in ComfyUI to get the workflow: ComfyUI Examples. You can see it's a bit chaotic in this case, but it works. Thank you u/AIrjen! Love the variant generator, super cool. Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

SDXL Default ComfyUI workflow. I tried to find either of those two examples, but I have so many damn images I couldn't find them. You can find the Flux Dev diffusion model weights here. You can find the Flux Schnell diffusion model weights here; this file should go in your: ComfyUI/models/unet/ folder. But when I'm doing it from a work PC or a tablet it is an inconvenience to obtain my previous workflow. Jul 6, 2024: Download the first image on this page and drop it in ComfyUI to load the Hi-Res Fix workflow. Breakdown of workflow content. Just my two cents: that, or searching Reddit; the ComfyUI manual needs updating, imo. I'll do you one better, and send you a png you can directly load into Comfy.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads image 001 from the folder, and for the next gen grabs image 002 from the same folder? That's a bit presumptuous considering you don't know my requirements. https://youtu.be/ppE1W0-LJas - the tutorial. This could lead users to put more pressure on developers. If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. This workflow can use LoRAs, ControlNets, negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Default keyboard shortcuts:

Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Backspace: Delete the current graph

This is a more complex example but also shows you the power of ComfyUI. I even have a working sdxl example in raw python on the readme. Here's a quick example where the lines from the scribble actually overlap with the pose. Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai; moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :) So, I just made this workflow in ComfyUI. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.
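Because the workflow travels inside the PNG itself, you can also pull it out without opening ComfyUI at all. Here is a minimal sketch with Pillow; the file names are placeholders, and it assumes a ComfyUI-generated PNG, which carries the UI graph in a text chunk named "workflow" and the API-format graph in one named "prompt":

```python
import json
from PIL import Image  # pip install Pillow

# ComfyUI-generated PNGs carry two text chunks: "workflow" (the full UI
# graph that drag-and-drop loads) and "prompt" (the API-format graph).
img = Image.open("workflow.png")          # hypothetical input file
meta = getattr(img, "text", img.info)     # PNG text chunks

workflow = json.loads(meta["workflow"])   # UI graph with node positions
prompt = json.loads(meta["prompt"])       # API-format graph

print(f"{len(workflow['nodes'])} nodes in the embedded workflow")
with open("workflow.json", "w") as f:     # save it as a shareable JSON
    json.dump(workflow, f, indent=2)
```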
The image blank can be used to copy (clipspace) to both the load image nodes; from there you just paint your masks, set your prompts (only the base negative prompt is used in this flow) and go. Just load your image and prompt, and go. A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). One trick I learned yesterday that makes sharing workflows easier when those include pictures and videos: use the Load Video (Path) node, post your video source online (on imgur for example), and link to it via that node with a simple URL. It's simple and straight to the point. Also notice that you can download that image and drag'n'drop it into your ComfyUI to load that workflow, and you can also drag'n'drop images onto the Load Image node to load them quicker.

And another general difference is that A1111, when you set 20 steps and 0.8 denoise, won't actually run 20 steps but rather decrease that amount to 16. I can load workflows from the example images through localhost:8188; this seems to work fine. An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. Of course with so much power also comes a steep learning curve, but it is well worth it IMHO. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s. The example pictures do load a workflow, but they don't have a label or text that indicates which version it is.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. Adding the same JSONs to the main repo would only add more hell to the commit history and an unnecessary duplicate of the already existing examples repo. Help, pls? Put the flux1-dev.sft file in your: ComfyUI/models/unet/ folder. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. It'll load a basic SDXL workflow that includes a bunch of notes explaining things.

The API workflows are not the same format as an image workflow: you'll create the workflow in ComfyUI and use the "Save (API Format)" button that sits under the Save button you've probably used before.
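To show what you would do with such an export, here is a minimal sketch that queues an API-format workflow against a locally running ComfyUI instance (the button is hidden unless dev-mode options are enabled in the settings, at least in the classic UI). The file name workflow_api.json and the node id "3" are assumptions; check your own export for the actual sampler id:

```python
import json
import urllib.request

# Load the graph exported with "Save (API Format)"; keeping it in a file
# (rather than an inline string) makes the script reusable across workflows.
with open("workflow_api.json") as f:  # hypothetical file name
    graph = json.load(f)

# API-format graphs are keyed by node id; "3" is a stand-in for whatever
# id your KSampler node has in the export.
graph["3"]["inputs"]["seed"] = 42

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address/port
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # response includes a prompt_id for tracking
```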
Still working on the whole thing, but I got the idea down. Yes, 8Gb card: the ComfyUI workflow loads both SDXL base & refiner models, a separate XL VAE, 3 XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model and input from the same base SDXL model, and they all work together. You can use 1.5 with LCM with 4 steps and 0.2 denoise to fix the blur and soft details; you can just use the latent without decoding and encoding to make it much faster, but it causes problems with anything less than 1.0 denoise, due to the VAE (maybe there is an obvious solution, but I don't know it). You can encode then decode back to a normal KSampler with a 1.0 denoise. If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box.

I recently switched from A1111 to ComfyUI to mess around with AI image generation. Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, Upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever. So every time I reconnect I have to load a presaved workflow to continue where I started. It is not much of an inconvenience when I'm at my main PC. Upscaling ComfyUI workflow. ControlNet Depth ComfyUI workflow. Create animations with AnimateDiff. Then restart ComfyUI.

The workflow in the example is passed into the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead (the sketch above does exactly that). I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL fixing it. I can't load workflows from the example images using a second computer. I see youtubers drag images into ComfyUI and they get a full workflow, but when I do it, I can't seem to load any workflows. You should now be able to load the workflow, which is here. They do overlap. This repo contains examples of what is achievable with ComfyUI. Instead, I created a simplified 2048x2048 workflow.

My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is that you create something that fits your needs. Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and use stuff entirely in latent space if you want. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. The EXIF data won't capture the entire workflow, but to quickly see an overview of a generated image, this is the best you can currently get.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated); this is done using WAS nodes. Prediffusion - this creates a very basic image from a simple prompt and sends it as a source.
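That "picks the next image when generated" behavior (and the 001/002 folder question earlier) comes down to very little logic. A toy sketch of the idea in plain Python, not any particular node's actual implementation; the folder name is made up:

```python
import glob
import os

# Toy version of a "load from folder" feeder: each queued generation
# takes the next file in sorted order (001.png, then 002.png, ...).
class FolderFeeder:
    def __init__(self, folder, pattern="*.png"):
        self.files = sorted(glob.glob(os.path.join(folder, pattern)))
        if not self.files:
            raise FileNotFoundError(f"no {pattern} files in {folder}")
        self.index = 0

    def next_image(self):
        path = self.files[self.index % len(self.files)]  # wrap at the end
        self.index += 1
        return path

feeder = FolderFeeder("input_images")  # hypothetical folder
print(feeder.next_image())  # first queued gen gets 001.png, the next 002.png
```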
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow. How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Flux.1 ComfyUI install guidance, workflow and example (Aug 2, 2024): this guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: Introduction to Flux.1; Overview of different versions of Flux.1; How to install and use Flux.1 with ComfyUI; Flux Dev. Flux Schnell is a distilled 4-step model.

If anyone else is reading this and wanting the workflows, here's a few simple SDXL workflows, using the new OneButtonPrompt nodes and saving the prompt to file (I don't guarantee tidiness). That's exactly what I ended up planning: I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. You can just use someone else's workflow of 0.9 (just search on YouTube for "sdxl 0.9 workflow"; the one from Olivio Sarikas' video works just fine), and just replace the models with 1.0 and upscalers. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Study this workflow and notes to understand the basics of ComfyUI, SDXL, and the Refiner workflow. Same workflow as the image I posted, but with the first image being different. After studying the nodes and edges, you will know exactly what Hi-Res Fix is. But let me know if you need help replicating some of the concepts in my process. Really happy with how this is working. It's nothing spectacular but gives good consistent results without… It's just not intended as an upscale from the resolution used in the base model stage. Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM.

I tried to find a good Inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. Nobody needs all that, LOL. If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched; I think it was 3DS Max. For ComfyUI there should be license information for each node, in my opinion ("Commercial use: yes, no, needs license"), and a workflow using non-commercial nodes should show some warning in red. I actually just released an open source extension that will convert any native ComfyUI workflow into executable Python code that will run without the server.

I use a Google Colab VM to run ComfyUI. I can load the ComfyUI interface through 192.168.1:8188, but when I try to load a flow through one of the example images it just does nothing. Any ideas on this? I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into an empty ComfyUI. I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; looks like the metadata is not complete. If you asked about how to put it into the PNG, then you just need to create the PNG in ComfyUI and it will automatically contain the workflow as well.
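If an image has been stripped (re-encoded by an image host, for example), you can re-attach a saved workflow yourself so drag-and-drop works again. A small sketch with Pillow, assuming workflow.json is a UI-format export and with made-up file names; ComfyUI looks for a PNG text chunk literally named "workflow":

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Re-attach a saved workflow to an image that lost its metadata,
# so dragging it into ComfyUI loads the graph again.
img = Image.open("stripped.png")         # hypothetical stripped image
meta = PngInfo()
with open("workflow.json") as f:
    meta.add_text("workflow", f.read())  # the chunk ComfyUI looks for
img.save("restored.png", pnginfo=meta)
```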
Using the Comfy image saver node will add EXIF fields that can be read by IIB, so you can view the prompt for each image without needing to drag/drop every single one. Img2Img ComfyUI workflow. Merging 2 Images together. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. I couldn't find the workflows to directly import into Comfy. Besides, by recording the precise "workflow" (= the collection of interconnected nodes), you even get reasonably good reproducibility: if you load the workflow and change nothing (including the seed), you should get exactly the same result. I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/.

4 - The best workflow examples are through the github examples pages. ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag'n'dropping a picture from that repo. If you have any of those generated images in original PNG, you can just drop them into ComfyUI and the workflow will load. You can then load or drag the following image in ComfyUI to get the workflow: Load Image Node. ComfyUI needs a standalone node manager imo, something that can do the whole install process and make sure the correct install paths are being used for modules. And if you copy it into ComfyUI, it will output a text string which you can then plug into your 'Clip text encoder' node, and it is then used as your SD prompt. If the term "workflow" is something that has only been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes". This is just a simple node build off what's given and some of the newer nodes that have come out.

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.
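In plain Python, the described cap behaves like a simple slice. A toy illustration of the behavior as quoted above, not the actual node code, with a made-up directory name:

```python
import glob
import os

# image_load_cap as described: 0 means "load every frame"; any other
# value caps how many frames are read, which sets the animation length.
def load_frames(directory, image_load_cap=0):
    frames = sorted(glob.glob(os.path.join(directory, "*.png")))
    return frames if image_load_cap == 0 else frames[:image_load_cap]

print(len(load_frames("extracted_frames", image_load_cap=16)))  # at most 16
```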
