What is ComfyUI? An overview of the GitHub examples

ComfyUI is a node-based GUI for Stable Diffusion and, more broadly, a powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface. At its core, ComfyUI serves as a bridge between the user and the underlying AI algorithms, making these powerful tools accessible to a much wider audience. By chaining different blocks (called nodes) together, you can construct an image generation workflow without needing to write any code; commonly used blocks include loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements so you can easily make your own, all old workflows can still be used, and only the parts of a workflow that change between executions are re-executed.

The ComfyUI_examples repository (comfyanonymous/ComfyUI_examples) shows what is achievable. All the images in that repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to recover the full workflow that was used to create them.

ComfyUI also supports prompt scheduling. For example, the prompt a dog, [full body:fluffy:0.3] uses "a dog, full body" during the first 30% of sampling and "a dog, fluffy" during the last 70%.
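The scheduling syntax above can be sketched as simple step arithmetic:

```python
# Sketch of how a schedule like [full body:fluffy:0.3] maps onto sampler
# steps; the exact rounding ComfyUI uses internally is an assumption here.
def switch_step(total_steps: int, fraction: float) -> int:
    """Number of steps sampled with the 'before' text."""
    return round(total_steps * fraction)

steps = 20
cut = switch_step(steps, 0.3)   # first 6 of 20 steps -> "a dog, full body"
remaining = steps - cut         # last 14 of 20 steps -> "a dog, fluffy"
```

Because general shapes are decided early in sampling, the first prompt dominates composition and the second dominates texture and detail.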
Flux is a family of diffusion models by Black Forest Labs, including Flux.1-dev, an open-source text-to-image model. Setting up ComfyUI on a Windows computer to run Flux.1-schnell requires the text encoders and VAE first: if you don't already have t5xxl_fp16.safetensors (fp16 is recommended; the fp8_scaled version if you don't have that much memory) and clip_l.safetensors in your ComfyUI/models/text_encoders/ directory, download them. For an easier setup there are also single-file FP8 checkpoint versions that can be used like any regular checkpoint.

Several Flux pitfalls recur in the issue tracker. The regular KSampler is incompatible with FLUX, and turning up CFG tends to blur the image; instead, you can use the Impact/Inspire Pack's KSampler with a Negative Cond Placeholder. Users who get what looks like random RGB noise from the dev model, or barely visible, very noisy images from the schnell model or the fp8 checkpoint, have often hit a broken text encoder path — for example, a ComfyUI update that broke the "CLIPLoader (GGUF)" node.
Installation: follow the ComfyUI manual installation instructions for Windows and Linux, install the dependencies (if you have another Stable Diffusion UI you might be able to reuse them), and launch ComfyUI by running python main.py. Note that --force-fp16 will only work if you installed the latest pytorch nightly. Windows users can instead use the standalone build from the ComfyUI page, which bundles Python and the dependencies for a more straightforward setup; be aware that out of the box ComfyUI on Windows works only with Nvidia cards, and AMD needs directml, which is slow. After the base installation, installing the ComfyUI Manager extension is highly recommended.

The easiest way to update ComfyUI is through the Manager: click Manager > Update All, then reload the ComfyUI page after the update.

Prompt weights behave differently from a1111. A very short example: when doing (masterpiece:1.2) (best:1.3) (quality:1.4) girl, the a1111 UI is actually doing something like (but across all the tokens) (masterpiece:0.98) (best:1.06) (quality:1.14) (girl:0.81). In ComfyUI the strengths are not averaged out like this, so it will use the strengths exactly as you prompt them.
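The normalization contrast above can be sketched numerically, assuming a1111 rescales emphasis so the mean weight stays near 1 (the real implementation works across all tokens and may differ in detail, which is why the last value varies with rounding):

```python
# ComfyUI applies the weights exactly as typed; the a1111-like rescale
# below reproduces the quoted numbers up to rounding.
weights = {"masterpiece": 1.2, "best": 1.3, "quality": 1.4, "girl": 1.0}

mean = sum(weights.values()) / len(weights)          # 1.225
a1111_like = {k: round(w / mean, 2) for k, w in weights.items()}
comfyui = dict(weights)  # unchanged: strengths used as prompted
```

Practically, this means prompts tuned in a1111 often look over-emphasized when pasted into ComfyUI unchanged.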
The KSampler is the core of any workflow and can be used to perform both text-to-image and image-to-image generation tasks; note that in ComfyUI, txt2img and img2img are the same node. Txt2Img is achieved by passing an empty latent image to the sampler node with maximum denoise. For img2img, connect a model, positive and negative conditioning, and an existing latent image, and use a denoise value of less than 1.
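The txt2img/img2img distinction boils down to denoise. As an illustration (not ComfyUI's literal implementation), denoise decides how many of the scheduled steps actually run:

```python
def first_active_step(total_steps: int, denoise: float) -> int:
    """0-based index of the first sampling step that runs."""
    # denoise 1.0 runs every step (pure noise in -> txt2img);
    # lower denoise skips the early, most destructive steps,
    # preserving more of the input image.
    return total_steps - int(total_steps * denoise)

txt2img_start = first_active_step(20, 1.0)  # 0 -> all 20 steps run
img2img_start = first_active_step(20, 0.5)  # 10 -> only the last 10 run
```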
A very common practice is to generate a batch of 4 images, pick the best one to be upscaled, and maybe apply some inpainting to it; ComfyUI offers batch selection through the "Latent From Batch" node. A simple workflow can do the upscale with basic latent upscaling, or with non-latent upscaling, where an upscale model such as ESRGAN handles the upscaling step — and you may want to try different models for the upscale. One documented workflow generates a cartoonish picture with one model, then upscales it and turns it into a realistic one by applying a different checkpoint and, optionally, different prompts.
Beyond plain prompts, conditioning nodes guide the process toward certain compositions using the Conditioning (Set Area), Conditioning (Set Mask), or GLIGEN Textbox Apply node, or provide additional visual hints through the Apply Style Model, Apply ControlNet, or unCLIP Conditioning nodes. Since general shapes like poses and subjects are denoised in the first sampling steps, this lets you position subjects with specific poses anywhere on the image while keeping a great amount of consistency. The area composition examples include an image with 4 different areas (night, evening, day, morning), a composite of 4 images — 1 background image and 3 subjects — made with Anything-V3 plus a second pass with AbyssOrangeMix2_hard, and an AI-generated horizontal panorama of a landscape depicting different seasons. If you want to draw two different characters together without blending their features, there is a custom node for exactly that. For unCLIP, noise_augmentation controls how closely the model will try to follow the image concept; the lower the value, the more it follows. Some nodes also load LoRAs straight from the prompt text, for example <lora:SDXL/16mm_film_style.safetensors:0.7> to load a LoRA at 70% strength.

For video models, the sampler can ramp CFG across frames: the first frame gets cfg 1.0 (the min_cfg set in the node), the middle frame 1.75, and the last frame 2.5 (the cfg set in the sampler). This way, frames further away from the init frame get a gradually higher cfg.
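The per-frame cfg ramp (first frame at min_cfg, last frame at the sampler's cfg) can be sketched as a linear interpolation — linearity is an assumption, but it reproduces the quoted 1.0 / 1.75 / 2.5 values:

```python
# min_cfg on frame 0, the sampler cfg on the last frame, linear between.
def frame_cfgs(num_frames: int, min_cfg: float, max_cfg: float):
    if num_frames == 1:
        return [min_cfg]
    step = (max_cfg - min_cfg) / (num_frames - 1)
    return [min_cfg + i * step for i in range(num_frames)]

cfgs = frame_cfgs(3, 1.0, 2.5)  # matches the example in the text
```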
Inpainting has its own example: a reminder that you can right-click images in the "Load Image" node and choose "Open in MaskEditor" to paint a mask, and there is an outpainting example as well. ControlNets are covered too, with examples for the Canny ControlNet and the Inpaint ControlNet (the example input image is provided). For SDXL-family checkpoints, the only important thing for optimal performance is setting the resolution to 1024x1024 or another resolution with the same amount of pixels but a different aspect ratio — for example, 896x1152 or 1536x640 are good resolutions. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI.
ComfyUI supports a growing range of models beyond Stable Diffusion, each with its own example page.

Wan2.1, open-sourced by Alibaba in February 2025, is a benchmark family of video models. It is licensed under the Apache 2.0 license and offers two versions, 14B (14 billion parameters) and 1.3B (1.3 billion parameters), covering tasks including text-to-video (T2V) and image-to-video (I2V). Lightricks LTX-Video is a very efficient video model; the important thing with this model is to give it long, descriptive prompts. Nvidia Cosmos is a family of "World Models"; ComfyUI currently supports specifically the 7B and 14B text-to-video diffusion models and the 7B and 14B image-to-video diffusion models.

Among image models, Lumina Image 2.0 runs on ComfyUI after downloading the Lumina 2.0 checkpoint into the folder ComfyUI > models > checkpoints. HiDream I1 is a state-of-the-art image diffusion model that first needs its text encoder files (such as clip_l_hidream.safetensors) downloaded. Chroma is a model modified from Flux, with some changes in the architecture; you will first need its text encoder and VAE. There are example pages for SD3.5 and Redux as well, and for the Stable Cascade examples the files are renamed with a stable_cascade_ prefix, for example stable_cascade_canny.safetensors or stable_cascade_inpainting.safetensors. Stable Zero123 is a diffusion model that, given an image of an object on a simple background, can generate images of that object from different angles. On the audio side, the ACE Step model (ace_step_v1_3.5b) has its own examples.

LCM models are special models meant to be sampled in very few steps, and LCM LoRAs convert a regular model into an LCM model: the LCM SDXL LoRA can be downloaded, renamed to lcm_lora_sdxl.safetensors, and put in your ComfyUI/models/loras directory. Hypernetworks are patches applied to the main MODEL: put them in the models/hypernetworks directory and use the Hypernetwork Loader node; you can apply multiple hypernetworks by chaining several Hypernetwork Loader nodes in sequence.
ComfyUI has a lot of custom nodes, but you may still have a special use case for which no custom node exists. You don't need to know how to write Python yourself: use an LLM to generate the code you need, paste it into a node, and voilà — you have a custom node that does exactly what you need. A custom node is defined using a Python class, which must include four things: CATEGORY, which specifies where in the add-new-node menu the custom node will be located; INPUT_TYPES, a class method defining what inputs the node will take; RETURN_TYPES, which defines what outputs the node will produce; and FUNCTION, the name of the method to run — for example, if `FUNCTION = "execute"` then it will run Example().execute(). An optional OUTPUT_NODE (`bool`) marks a node that outputs a result/image from the graph.

The ecosystem of ready-made node packs is large: ReActor for face swapping (its ReActorBuildFaceModel node outputs a blended face_model, and face masking is available by adding the ReActorMaskHelper node); a KLing AI API pack that lets you use the KLing AI API directly in ComfyUI (on Windows portable, install its dependencies with python -m pip install -r ComfyUI\custom_nodes\ComfyUI-KLingAI-API\requirements); a Florence2 fork supporting Document Visual Question Answering (DocVQA), which lets you ask questions about the content of document images; an LLM agent framework that includes an MCP server and Omost; an official native node for InfiniteYou with FLUX, for identity-preserved generation; ComfyUI Unique3D, which runs AiuniAI/Unique3D inside ComfyUI; kijai's wrapper packs (MimicMotion, HunyuanVideo, FramePack); a simple node for generating images of actual couples; and helper-node collections built for ease of use with PonyXL v6. The "Simple String Repository" nodes come in three variations based on how many strings can be selected (Small for 3, no suffix for 5, and Large for 10), each with a "compact" version that is worse for automated workflows but more comfortable if you intend to set all selection parameters (1 required, 2 optional) manually. One caution: the ControlNet auxiliary preprocessors pack is a rework of comfyui_controlnet_preprocessors, and you need to remove the old pack before using it because the two conflict with each other.
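The four required pieces can be shown in a minimal sketch. The node name and behavior here are made up for illustration (real nodes receive torch tensors, not plain numbers):

```python
# Minimal custom-node sketch with the four required pieces.
class InvertBrightness:
    CATEGORY = "examples/image"      # location in the add-node menu

    @classmethod
    def INPUT_TYPES(cls):
        # required/optional inputs and their ComfyUI types
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)        # tuple of output types
    FUNCTION = "execute"             # method name ComfyUI will call

    def execute(self, image):
        # ComfyUI images are 0..1 floats, so 1 - x inverts brightness
        return (1.0 - image,)
```

ComfyUI discovers classes like this through a node pack's registration mappings and calls the method named by FUNCTION, returning the tuple declared by RETURN_TYPES.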
ComfyUI can also be driven programmatically, via its HTTP API and websocket. You can get an example of the json_data_object by enabling Dev Mode in the ComfyUI settings and then clicking the newly added export button. If your interface has a fixed form, extract the prompt as an API export from the corresponding workflow, modify that prompt, and then send the API request; the name of the workflow sent in the inputs should match the name of the file (without the .json extension), and an /interrupt call stops the current job.

On top of this, several serving options exist. ComfyICU provides a robust REST API for executing custom ComfyUI workflows in production without the burden of managing GPU infrastructure. The ComfyDeploy SDK can be integrated into a Next.js application, and demo projects show how to run Comfy workflows behind a user interface. The ComfyUI Serving Toolkit serves image generation workflows on Discord and other platforms, making image generation bots easier to build, and a simple demo shows how to link Gradio and ComfyUI together. One user's plan captures the appeal: build a GUI in Vue that grabs images from the input or output folders and lets users call the API by filling out JSON templates that use assets already in the ComfyUI library.
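The export-modify-send loop can be sketched with the stock server's /prompt endpoint on the default port 8188. The node id "6" and its inputs below are purely illustrative — they depend entirely on your own exported workflow:

```python
# Build a request that queues an API-format workflow on a local ComfyUI.
import json
import urllib.request

def build_queue_request(workflow: dict, server: str = "127.0.0.1:8188"):
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# modify the exported prompt, then send it
workflow = {"6": {"class_type": "CLIPTextEncode",
                  "inputs": {"text": "a dog, full body", "clip": ["4", 1]}}}
workflow["6"]["inputs"]["text"] = "a dog, fluffy"
req = build_queue_request(workflow)
# urllib.request.urlopen(req) would submit it to a running ComfyUI instance
```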
Several custom guider nodes refine how conditioning is applied. GeometricCFGGuider samples the two conditionings, then blends between them using a user-chosen alpha. ScaledCFGGuider samples the two conditionings, then adds them using a method similar to "Add Trained Difference" from model merging. ImageAssistedCFGGuider samples the conditioning and, as the name suggests, brings a reference image into the guidance. Relatedly, the noise parameter on IPAdapter nodes is an experimental exploitation of the IPAdapter models; usually it's a good idea to lower the weight.

One debated corner case: ConditioningZeroOut is supposed to ignore the prompt no matter what is written, yet zeroed conditioning still steers some models — so you'd expect to get no images, but you do. Either the model passes instructions when there is no prompt, or ConditioningZeroOut doesn't work and zero doesn't mean zero.
Performance and ergonomics vary widely in community reports. One user with an RTX 4070 Ti SUPER and 128GB of system RAM reported 670 seconds to render a single example image of a galaxy in a bottle, and MacBook Pro M3 Max owners on macOS Sonoma report the same schnell/fp8 noise problems seen on other platforms. Power users note workflow friction too: a basic SDXL setup already uses 3 positive + 3 negative prompts (one for each text encoder: base G+, base G-, base L+, base L-, refiner+, refiner-), and doing prompt transitions on top of that would require several times more nodes just to handle the prompt. Others have asked maintainers to ship workflows in an example_workflows directory, so functionality can be found directly in the ComfyUI interface instead of hunting for workflows each time.

Two optimizations help: TeaCache has been integrated into ComfyUI and is compatible with the native nodes — simply connect the TeaCache node to them — and memory-efficient offloading (a concept credited to 2kpr) reduces VRAM pressure. Using CFG means doing two passes through the model on each step, so it's a lot slower and costs more memory; sequential CFG support, where the two passes are not done as a batch, is being worked on for lower memory use, and can even end up faster because block swap is no longer needed.
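The reason CFG doubles the per-step work is the standard classifier-free-guidance blend, which needs both a conditioned and an unconditioned prediction every step. A minimal sketch:

```python
# Standard classifier-free guidance: move from the unconditioned
# prediction toward the conditioned one, scaled by cfg.
def cfg_combine(cond: float, uncond: float, cfg: float) -> float:
    # cfg = 1.0 returns the conditioned prediction unchanged;
    # larger values push further from the unconditioned one.
    return uncond + cfg * (cond - uncond)
```

Models distilled to run without a negative prompt (like Flux) skip the second pass entirely, which is why forcing CFG onto them costs so much.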
ComfyUI follows a weekly release cycle every Friday, with three interconnected repositories: ComfyUI Core releases a new stable version and serves as the foundation for the desktop release; ComfyUI Desktop builds a new release using the latest stable core version; and ComfyUI Frontend merges weekly frontend updates into the core.

For next steps, see what ComfyUI can do with the example workflows: the ComfyUI_examples repository contains many workflows in its examples directory, and the community-maintained ComfyUI Community Docs aim to get you up and running, through your first generation, with step-by-step instructions, in-depth explanations of key concepts, and suggestions for what to explore next. There are always READMEs and instructions. One side project even experiments with using whole workflows as reusable components, though as a proof of concept it lacks many features, is unstable, and has parts that do not function properly.