Commit Graph

  • 5715be2ca9 Fix Hunyuan unet config detection for some models. (#6877) maedtb 2025-02-19 07:14:45 -05:00
  • 0d4d9222c6 Add early experimental SaveWEBM node to save .webm files. comfyanonymous 2025-02-19 07:11:49 -05:00
  • afc85cdeb6 Add Load Image Output node (#6790) bymyself 2025-02-18 15:53:01 -07:00
  • acc152b674 Support loading and using SkyReels-V1-Hunyuan-I2V (#6862) Jukka Seppänen 2025-02-19 00:06:54 +02:00
  • b07258cef2 Fix typo. comfyanonymous 2025-02-18 07:28:33 -05:00
  • 31e54b7052 Improve AMD arch detection. comfyanonymous 2025-02-17 04:53:40 -05:00
  • 8c0bae50c3 bf16 manual cast works on old AMD. comfyanonymous 2025-02-17 04:42:40 -05:00
  • 530412cb9d Refactor torch version checks to be more future proof. comfyanonymous 2025-02-17 04:36:45 -05:00
  • 61c8c70c6e Support system prompt and cfg renorm in Lumina2 (#6795) Zhong-Yu Li 2025-02-17 07:15:43 +08:00
  • d0399f4343 Update frontend to v1.9.18 (#6828) Comfy Org PR Bot 2025-02-17 01:45:47 +09:00
  • e2919d38b4 Disable bf16 on AMD GPUs that don't support it. comfyanonymous 2025-02-16 05:45:08 -05:00
  • 93c8607d51 Remove light_intensity and fov from load3d (#6742) Terry Jia 2025-02-15 15:34:36 -05:00
  • b3d6ae15b3 Update frontend to v1.9.17 (#6814) Comfy Org PR Bot 2025-02-15 18:32:47 +09:00
  • 2e21122aab Add a node to set the model compute dtype for debugging. comfyanonymous 2025-02-15 04:15:37 -05:00
  • 1cd6cd6080 Disable pytorch attention in VAE for AMD. comfyanonymous 2025-02-14 05:42:14 -05:00
  • d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7. comfyanonymous 2025-02-14 04:17:56 -05:00
  • 042a905c37 Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807) Robin Huang 2025-02-13 17:39:04 -08:00 (see the first sketch after this log)
  • 019c7029ea Add a way to set a different compute dtype for the model at runtime. comfyanonymous 2025-02-13 20:34:03 -05:00
  • 8773ccf74d Better memory estimation for ROCm GPUs that support mem efficient attention. comfyanonymous 2025-02-13 08:32:36 -05:00
  • 1d5d6586f3 Fix ruff. comfyanonymous 2025-02-12 06:49:16 -05:00
  • 35740259de Fix Ascend bf16 inference error (#6794) zhoufan2956 2025-02-12 19:48:11 +08:00
  • ab888e1e0b Add add_weight_wrapper function to model patcher. comfyanonymous 2025-02-12 05:49:00 -05:00
  • d9f0fcdb0c Cleanup. comfyanonymous 2025-02-11 17:17:03 -05:00
  • b124256817 Fix for running via DirectML (#6542) HishamC 2025-02-11 14:11:32 -08:00
  • af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. comfyanonymous 2025-02-11 08:31:46 -05:00
  • e57d2282d1 Fix incorrect Content-Type for WebP images (#6752) bananasss00 2025-02-11 12:48:35 +03:00
  • 4027466c80 Make lumina model work with any latent resolution. comfyanonymous 2025-02-10 00:24:20 -05:00
  • 095d867147 Remove useless function. comfyanonymous 2025-02-09 07:01:38 -05:00
  • caeb27c3a5 res_multistep: Fix cfgpp and add ancestral samplers (#6731) Pam 2025-02-09 05:39:58 +05:00
  • 3d06e1c555 Make error clearer to user. comfyanonymous 2025-02-08 18:57:24 -05:00
  • 43a74c0de1 Allow FP16 accumulation with --fast (#6453) catboxanon 2025-02-08 17:00:56 -05:00
  • af93c8d1ee Document which text encoder to use for Lumina 2. comfyanonymous 2025-02-08 06:54:03 -05:00
  • 832e3f5ca3 Fix another small bug in attention_bias redux (#6737) Raphael Walker 2025-02-07 20:44:43 +01:00
  • 079eccc92a Don't compress HTTP responses by default. comfyanonymous 2025-02-07 03:29:12 -05:00
  • b6951768c4 Fix a bug in the attn_masked redux code when using weight=1.0 (#6721) Raphael Walker 2025-02-06 22:51:16 +01:00
  • fca304debf Update frontend to v1.8.14 (#6724) Comfy Org PR Bot 2025-02-07 00:43:10 +09:00
  • 14880e6dba Remove some useless code. comfyanonymous 2025-02-06 05:00:19 -05:00
  • f1059b0b82 Remove unused GET /files API endpoint (#6714) Chenlei Hu 2025-02-05 18:48:36 -05:00
  • debabccb84 Bump ComfyUI version to v0.3.14 comfyanonymous 2025-02-05 15:47:46 -05:00
  • 37cd448529 Set the shift for Lumina back to 6. comfyanonymous 2025-02-05 14:49:52 -05:00
  • 94f21f9301 Upcasting rope to fp32 seems to make no difference in this model. comfyanonymous 2025-02-05 04:32:47 -05:00
  • 60653004e5 Use regular numbers for rope in lumina model. comfyanonymous 2025-02-05 04:16:59 -05:00
  • a57d635c5f Fix lumina 2 batches. comfyanonymous 2025-02-04 21:48:11 -05:00
  • 016b219dcc Add Lumina Image 2.0 to Readme. comfyanonymous 2025-02-04 08:08:36 -05:00
  • 8ac2dddeed Lower the default shift of lumina to reduce artifacts. comfyanonymous 2025-02-04 06:50:37 -05:00
  • 3e880ac709 Fix for Python 3.9. comfyanonymous 2025-02-04 04:20:56 -05:00
  • e5ea112a90 Support Lumina 2 model. comfyanonymous 2025-02-04 03:56:00 -05:00
  • 8d88bfaff9 Allow searching for new .pt2 extension, which can contain AOTI compiled modules (#6689) Raphael Walker 2025-02-03 23:07:35 +01:00
  • ed4d92b721 Model merging nodes for cosmos. comfyanonymous 2025-02-03 03:31:39 -05:00
  • 932ae8d9ca Update frontend to v1.8.13 (#6682) Comfy Org PR Bot 2025-02-03 07:54:44 +09:00
  • 44e19a28d3 Use maximum negative value instead of -inf for masks in text encoders. comfyanonymous 2025-02-02 09:45:07 -05:00 (see the second sketch after this log)
  • 0a0df5f136 Better guide message for sageattention (#6634) Dr.Lt.Data 2025-02-02 23:26:47 +09:00
  • 24d6871e47 Add disable-compres-response-body CLI arg and compress middleware (#6672) KarryCharon 2025-02-02 22:24:55 +08:00
  • 9e1d301129 Only use stable cascade lora format with cascade model. comfyanonymous 2025-02-01 06:35:22 -05:00
  • 768e035868 Add node for previewing 3D animations (#6594) Terry Jia 2025-01-31 13:09:07 -05:00
  • 669e0497ea Update frontend to v1.8.12 (#6662) Comfy Org PR Bot 2025-02-01 03:07:37 +09:00
  • 541dc08547 Update Readme. comfyanonymous 2025-01-31 08:35:48 -05:00
  • 8d8dc9a262 Allow batch of different sigmas when noise scaling. comfyanonymous 2025-01-30 06:49:52 -05:00
  • 2f98c24360 Update Readme with link to instruction for Nvidia 50 series. comfyanonymous 2025-01-30 02:12:43 -05:00
  • ef85058e97 Bump ComfyUI version to v0.3.13 comfyanonymous 2025-01-29 16:07:12 -05:00
  • f9230bd357 Update the python version in some workflows. comfyanonymous 2025-01-29 15:54:13 -05:00
  • 537c27cbf3 Bump default cuda version in standalone package to 126. comfyanonymous 2025-01-29 08:13:33 -05:00
  • 6ff2e4d550 Remove logging call added in last commit. comfyanonymous 2025-01-29 08:08:01 -05:00
  • 222f48c0f2 Allow changing folder_paths.base_path via command line argument. (#6600) filtered 2025-01-30 00:06:28 +11:00
  • 13fd4d6e45 More friendly error messages for corrupted safetensors files. comfyanonymous 2025-01-28 09:41:09 -05:00
  • 1210d094c7 Convert latents_ubyte to 8-bit unsigned int before converting to CPU (#6300) Bradley Reynolds 2025-01-28 07:22:54 -06:00
  • 255edf2246 Lower minimum ratio of loaded weights on Nvidia. comfyanonymous 2025-01-27 05:26:51 -05:00
  • 4f011b9a00 Better CLIPTextEncode error when clip input is None. comfyanonymous 2025-01-26 06:04:57 -05:00
  • 67feb05299 Remove redundant code. comfyanonymous 2025-01-25 19:04:53 -05:00
  • 6d21740346 Print ComfyUI version. comfyanonymous 2025-01-25 15:03:57 -05:00
  • 7fbf4b72fe Update nightly pytorch ROCm command in Readme. comfyanonymous 2025-01-24 06:15:38 -05:00
  • 14ca5f5a10 Remove useless code. comfyanonymous 2025-01-24 06:15:05 -05:00
  • ce557cfb88 Remove redundant code (#6576) filtered 2025-01-23 21:57:41 +11:00
  • 96e2a45193 Remove useless code. comfyanonymous 2025-01-23 05:56:23 -05:00
  • dfa2b6d129 Remove unused function lcm in conds.py (#6572) Chenlei Hu 2025-01-23 05:54:09 -05:00
  • f3566f0894 Remove some params from load 3d node (#6436) Terry Jia 2025-01-22 17:23:51 -05:00
  • ca69b41cee Add utils/ to web server developer codeowner (#6570) Chenlei Hu 2025-01-22 17:16:54 -05:00
  • a058f52090 [i18n] Add /i18n endpoint to provide all custom node translations (#6558) Chenlei Hu 2025-01-22 17:15:45 -05:00
  • d6bbe8c40f Remove support for python 3.8. comfyanonymous 2025-01-22 17:04:30 -05:00
  • a7fe0a94de Refactor and fixes for video latents. comfyanonymous 2025-01-22 06:37:46 -05:00
  • e857dd48b8 Add gradient estimation sampler (#6554) chaObserv 2025-01-22 18:29:40 +08:00
  • d303cb5341 Add missing case to CLIPLoader. comfyanonymous 2025-01-21 08:57:04 -05:00
  • fb2ad645a3 Add FluxDisableGuidance node to disable using the guidance embed. comfyanonymous 2025-01-20 14:50:24 -05:00
  • d8a7a32779 Cleanup old TODO. comfyanonymous 2025-01-20 03:44:13 -05:00
  • a00e1489d2 LatentBatch fix for video latents. comfyanonymous 2025-01-19 06:01:56 -05:00
  • ebf038d4fa Use torch.special.expm1 (#6388) Sergii Dymchenko 2025-01-19 01:54:32 -08:00 (see the third sketch after this log)
  • b4de04a1c1 Update frontend to v1.7.14 (#6522) Comfy Org PR Bot 2025-01-19 11:43:37 +09:00
  • b1a02131c9 Remove comfy.samplers self-import (#6506) catboxanon 2025-01-18 17:49:51 -05:00
  • 3a3910f91d PromptServer: Return 400 for empty filename param (#6504) catboxanon 2025-01-18 17:47:33 -05:00
  • 507199d9a8 Uni pc sampler now works with audio and video models. comfyanonymous 2025-01-18 05:27:58 -05:00
  • 2f3ab40b62 Add warning when using old pytorch versions. comfyanonymous 2025-01-17 18:47:27 -05:00
  • 7fc3ccdcc2 Note in the README that Nvidia Cosmos is supported. comfyanonymous 2025-01-16 21:17:18 -05:00
  • 55add50220 Bump ComfyUI version to v0.3.12 comfyanonymous 2025-01-16 18:11:57 -05:00
  • 0aa2368e46 Fix some cosmos fp8 issues. comfyanonymous 2025-01-16 17:45:37 -05:00
  • cca96a85ae Fix cosmos VAE failing with videos longer than 121 frames. comfyanonymous 2025-01-16 16:30:06 -05:00
  • 619b8cde74 Bump ComfyUI version to 0.3.11 comfyanonymous 2025-01-16 14:54:48 -05:00
  • 31831e6ef1 Code refactor. comfyanonymous 2025-01-16 07:23:54 -05:00
  • 88ceb28e20 Tweak hunyuan memory usage factor. comfyanonymous 2025-01-16 06:31:03 -05:00
  • 23289a6a5c Clean up some debug lines. comfyanonymous 2025-01-16 04:24:39 -05:00
  • 9d8b6c1f46 More accurate memory estimation for cosmos and hunyuan video. comfyanonymous 2025-01-16 03:48:40 -05:00
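
Sketches

Commit 042a905c37 fixes a classic portability pitfall: open() defaults to the locale's preferred encoding (often cp1252 on Windows), so an extra_model_paths.yaml containing non-ASCII paths can fail to parse. A minimal sketch of the pattern, not ComfyUI's actual code (the helper name is hypothetical):

```python
import yaml  # PyYAML

def load_extra_model_paths(path: str) -> dict:
    # Hypothetical helper. Pass the encoding explicitly: the platform
    # default is not UTF-8 on many Windows installs, which breaks configs
    # containing non-ASCII model paths.
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)
```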
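Commit 44e19a28d3 applies a common numerical-stability trick. Softmax subtracts the row maximum before exponentiating, so an additive mask built from float("-inf") turns a fully masked row into -inf minus -inf, i.e. NaN, while the dtype's most negative finite value merely degrades that row to a uniform distribution. A hedged sketch of the idea, with hypothetical names:

```python
import torch

def additive_attention_mask(keep: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # keep: bool tensor, True where attention is allowed.
    mask = torch.zeros(keep.shape, dtype=dtype)
    # torch.finfo(dtype).min instead of float("-inf"): finite values keep
    # the max-subtraction inside softmax well defined even when a whole
    # row is masked out.
    mask[~keep] = torch.finfo(dtype).min
    return mask

scores = torch.randn(1, 4, 4)
keep = torch.ones(1, 4, 4, dtype=torch.bool)
keep[:, 3, :] = False  # query row 3 may attend to nothing at all
probs = torch.softmax(scores + additive_attention_mask(keep, scores.dtype), dim=-1)
assert not torch.isnan(probs).any()  # the dead row comes out uniform, not NaN
```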
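Commit ebf038d4fa swaps a hand-rolled exp(x) - 1 for torch.special.expm1, which avoids catastrophic cancellation near zero. A short illustration with made-up values:

```python
import torch

x = torch.tensor([1e-8])
print(torch.exp(x) - 1)        # tensor([0.]): exp(1e-8) rounds to 1.0 in float32
print(torch.special.expm1(x))  # tensor([1.0000e-08]): accurate for small |x|
```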