
Add new sampler DDIM CFG++ #16035

Open · wants to merge 1 commit into base: dev

Conversation

@v0xie (Contributor) commented Jun 17, 2024

Description

This PR implements a new sampler "DDIM CFG++" derived from CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models (Chung et al., 2024).

The new sampler is a modified DDIM; the main change is that the unconditional noise prediction, rather than the conditional one, is used to guide the denoising.

Major changes:

  • Adds a new sampler "ddim_cfgpp".
  • Adds a new field in CFGDenoiser, "last_noise_uncond"; the new sampler reads the unconditional noise prediction from this field.
  • Divides the CFG scale by 12.5 when the sampler name contains "CFG++".
    • This maps the CFG scale to the CFG++ scale: [1.0, 12.5] -> [0.0, 1.0].
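For intuition, the change described above can be sketched as a single scalar DDIM step. This is a minimal illustration, not the PR's actual code: the function names, signatures, and scalar (non-tensor) form are all assumptions for clarity.

```python
import math

def cfgpp_guidance(cfg_scale: float) -> float:
    """Map the UI CFG scale to the CFG++ guidance weight by dividing
    by 12.5, as described in the PR (so 12.5 -> 1.0)."""
    return cfg_scale / 12.5

def ddim_cfgpp_step(x_t, eps_cond, eps_uncond, alpha_t, alpha_prev, guidance):
    """One DDIM-style update where the renoising term uses the
    *unconditional* noise prediction (the CFG++ change), while x0 is
    still predicted from the guided noise."""
    # Guided noise: lerp between uncond and cond, guidance in [0, 1].
    eps_cfg = eps_uncond + guidance * (eps_cond - eps_uncond)
    # Predict x0 from the guided noise, as in plain DDIM.
    x0_pred = (x_t - math.sqrt(1.0 - alpha_t) * eps_cfg) / math.sqrt(alpha_t)
    # Plain DDIM would re-inject eps_cfg here; CFG++ re-injects eps_uncond.
    return math.sqrt(alpha_prev) * x0_pred + math.sqrt(1.0 - alpha_prev) * eps_uncond
```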

Screenshots/videos:

  • SD 1.5
    Prompt: "A photo of a silver 1998 Toyota Camry."
    (image: xyz_grid-0022-1485574128)
  • SD XL
    Prompt: "A hyperrealistic portrait close-up photo of a smug man lighting a cigarette in his mouth in the city light at night in the rain with an explosion behind him."
    (image: xyz_grid-0105-1766490836)

Additional Links:

Official project page: https://cfgpp-diffusion.github.io/
Official code repository: https://github.com/CFGpp-diffusion/CFGpp
ArXiv: https://arxiv.org/abs/2406.08070


@AndreyRGW commented

Error during generation. Also, I see that ControlNet is mentioned in the error, although it was not even used during generation.

    Traceback (most recent call last):
      File "F:\WBC\automatic1111_dev\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\WBC\automatic1111_dev\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\WBC\automatic1111_dev\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\WBC\automatic1111_dev\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "F:\WBC\automatic1111_dev\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "F:\WBC\automatic1111_dev\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\WBC\automatic1111_dev\modules\processing.py", line 1328, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\WBC\automatic1111_dev\modules\sd_samplers_timesteps.py", line 159, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\WBC\automatic1111_dev\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "F:\WBC\automatic1111_dev\modules\sd_samplers_timesteps.py", line 159, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\WBC\automatic1111_dev\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\WBC\automatic1111_dev\modules\sd_samplers_timesteps_impl.py", line 51, in ddim_cfgpp
        alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(float64(x))
    NameError: name 'float64' is not defined

---

Maybe you should change line 51 in the modules/sd_samplers_timesteps_impl.py file from
"alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(float64(x))"
to
"alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' and x.device.type != 'xpu' else torch.float32)"

What do you think?

"""
alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
alphas = alphas_cumprod[timesteps]
alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(float64(x))

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Change to alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' and x.device.type != 'xpu' else torch.float32) to prevent NameError: name 'float64' is not defined

@v0xie (Contributor, Author) commented Jun 18, 2024

> Error during generation. Also i see that controlnet is mentioned in the error, although it was not even used during generation. […] Maybe you should change line 51 in the modules/sd_samplers_timesteps_impl.py file […] What do you think?

The master branch is missing the float64 method from modules/torch_utils.py. I'd recommend testing on the dev branch for now.

Relevant commit: 9c8075b

@v0xie (Contributor, Author) commented Jun 19, 2024

It's possible this approach extends to many of the existing samplers. Samples generated from my more_cfgpp branch: https://github.com/v0xie/stable-diffusion-webui/tree/more_cfgpp
(image: xyz_grid-0107-2)

@viking1304 commented

> The master branch is missing the float64 method from modules/torch_utils.py. I'd recommend testing on the dev branch for now.
>
> Relevant commit: 9c8075b

The float64 implementation added to torch_utils in #15815 does not work as intended: the function always returns torch.float64, even on mps and xpu.

I just posted #16058 with a proper implementation.

So, if anyone gets errors on a Mac while testing this, apply my fix for torch_utils first.
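A helper in the spirit of this fix might look like the sketch below. This is an illustration of the intended behavior, not the actual code from #16058; the exact signature there may differ.

```python
import torch

def float64(t: torch.Tensor) -> torch.dtype:
    """Return torch.float64, unless the reference tensor lives on a
    backend that lacks 64-bit float support (MPS, XPU), in which case
    fall back to torch.float32."""
    if t.device.type in ("mps", "xpu"):
        return torch.float32
    return torch.float64
```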

@Panchovix commented Jun 25, 2024

> It's possible this is something extensible to many of the existing samplers. Samples generated from my more_cfgpp branch https://github.com/v0xie/stable-diffusion-webui/tree/more_cfgpp

Can you point this fork at the dev branch? Or would it be possible to expose it as a scheduler parameter, to be used like this?
(image: imagen)

EDIT: I did a partial implementation here: Panchovix@f8dfe20

Panchovix added a commit to Panchovix/stable-diffusion-webui-forge that referenced this pull request Jun 30, 2024
…art 1

Re-applied AUTOMATIC1111/stable-diffusion-webui#15333 into Forge, to use any scheduler with any sampler.

Also implemented CFG++ from AUTOMATIC1111/stable-diffusion-webui#16035 as a scheduler instead of a sampler.

Ported SD Turbo and Variance Preserving from https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/main/ldm_patched/contrib/external_custom_sampler.py
Panchovix added a commit to Panchovix/stable-diffusion-webui that referenced this pull request Jun 30, 2024
Adds a CFG++ scheduler, mirroring how the CFG++ sampler is implemented in AUTOMATIC1111#16035.

Modified the code to make it work with k_diffusion samplers and not break compatibility with other schedulers.