---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
- lora
- text-to-image
license: openrail++
inference: false
---
# Latent Consistency Model (LCM) LoRA: SDv1-5
The Latent Consistency Model (LCM) LoRA was proposed in *LCM-LoRA: A Universal Stable-Diffusion Acceleration Module* by Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.

It is a distilled consistency adapter for `runwayml/stable-diffusion-v1-5` that reduces the number of inference steps to just 2 to 8 steps.
## Usage
LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of Diffusers along with `peft`, `accelerate`, and `transformers`.
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
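To confirm that the installed Diffusers version meets the v0.23.0 requirement mentioned above, you can run a quick check like the following (a minimal sketch; it only relies on the standard `packaging` helper, nothing model-specific is assumed):

```python
# Minimal sketch: verify that the installed diffusers version is at least v0.23.0,
# the first release with LCM-LoRA support.
import diffusers
from packaging import version

assert version.parse(diffusers.__version__) >= version.parse("0.23.0"), (
    f"diffusers {diffusers.__version__} is too old for LCM-LoRA; "
    "please upgrade with `pip install --upgrade diffusers`."
)
print(f"diffusers {diffusers.__version__} is ready for LCM-LoRA.")
```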
**Note: For detailed usage examples, please refer to the official LCM-LoRA documentation.**
### Text-to-Image
The adapter can be loaded with SDv1-5 or its derivatives. Here we use `Lykon/dreamshaper-7`. Next, the scheduler needs to be changed to `LCMScheduler`, and the number of inference steps can be reduced to just 2 to 8 steps. Please make sure to either disable `guidance_scale` or use a value between 1.0 and 2.0.
```python
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "Lykon/dreamshaper-7"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"

pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# load and fuse the LCM-LoRA
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# disable guidance_scale by passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
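As noted above, instead of disabling guidance you can also use a small `guidance_scale` between 1.0 and 2.0. A minimal sketch of that variant, reusing the `pipe` and `prompt` defined above (the output filename is just an example):

```python
# Same pipeline, but with classifier-free guidance in the recommended 1.0-2.0 range.
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.5).images[0]
image.save("lcm_lora_t2i.png")  # example output path
```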

### Image-to-Image
LCM-LoRA can be applied to image-to-image tasks as well. Let's look at how to perform image-to-image generation with LCMs. For this example, we'll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5.
```python
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from diffusers.utils import make_image_grid, load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-7",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load and fuse the LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()

# prepare the initial image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to the pipeline
generator = torch.manual_seed(0)
image = pipe(
    prompt,
    image=init_image,
    num_inference_steps=4,
    guidance_scale=1,
    strength=0.6,
    generator=generator
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```
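The result is sensitive to the interplay between `num_inference_steps`, `strength`, and `guidance_scale`, so it is worth trying a few values. A minimal sketch of such a sweep, reusing the pipeline, prompt, and init image from above (the grid layout is just an example):

```python
# Try a few strength values to see how strongly the init image is preserved.
results = []
for strength in (0.4, 0.6, 0.8):
    generator = torch.manual_seed(0)  # fixed seed for a fair comparison
    results.append(
        pipe(
            prompt,
            image=init_image,
            num_inference_steps=4,
            guidance_scale=1,
            strength=strength,
            generator=generator,
        ).images[0]
    )
make_image_grid([init_image, *results], rows=1, cols=4)
```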

### Inpainting
LCM-LoRA can be used for inpainting as well.
```python
import torch
from diffusers import AutoPipelineForInpainting, LCMScheduler
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load and fuse the LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()

# load base image and mask
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    generator=generator,
    num_inference_steps=4,
    guidance_scale=4,
).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
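Since the adapter targets the 2–8 step range, it can be useful to compare a few step counts on the same masked region. A minimal sketch of that comparison, reusing the pipeline, prompt, image, and mask from above:

```python
# Compare different step counts within the 2-8 range supported by the adapter.
outputs = []
for steps in (2, 4, 8):
    generator = torch.manual_seed(0)  # fixed seed so only the step count changes
    outputs.append(
        pipe(
            prompt=prompt,
            image=init_image,
            mask_image=mask_image,
            generator=generator,
            num_inference_steps=steps,
            guidance_scale=4,
        ).images[0]
    )
make_image_grid(outputs, rows=1, cols=3)
```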

### ControlNet
For this example, we'll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with a canny ControlNet.
```python
import torch
import cv2
import numpy as np
from PIL import Image

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image, make_image_grid

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))

# extract canny edges from the conditioning image
image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None,
    variant="fp16"
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load the LCM-LoRA (not fused here, so its scale is passed via cross_attention_kwargs)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

generator = torch.manual_seed(0)
image = pipe(
    "the mona lisa",
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1.5,
    controlnet_conditioning_scale=0.8,
    cross_attention_kwargs={"scale": 1},
    generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```
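Note that in this example the LoRA is only loaded, not fused, so its strength is passed through `cross_attention_kwargs`. If you prefer, you can fuse it instead, in which case that extra kwarg is no longer needed. A minimal sketch of that alternative, reusing the pipeline and canny image from above:

```python
# Alternative: fuse the LoRA into the model weights instead of scaling it per call.
pipe.fuse_lora()

generator = torch.manual_seed(0)
image = pipe(
    "the mona lisa",
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=1.5,
    controlnet_conditioning_scale=0.8,
    generator=generator,
).images[0]
```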

## Speed Benchmark

TBD

## Training

TBD