library_name: diffusers
pipeline_tag: text-to-video
tags:
AnimateDiff is a method for creating videos from existing Stable Diffusion text-to-image models. It works by inserting motion module layers into a frozen text-to-image model and training them on video clips to extract a motion prior. These motion modules are applied after the ResNet and attention blocks in the Stable Diffusion UNet, with the goal of producing coherent motion across image frames. To support these modules, we introduce the MotionAdapter and UNetMotionModel concepts, which serve as a convenient way to use the motion modules with existing Stable Diffusion models.
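As a minimal sketch of how these pieces fit together (assuming diffusers' `UNetMotionModel.from_unet2d` helper and reusing the checkpoint names from the example further down), a frozen 2D UNet can be wrapped with motion modules like this:

```python
# Minimal sketch: wrapping an existing Stable Diffusion UNet with AnimateDiff motion
# modules. Assumes diffusers' MotionAdapter / UNetMotionModel API; checkpoint names
# are the ones reused from the full example below.
import torch
from diffusers import MotionAdapter, UNet2DConditionModel, UNetMotionModel

unet2d = UNet2DConditionModel.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", subfolder="unet", torch_dtype=torch.float16
)
motion_adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)

# from_unet2d inserts the adapter's motion modules after the ResNet/attention blocks
# of the frozen 2D UNet, turning it into a frame-aware video UNet
unet_motion = UNetMotionModel.from_unet2d(unet2d, motion_adapter=motion_adapter)
```

The AnimateDiff pipelines perform this wrapping internally when a `motion_adapter` is passed, so the explicit step above is usually not needed.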
SparseControlNetModel is an implementation of ControlNet for AnimateDiff. ControlNet was originally introduced by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala in the paper "Adding Conditional Control to Text-to-Image Diffusion Models". The SparseCtrl variant was introduced by Yuwei Guo, Ceyuan Yang et al. in "SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models" to enable controlled generation with text-to-video diffusion models.
The following example shows how to apply the motion modules and sparse ControlNet to an existing Stable Diffusion text-to-image model:
Sample prompt: "closeup face photo of man in black clothes, night city street, bokeh, fireworks in background"
```python
import torch

from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif, load_image
# Checkpoints: Realistic Vision base model, AnimateDiff motion adapter + motion LoRA,
# SparseCtrl RGB ControlNet, and the fine-tuned MSE VAE
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
controlnet_id = "guoyww/animatediff-sparsectrl-rgb"
lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
vae_id = "stabilityai/sd-vae-ft-mse"
device = "cuda"
# Load the motion adapter, SparseCtrl ControlNet, and VAE in half precision
motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
# DPM-Solver++ scheduler with Karras sigmas, built from the base model's scheduler config
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)
# Assemble the AnimateDiff + SparseControlNet pipeline and load the motion LoRA
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    model_id,
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to(device)
pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")

# Conditioning image for the sparse control frame
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-firework.png")
# Condition only frame 0 on the image; the remaining frames are generated freely
video = pipe(
    prompt="closeup face photo of man in black clothes, night city street, bokeh, fireworks in background",
    negative_prompt="low quality, worst quality",
    num_inference_steps=25,
    conditioning_frames=image,
    controlnet_frame_indices=[0],
    controlnet_conditioning_scale=1.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]
export_to_gif(video, "output.gif")
```
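Here `conditioning_frames` supplies the control image(s) and `controlnet_frame_indices` selects the frame positions they apply to; frames not listed receive no control signal, which is the "sparse" part of SparseCtrl.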