library_name: diffusers
pipeline_tag: text-to-video
tags:
AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion text-to-image models.
It achieves this by inserting motion module layers into a frozen text-to-image model and training them on video clips to extract a motion prior.
These motion modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet, and their purpose is to introduce coherent motion across image frames. To support them, we introduce the concepts of a MotionAdapter and a UNetMotionModel, which provide a convenient way to use these motion modules with existing Stable Diffusion models.
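As a minimal illustrative sketch (assuming the "guoyww/animatediff-motion-adapter-v1-5-2" adapter and the Stable Diffusion 1.5 checkpoint used later in this card; any compatible pair should work), this is how a MotionAdapter attaches to an existing text-to-image checkpoint, with the pipeline converting the checkpoint's UNet into a UNetMotionModel internally:
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
# Example IDs; any compatible SD 1.5 checkpoint and motion adapter can be used.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
# pipe.unet is now a UNetMotionModel: the original UNet2DConditionModel with
# motion modules inserted after its ResNet and Attention blocks.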
SparseControlNetModel is an implementation of ControlNet for AnimateDiff.
ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.
The SparseCtrl version of ControlNet was introduced in SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models by Yuwei Guo, Ceyuan Yang, Anyi Rao, Maneesh Agrawala, Dahua Lin, and Bo Dai, for achieving controlled generation in text-to-video diffusion models.
The following example shows how to use the motion modules and a sparse ControlNet together with an existing Stable Diffusion text-to-image model.
import torch
from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif, load_image
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
controlnet_id = "guoyww/animatediff-sparsectrl-scribble"
lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
vae_id = "stabilityai/sd-vae-ft-mse"
device = "cuda"
motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    model_id,
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to(device)
pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")
pipe.fuse_lora(lora_scale=1.0)
prompt = "an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality"
negative_prompt = "low quality, worst quality, letterboxed"
image_files = [
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-1.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-2.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-3.png",
]
condition_frame_indices = [0, 8, 15]
conditioning_frames = [load_image(img_file) for img_file in image_files]
video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    conditioning_frames=conditioning_frames,
    controlnet_conditioning_scale=1.0,
    controlnet_frame_indices=condition_frame_indices,
    generator=torch.Generator().manual_seed(1337),
).frames[0]
export_to_gif(video, "output.gif")
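If GPU memory is limited, the standard diffusers memory helpers should also apply to this pipeline. The lines below are an optional sketch rather than part of the example above; with model offloading you would typically skip the earlier .to(device) call:
# Optional memory savings (sketch): offload submodules to CPU between uses
# and decode the VAE in slices to reduce peak VRAM.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()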