---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
language:
- en
pipeline_tag: text-to-image
---
# EcomXL Inpaint ControlNet

EcomXL is a series of text-to-image diffusion models optimized for e-commerce scenarios, developed on the basis of Stable Diffusion XL.

For e-commerce needs, we trained an inpaint ControlNet to control the diffusion model. Unlike generic inpaint ControlNets, this model is fine-tuned with instance masks, which effectively prevents the foreground from outpainting into the background.
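To illustrate what an instance mask means here (a hypothetical sketch; the actual masks used in training come from a segmentation pipeline that is not published with this card): the inpaint mask follows the product's exact silhouette instead of a random rectangle, so no foreground pixel is ever marked for repainting.

```python
import numpy as np

# Hypothetical 4x4 foreground segmentation: 1 = product pixel.
fg = np.array(
    [
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
    ],
    dtype=np.uint8,
)

# Instance mask for inpainting: repaint only the background,
# i.e. the exact complement of the product instance.
instance_mask = (1 - fg) * 255

# A random rectangular mask, by contrast, can overlap the product:
rect_mask = np.zeros_like(fg)
rect_mask[0:3, 0:3] = 255

overlap_instance = int(np.logical_and(instance_mask > 0, fg > 0).sum())
overlap_rect = int(np.logical_and(rect_mask > 0, fg > 0).sum())
print(overlap_instance, overlap_rect)  # 0 4 -> the instance mask never touches the product
```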
## Examples

The cases below were generated with AUTOMATIC1111/stable-diffusion-webui.
## Usage with Diffusers
```python
from diffusers import (
    ControlNetModel,
    StableDiffusionXLControlNetPipeline,
    DDPMScheduler,
)
from diffusers.utils import load_image
import torch
from PIL import Image
import numpy as np


def make_inpaint_condition(init_image, mask_image):
    init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0
    mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0
    assert init_image.shape[0:2] == mask_image.shape[0:2], "image and mask must have the same size"
    init_image[mask_image > 0.5] = -1.0  # mark masked pixels
    init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2)
    init_image = torch.from_numpy(init_image)
    return init_image


def add_fg(full_img, fg_img, mask_img):
    # Paste the original foreground back onto the generated image.
    full_img = np.array(full_img).astype(np.float32)
    fg_img = np.array(fg_img).astype(np.float32)
    mask_img = np.array(mask_img).astype(np.float32) / 255.0
    full_img = full_img * mask_img + fg_img * (1 - mask_img)
    return Image.fromarray(np.clip(full_img, 0, 255).astype(np.uint8))


controlnet = ControlNetModel.from_pretrained(
    "alimama-creative/EcomXL_controlnet_inpaint",
    use_safetensors=True,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
)
pipe.to("cuda")
pipe.scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

image = load_image(
    "https://huggingface.co/alimama-creative/EcomXL_controlnet_inpaint/resolve/main/images/inp_0.png"
)
mask = load_image(
    "https://huggingface.co/alimama-creative/EcomXL_controlnet_inpaint/resolve/main/images/inp_1.png"
)
mask = Image.fromarray(255 - np.array(mask))  # invert: white = region to inpaint
control_image = make_inpaint_condition(image, mask)

prompt = "a product on the table"
generator = torch.Generator(device="cuda").manual_seed(1234)

res_image = pipe(
    prompt,
    image=control_image,
    num_inference_steps=25,
    guidance_scale=7,
    width=1024,
    height=1024,
    controlnet_conditioning_scale=0.5,
    generator=generator,
).images[0]

res_image = add_fg(res_image, image, mask)
res_image.save("res.png")
```
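As a quick sanity check of the conditioning convention used above (a minimal sketch; the tiny 2×2 arrays are made up for illustration): `make_inpaint_condition` scales pixels to [0, 1] and sets masked pixels to -1.0, a value outside the valid color range, so the ControlNet can always tell masked from unmasked pixels.

```python
import numpy as np
import torch
from PIL import Image


def make_inpaint_condition(init_image, mask_image):
    # Same helper as in the pipeline example above.
    init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0
    mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0
    init_image[mask_image > 0.5] = -1.0
    init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(init_image)


# Tiny 2x2 white image; right column is masked (white in the mask).
img = Image.fromarray(np.full((2, 2, 3), 255, dtype=np.uint8))
mask = Image.fromarray(np.array([[0, 255], [0, 255]], dtype=np.uint8))

cond = make_inpaint_condition(img, mask)
print(tuple(cond.shape))  # (1, 3, 2, 2): batch, channels, height, width
print(cond[0, 0])         # left column 1.0 (kept), right column -1.0 (masked)
```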
The model performs best when the ControlNet weight (`controlnet_conditioning_scale`) is set to 0.5.
## Training details

The model was trained in two stages:

- Stage 1: trained for 20k steps on 12M images from laion2B and internal sources, with random masks.
- Stage 2: fine-tuned for 20k steps on 3M e-commerce images, with instance masks.

Other hyperparameters:

- Mixed precision: FP16
- Learning rate: 1e-4
- Batch size: 2048
- Noise offset: 0.05
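The noise offset refers to the common training trick of shifting the sampled noise by a small per-channel random constant, which helps diffusion models produce very dark and very bright images. A minimal sketch of how it is typically applied (the 0.05 value comes from the table above; the tensor shapes are illustrative, and this is not the published training code):

```python
import torch


def sample_noise_with_offset(latents, noise_offset=0.05):
    # Standard Gaussian noise, as in regular diffusion training.
    noise = torch.randn_like(latents)
    # Noise offset: add a small random constant per (batch, channel),
    # broadcast over the spatial dimensions.
    noise += noise_offset * torch.randn(
        latents.shape[0], latents.shape[1], 1, 1, device=latents.device
    )
    return noise


latents = torch.zeros(2, 4, 8, 8)  # dummy latent batch
noise = sample_noise_with_offset(latents)
print(tuple(noise.shape))  # (2, 4, 8, 8) -- same shape as the latents
```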