---
tags:
- clip
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: zero-shot-image-classification
---
# FG-CLIP: Fine-Grained Visual and Textual Alignment

Chunyu Xie*, Bin Wang*, Fanjing Kong, Jincheng Li, Dawei Liang, Gengshen Zhang, Dawei Leng†, Yuhui Yin (*equal contribution, †corresponding author)



## Model Framework

FG-CLIP is trained in two stages: the first stage uses global-level caption-image pairs to achieve initial fine-grained alignment, while the second stage adds region-level captions, including detailed region descriptions and positive/negative region phrases, to further refine the alignment.
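Conceptually, both stages optimize a CLIP-style symmetric contrastive (InfoNCE) objective over matched image/text pairs; the second stage extends the pool with region-level pairs. The NumPy sketch below illustrates that objective with random toy features; the function name, temperature value, and equal loss weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(img, txt, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over matched rows of img and txt."""
    img = img / np.linalg.norm(img, axis=-1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=-1, keepdims=True)
    logits = img @ txt.T / temperature            # (N, N) scaled cosine similarities
    idx = np.arange(len(img))                     # i-th image matches i-th caption

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[idx, idx]).mean()        # cross-entropy on the diagonal

    # average of image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
# Toy features standing in for global (stage 1) and region (stage 2) pairs
global_loss = info_nce(rng.normal(size=(8, 64)), rng.normal(size=(8, 64)))
region_loss = info_nce(rng.normal(size=(8, 64)), rng.normal(size=(8, 64)))
total_loss = global_loss + region_loss
print(total_loss)
```

Perfectly aligned pairs (identical image and text features) drive this loss toward zero, which is why fine-grained region pairs sharpen the alignment beyond what global captions alone achieve.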
## Quick Start 🤗

### Load Model
```python
import torch
from PIL import Image
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    AutoModelForCausalLM,
)

model_root = "qihoo360/fg-clip-base"
image_size = 224
model = AutoModelForCausalLM.from_pretrained(model_root, trust_remote_code=True).cuda()
device = model.device

tokenizer = AutoTokenizer.from_pretrained(model_root)
image_processor = AutoImageProcessor.from_pretrained(model_root)
```
### Retrieval
```python
img_root = "FG-CLIP/use_imgs/cat_dfclor.jpg"
image = Image.open(img_root).convert("RGB")
image = image.resize((image_size, image_size))
image_input = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(device)

# NOTE Short captions: max_length=77 && walk_short_pos=True
walk_short_pos = True
captions = ["a photo of a cat", "a photo of a dog"]
caption_input = torch.tensor(tokenizer(captions, max_length=77, padding="max_length", truncation=True).input_ids, dtype=torch.long, device=device)
# NOTE Long captions: max_length=248 && walk_short_pos=False
# ......

with torch.no_grad():
    image_feature = model.get_image_features(image_input)
    text_feature = model.get_text_features(caption_input, walk_short_pos=walk_short_pos)
    image_feature = image_feature / image_feature.norm(p=2, dim=-1, keepdim=True)
    text_feature = text_feature / text_feature.norm(p=2, dim=-1, keepdim=True)

logits_per_image = image_feature @ text_feature.T
logits_per_image = model.logit_scale.exp() * logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
# [[9.9997e-01, 3.3485e-05]]
```
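The last few lines above follow standard CLIP inference: L2-normalize both features, take cosine similarities, scale by the learned temperature (`model.logit_scale.exp()`), and apply a softmax over captions. The same arithmetic can be checked standalone; the sketch below uses made-up 3-d features and an assumed logit scale of 100, not values from the model.

```python
import numpy as np

def clip_probs(image_feat, text_feats, logit_scale=100.0):
    """Scaled cosine similarity between one image and N texts -> softmax probs."""
    image_feat = image_feat / np.linalg.norm(image_feat)
    text_feats = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    logits = logit_scale * (text_feats @ image_feat)  # scaled cosine similarities
    logits -= logits.max()                            # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy 3-d features: the image is far closer to the first caption.
probs = clip_probs(np.array([1.0, 0.1, 0.0]),
                   np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
print(probs)  # probs[0] is very close to 1.0
```

The large logit scale is what turns modest cosine-similarity gaps into near-one-hot probabilities, mirroring the `[[9.9997e-01, 3.3485e-05]]` output above.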
### Dense Feature Visualization
```python
import math
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

img_root = "FG-CLIP/use_imgs/cat_dfclor.jpg"
image = Image.open(img_root).convert("RGB")
image = image.resize((image_size, image_size))
image_input = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(device)

with torch.no_grad():
    dense_image_feature = model.get_image_dense_features(image_input)
    captions = ["white cat"]
    caption_input = torch.tensor(tokenizer(captions, max_length=77, padding="max_length", truncation=True).input_ids, dtype=torch.long, device=device)
    text_feature = model.get_text_features(caption_input, walk_short_pos=True)
    text_feature = text_feature / text_feature.norm(p=2, dim=-1, keepdim=True)
    dense_image_feature = dense_image_feature / dense_image_feature.norm(p=2, dim=-1, keepdim=True)

similarity = dense_image_feature.squeeze() @ text_feature.squeeze().T
similarity = similarity.cpu().numpy()
patch_size = int(math.sqrt(similarity.shape[0]))

show_image = similarity.reshape((patch_size, patch_size))

plt.figure(figsize=(6, 6))
plt.imshow(show_image)
plt.title('Similarity Visualization')
plt.axis('off')
plt.savefig("FG-CLIP/use_imgs/FGCLIP_dfcolor_cat.png")
```
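The `int(math.sqrt(...))` reshape above assumes the dense features form a square patch grid. For a 224×224 input with a ViT-style 16-pixel patch (an assumption for illustration, not confirmed for this checkpoint), that gives 14×14 = 196 patch tokens, each contributing one similarity score:

```python
import math
import numpy as np

image_size, patch_px = 224, 16                 # assumed ViT-B/16-style patching
num_patches = (image_size // patch_px) ** 2    # 14 * 14 = 196 patch tokens

similarity = np.random.rand(num_patches)       # one score per patch (stand-in values)
side = int(math.sqrt(num_patches))             # recover the grid side length: 14
heatmap = similarity.reshape(side, side)       # 2-D map ready for plt.imshow
print(heatmap.shape)  # (14, 14)
```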
## Citation

If you find FG-CLIP helpful for your research and applications, please cite it using the following BibTeX:
```bibtex
@article{xie2025fgclip,
  title={FG-CLIP: Fine-Grained Visual and Textual Alignment},
  author={Chunyu Xie and Bin Wang and Fanjing Kong and Jincheng Li and Dawei Liang and Gengshen Zhang and Dawei Leng and Yuhui Yin},
  year={2025},
  eprint={2505.05071},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.05071},
}
```
## License

Some of the datasets and checkpoints used in this project are subject to their original licenses. Users must comply with all terms and conditions of those original licenses.
The content of this project itself is licensed under the Apache License 2.0.