license: mit
library_name: timm
tags:
Model card for eva02_large_patch14_224.mim_m38m
An EVA02 feature / representation model. Pretrained by the paper authors on Merged-38M (IN-22K, CC12M, CC3M, COCO (train), ADE20K (train), Object365, and OpenImages) via masked image modeling, using EVA-CLIP as the MIM teacher.

EVA-02 models are vision transformers with mean pooling, SwiGLU activations, Rotary position embeddings (ROPE), and an additional layer norm in the MLP (for the Base and Large variants).

NOTE: `timm` checkpoints are float32 for consistency with other models. The original checkpoints are float16 or bfloat16 in some cases; refer to the originals if that is preferred.
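If reduced precision is preferred, a model created through `timm` can simply be cast after loading. A minimal sketch of the cast itself, shown on a stand-in `nn.Linear` module rather than the full 300M-parameter model:

```python
import torch
import torch.nn as nn

# Any float32 module can be cast to half precision after loading;
# a small Linear layer stands in for the full EVA02 model here.
model = nn.Linear(8, 4)
assert next(model.parameters()).dtype == torch.float32

model = model.half()  # or model.to(torch.bfloat16)
print(next(model.parameters()).dtype)  # torch.float16
```

Whether half precision is appropriate depends on your hardware and on whether downstream code tolerates reduced-precision activations.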
Model Details
- Model Type: Image classification / feature backbone
- Model Stats:
  - Params (M): 303.3
  - GMACs: 81.1
  - Activations (M): 97.2
  - Image size: 224 x 224
- Papers:
  - EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
  - EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- Original:
  - https://github.com/baaivision/EVA
  - https://huggingface.co/Yuxin-CV/EVA-02
- Pretrain Dataset: Merged-38M (IN-22K, CC12M, CC3M, COCO, ADE20K, Object365, OpenImages)
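The stats above follow from the patch-grid geometry encoded in the model name (`patch14`, `224`). A quick sanity check of the token count, assuming a single class token as in standard ViTs:

```python
# Patch-grid arithmetic for eva02_large_patch14_224:
# a 224x224 image split into 14x14 patches yields a 16x16 grid.
image_size = 224
patch_size = 14

grid = image_size // patch_size   # 16 patches per side
num_patches = grid * grid         # 256 patch tokens
seq_len = num_patches + 1         # +1 for the class token

print(grid, num_patches, seq_len)  # 16 256 257
```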
Model Usage
Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('eva02_large_patch14_224.mim_m38m', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
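The last line turns logits into percentage probabilities and keeps the five most likely classes. The same pattern on a dummy logits tensor, so the shapes are visible without downloading the model:

```python
import torch

# Dummy logits for a batch of 1 over 1000 classes (stand-in for model output)
logits = torch.randn(1, 1000)

probs = logits.softmax(dim=1) * 100  # percentages; each row sums to ~100
top5_probabilities, top5_class_indices = torch.topk(probs, k=5)

print(top5_probabilities.shape)  # torch.Size([1, 5])
print(top5_class_indices.shape)  # torch.Size([1, 5])
```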
Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'eva02_large_patch14_224.mim_m38m',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
Model Comparison
Explore this model's dataset and runtime metrics in the timm model results.
| Model | Top-1 Acc. | Top-5 Acc. | Params (M) | Image size |
|-------|------------|------------|------------|------------|
| eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | 90.054 | 99.042 | 305.08 | 448 |
| eva02_large_patch14_448.mim_in22k_ft_in22k_in1k | 89.946 | 99.01 | 305.08 | 448 |
| eva_giant_patch14_560.m30m_ft_in22k_in1k | 89.792 | 98.992 | 1014.45 | 560 |
| eva02_large_patch14_448.mim_in22k_ft_in1k | 89.626 | 98.954 | 305.08 | 448 |
| eva02_large_patch14_448.mim_m38m_ft_in1k | 89.57 | 98.918 | 305.08 | 448 |
| eva_giant_patch14_336.m30m_ft_in22k_in1k | 89.56 | 98.956 | 1013.01 | 336 |
| eva_giant_patch14_336.clip_ft_in1k | 89.466 | 98.82 | 1013.01 | 336 |
| eva_large_patch14_336.in22k_ft_in22k_in1k | 89.214 | 98.854 | 304.53 | 336 |
| eva_giant_patch14_224.clip_ft_in1k | 88.882 | 98.678 | 1012.56 | 224 |
| eva02_base_patch14_448.mim_in22k_ft_in22k_in1k | 88.692 | 98.722 | 87.12 | 448 |
| eva_large_patch14_336.in22k_ft_in1k | 88.652 | 98.722 | 304.53 | 336 |
| eva_large_patch14_196.in22k_ft_in22k_in1k | 88.592 | 98.656 | 304.14 | 196 |
| eva02_base_patch14_448.mim_in22k_ft_in1k | 88.23 | 98.564 | 87.12 | 448 |
| eva_large_patch14_196.in22k_ft_in1k | 87.934 | 98.504 | 304.14 | 196 |
| eva02_small_patch14_336.mim_in22k_ft_in1k | 85.74 | 97.614 | 22.13 | 336 |
| eva02_tiny_patch14_336.mim_in22k_ft_in1k | 80.658 | 95.524 | 5.76 | 336 |
Citation

```bibtex
@article{EVA02,
  title={EVA-02: A Visual Representation for Neon Genesis},
  author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
  journal={arXiv preprint arXiv:2303.11331},
  year={2023}
}
```

```bibtex
@article{EVA-CLIP,
  title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
  author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
  journal={arXiv preprint arXiv:2303.15389},
  year={2023}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```