---
tags:
- mmeb
- transformers
- sentence-transformers
language:
- en
- ar
- zh
- ko
- ru
- pl
- tr
- fr
library_name: transformers
license: mit
pipeline_tag: zero-shot-image-classification
---
# mmE5-mllama-11b-instruct

[mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://arxiv.org/abs/2502.08468). Haonan Chen, Liang Wang, Nan Yang, Yutao Zhu, Ziliang Zhao, Furu Wei, Zhicheng Dou, arXiv 2025

This model is trained on top of Llama-3.2-11B-Vision.

[GitHub](https://github.com/haon-chen/mmE5)

## Train/Eval Data
- Train data: https://huggingface.co/datasets/intfloat/mmE5-MMEB-hardneg, https://huggingface.co/datasets/intfloat/mmE5-synthetic
- Eval data: https://huggingface.co/datasets/TIGER-Lab/MMEB-eval, https://huggingface.co/datasets/Haon-Chen/XTD-10

## Experimental Results
Our model achieves state-of-the-art performance on the MMEB benchmark.

## Usage

### Transformers

Below is an example we adapted from VLM2Vec.
```python
import torch
import requests
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

# Pooling and normalization: take the hidden state of the last non-padding token
def last_pooling(last_hidden_state, attention_mask, normalize=True):
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_state.shape[0]
    reps = last_hidden_state[torch.arange(batch_size, device=last_hidden_state.device), sequence_lengths]
    if normalize:
        reps = torch.nn.functional.normalize(reps, p=2, dim=-1)
    return reps

def compute_similarity(q_reps, p_reps):
    return torch.matmul(q_reps, p_reps.transpose(0, 1))

model_name = "intfloat/mmE5-mllama-11b-instruct"

# Load processor and model
processor = AutoProcessor.from_pretrained(model_name)
model = MllamaForConditionalGeneration.from_pretrained(
    model_name, torch_dtype=torch.bfloat16
).to("cuda")
model.eval()

# Image + Text -> Text
image = Image.open(requests.get('https://github.com/haon-chen/mmE5/blob/main/figures/example.jpg?raw=true', stream=True).raw)
inputs = processor(text='<|image|><|begin_of_text|>Represent the given image with the following question: What is in the image\n', images=[image], return_tensors="pt").to("cuda")
qry_output = last_pooling(model(**inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], inputs['attention_mask'])

string = 'A cat and a dog'
text_inputs = processor(text=string, return_tensors="pt").to("cuda")
tgt_output = last_pooling(model(**text_inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], text_inputs['attention_mask'])
print(string, '=', compute_similarity(qry_output, tgt_output))

string = 'A cat and a tiger'
text_inputs = processor(text=string, return_tensors="pt").to("cuda")
tgt_output = last_pooling(model(**text_inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], text_inputs['attention_mask'])
print(string, '=', compute_similarity(qry_output, tgt_output))

# Text -> Image
inputs = processor(text='Find me an everyday image that matches the given caption: A cat and a dog.\n', return_tensors="pt").to("cuda")
qry_output = last_pooling(model(**inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], inputs['attention_mask'])

string = '<|image|><|begin_of_text|>Represent the given image.\n'
tgt_inputs = processor(text=string, images=[image], return_tensors="pt").to("cuda")
tgt_output = last_pooling(model(**tgt_inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], tgt_inputs['attention_mask'])
print(string, '=', compute_similarity(qry_output, tgt_output))

inputs = processor(text='Find me an everyday image that matches the given caption: A cat and a tiger.\n', return_tensors="pt").to("cuda")
qry_output = last_pooling(model(**inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], inputs['attention_mask'])

string = '<|image|><|begin_of_text|>Represent the given image.\n'
tgt_inputs = processor(text=string, images=[image], return_tensors="pt").to("cuda")
tgt_output = last_pooling(model(**tgt_inputs, return_dict=True, output_hidden_states=True).hidden_states[-1], tgt_inputs['attention_mask'])
print(string, '=', compute_similarity(qry_output, tgt_output))
```
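The `last_pooling` helper above reads each sequence's embedding from the hidden state of its final non-padding token, using the attention mask to locate that position. A minimal sketch with dummy tensors (shapes and values chosen purely for illustration, not taken from the model):

```python
import torch
import torch.nn.functional as F

def last_pooling(last_hidden_state, attention_mask, normalize=True):
    # Index of the last non-padding token in each sequence
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_state.shape[0]
    reps = last_hidden_state[torch.arange(batch_size), sequence_lengths]
    if normalize:
        reps = F.normalize(reps, p=2, dim=-1)
    return reps

# Batch of 2 sequences, max length 3, hidden size 4.
# Sequence 0 uses all 3 positions; sequence 1 is padded after position 1.
hidden = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
mask = torch.tensor([[1, 1, 1], [1, 1, 0]])

reps = last_pooling(hidden, mask)
# reps[0] is hidden[0, 2] L2-normalized; reps[1] is hidden[1, 1] L2-normalized.
```

Because the returned embeddings are unit-normalized, the dot product computed by `compute_similarity` is exactly cosine similarity.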
### Sentence Transformers

You can also use Sentence Transformers, where most of the pre- and post-processing has been abstracted away.
```python
from sentence_transformers import SentenceTransformer
import requests

model = SentenceTransformer("intfloat/mmE5-mllama-11b-instruct", trust_remote_code=True)

# Download an example image and save it locally
dog_cat_image_bytes = requests.get('https://github.com/haon-chen/mmE5/blob/main/figures/example.jpg?raw=true', stream=True).raw.read()
with open("cat_dog_example.jpg", "wb") as f:
    f.write(dog_cat_image_bytes)

# Image + Text -> Text
image_embeddings = model.encode([{
    "image": "cat_dog_example.jpg",
    "text": "Represent the given image with the following question: What is in the image",
}])
text_embeddings = model.encode([
    {"text": "A cat and a dog"},
    {"text": "A cat and a tiger"},
])

similarity = model.similarity(image_embeddings, text_embeddings)
print(similarity)

# Text -> Image
image_embeddings = model.encode([
    {"image": dog_cat_image_bytes, "text": "Represent the given image."},
])
text_embeddings = model.encode([
    {"text": "Find me an everyday image that matches the given caption: A cat and a dog."},
    {"text": "Find me an everyday image that matches the given caption: A cat and a tiger."},
])

similarity = model.similarity(image_embeddings, text_embeddings)
print(similarity)
```
## Citation

```
@article{chen2025mmE5,
  title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},
  author={Chen, Haonan and Wang, Liang and Yang, Nan and Zhu, Yutao and Zhao, Ziliang and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2502.08468},
  year={2025}
}
```