---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
tags:
- video
- video-understanding
- vision
- multimodal
- conversational
- qwen
- custom_code
- instruction-tuned
datasets:
- ApolloBench
- Video-MME
- MLVU
- LongVideoBench
- NExTQA
- PerceptionTest
inference: true
pipeline_tag: video-text-to-text
---
# Apollo: An Exploration of Video Understanding in Large Multimodal Models
Apollo is a family of Large Multimodal Models (LMMs) that push the frontier of video understanding, supporting tasks such as:
- long-form video content understanding
- temporal reasoning
- complex video question answering
- multi-turn conversation grounded in video content

Apollo models excel at processing videos up to an hour long, balancing speed and accuracy through deliberate design choices. With only 3B parameters, our models outperform most 7B competitors and even rival 30B-scale models.
Key highlights:
- Scaling Consistency: design decisions validated on smaller models and datasets transfer effectively to larger scales, cutting compute and experimentation cost
- Efficient video sampling: fps sampling combined with advanced token-resampling strategies (e.g., Perceiver) yields stronger temporal perception; see the sketch after this list
- Encoder synergy: pairing SigLIP-SO400M (image) with InternVideo2 (video) produces robust representations that outperform single encoders on temporal tasks
- ApolloBench: a streamlined evaluation suite (41x faster) focused on genuine video-understanding ability
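To make fps sampling concrete: instead of taking a fixed number of frames per video, frames are drawn at a fixed rate, so longer videos contribute proportionally more frames and temporal density is preserved. Below is a minimal, self-contained sketch of the idea; the `sample_frame_indices` helper and its signature are illustrative assumptions, not part of the Apollo API (Apollo's own sampling is handled internally by `ApolloMMLoader`).

```python
# Illustrative fps sampling (hypothetical helper, not Apollo's API):
# choose frame indices at a fixed rate `sample_fps` rather than a fixed
# frame count, so a 2x longer video yields 2x as many frames.

def sample_frame_indices(num_frames: int, video_fps: float, sample_fps: float) -> list[int]:
    """Indices of frames sampled at `sample_fps` from a video with
    `num_frames` frames recorded at `video_fps`."""
    step = video_fps / sample_fps  # source frames between samples
    indices, t = [], 0.0
    while round(t) < num_frames:
        indices.append(round(t))
        t += step
    return indices

# A 10 s clip at 30 fps, sampled at 2 fps -> 20 frame indices
print(sample_frame_indices(num_frames=300, video_fps=30.0, sample_fps=2.0))
```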
## Quick Start

Installation:

```bash
# run from the root of the Apollo repository
pip install -e .
pip install flash-attn --no-build-isolation
```
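Because flash-attn build problems often surface only at model load time, a quick import check can save a debugging round-trip. A minimal sanity check (assuming a CUDA-enabled PyTorch build, which flash-attn requires):

```python
# Sanity check that the dependencies installed above actually import;
# flash-attn requires a CUDA-enabled PyTorch build.
import torch
import flash_attn  # raises ImportError here if the build above failed

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)
```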
Inference example:

```python
import torch
from transformers import AutoModelForCausalLM
from huggingface_hub import snapshot_download

from apollo.mm_utils import (
    KeywordsStoppingCriteria,
    tokenizer_mm_token,
    ApolloMMLoader,
)
from apollo.conversations import conv_templates, SeparatorStyle

# Download the checkpoint from the Hugging Face Hub
model_url = "Apollo-LMMs/Apollo-3B-t32"
model_path = snapshot_download(model_url, repo_type="model")

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,   # Apollo ships custom modeling code
    low_cpu_mem_usage=True,
).to(device=device, dtype=torch.bfloat16)

tokenizer = model.tokenizer
vision_processors = model.vision_tower.vision_processor
config = model.config
# Number of tokens the multimodal connector emits per clip
num_repeat_token = config.mm_connector_cfg['num_output_tokens']

mm_processor = ApolloMMLoader(
    vision_processors,
    config.clip_duration,
    frames_per_clip=4,
    clip_sampling_ratio=0.65,
    model_max_length=config.model_max_length,
    device=device,
    num_repeat_token=num_repeat_token,
)

video_path = "path/to/video.mp4"
question = "Describe this video in detail"
# load_video returns the processed clips plus the placeholder string that
# marks where the video tokens are spliced into the prompt
mm_data, replace_string = mm_processor.load_video(video_path)

# Build the chat prompt with the Qwen2 conversation template
conv = conv_templates["qwen_2"].copy()
conv.append_message(conv.roles[0], replace_string + "\n\n" + question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_mm_token(prompt, tokenizer, return_tensors="pt").unsqueeze(0).to(device)

# Stop generation at the template's separator token
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        vision_input=[mm_data],
        data_types=['video'],
        do_sample=True,
        temperature=0.4,
        max_new_tokens=256,
        top_p=0.7,
        use_cache=True,
        num_beams=1,
        stopping_criteria=[stopping_criteria],
    )
pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(pred)
```
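Since Apollo supports multi-turn dialogue over a video, a follow-up question can reuse the same conversation object and the already-processed video tokens. The continuation below is a sketch under the assumption that the template stores turns as mutable `[role, text]` pairs (the LLaVA-style convention); the follow-up question string is just an example.

```python
# Hypothetical multi-turn continuation, reusing objects from the example
# above. Assumes LLaVA-style conversation state: the last history entry is
# the assistant slot we appended as None.
conv.messages[-1][-1] = pred  # record the first answer in the history
conv.append_message(conv.roles[0], "How does the video end?")
conv.append_message(conv.roles[1], None)

input_ids = tokenizer_mm_token(conv.get_prompt(), tokenizer, return_tensors="pt").unsqueeze(0).to(device)
stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        vision_input=[mm_data],   # reuse the processed video tokens
        data_types=['video'],
        do_sample=True,
        temperature=0.4,
        max_new_tokens=256,
        top_p=0.7,
        use_cache=True,
        stopping_criteria=[stopping_criteria],
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip())
```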
## Citation

If you find this project helpful, please consider citing:

```bibtex
@article{zohar2024apollo,
  title={Apollo: An Exploration of Video Understanding in Large Multimodal Models},
  author={Zohar, Orr and Wang, Xiaohan and Dubois, Yann and Mehta, Nikhil and Xiao, Tong and Hansen-Estruch, Philippe and Yu, Licheng and Wang, Xiaofang and Juefei-Xu, Felix and Zhang, Ning and Yeung-Levy, Serena and Xia, Xide},
  journal={arXiv preprint arXiv:2412.10360},
  year={2024}
}
```
For more details, please visit the project website or refer to the paper.