---
base_model:
- Qwen/Qwen2.5-7B-Instruct
datasets:
- THUdyh/Oryx-SFT-Data
language:
- en
- zh
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: oryx
---
# Oryx-1.5-7B

## Model Summary
The Oryx-1.5 series comprises 7B and 32B models built on the Qwen2.5 language models, trained on Oryx-SFT-Data with a context window of 32K tokens.
Oryx offers an on-demand solution for efficiently processing visual inputs of arbitrary spatial size and temporal length.

- Repository: https://github.com/Oryx-mllm/Oryx
- Project page: https://oryx-mllm.github.io
- Languages: English, Chinese
- Paper: https://arxiv.org/abs/2409.12961
## Usage

We provide a minimal inference example below; see the GitHub repository for more details.

First, import the dependencies and define a helper that decodes a video and samples at most `max_frames_num` frames:
```python
from oryx.model.builder import load_pretrained_model
from oryx.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from oryx.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from oryx.conversation import conv_templates, SeparatorStyle
from PIL import Image
import requests
import copy
import torch
import sys
import warnings
from decord import VideoReader, cpu
import numpy as np


def load_video(video_path, max_frames_num, fps=1, force_sample=False):
    """Decode a video and sample at most `max_frames_num` frames from it."""
    if max_frames_num == 0:
        return np.zeros((1, 336, 336, 3))
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    video_time = total_frame_num / vr.get_avg_fps()
    # Convert the target sampling rate into a frame stride.
    fps = round(vr.get_avg_fps() / fps)
    frame_idx = [i for i in range(0, len(vr), fps)]
    frame_time = [i / fps for i in frame_idx]
    # Fall back to uniform sampling when the clip yields too many frames
    # (or when uniform sampling is explicitly requested).
    if len(frame_idx) > max_frames_num or force_sample:
        sample_fps = max_frames_num
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, sample_fps, dtype=int)
        frame_idx = uniform_sampled_frames.tolist()
        frame_time = [i / vr.get_avg_fps() for i in frame_idx]
    frame_time = ",".join([f"{i:.2f}s" for i in frame_time])
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return spare_frames, frame_time, video_time
```
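The stride arithmetic above is easy to sanity-check in isolation. The snippet below is purely illustrative (the clip length and frame rate are made-up numbers, not from the Oryx repository) and replays the index math without needing a video file:

```python
import numpy as np

# Hypothetical clip: 300 frames recorded at 30 fps, sampled at a target of 1 fps.
avg_fps, total_frames, target_fps, max_frames = 30.0, 300, 1, 64
stride = round(avg_fps / target_fps)                  # 30
frame_idx = list(range(0, total_frames, stride))      # [0, 30, ..., 270]
if len(frame_idx) > max_frames:                       # 10 <= 64, so not triggered here
    frame_idx = np.linspace(0, total_frames - 1, max_frames, dtype=int).tolist()
print(len(frame_idx), frame_idx[:4])                  # 10 [0, 30, 60, 90]
```

With `force_sample=True`, as used below, the uniform-sampling branch always runs, so exactly `max_frames_num` frames are returned regardless of clip length.

Next, load the model and preprocess the video: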
```python
pretrained = "THUdyh/Oryx-7B"
model_name = "oryx_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)
model.eval()

video_path = ""  # set to your local video file
max_frames_num = 64  # must be an int, not a string
video, frame_time, video_time = load_video(video_path, max_frames_num, fps=1, force_sample=True)
video = image_processor.preprocess(video, return_tensors="pt")["pixel_values"].cuda().bfloat16()
video = [video]
video_data = (video, video)
input_data = (video_data, (384, 384), "video")
```
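Note that `video_data` holds the same preprocessed tensor twice: `model.generate` below consumes a standard-resolution stream (`images`) and a high-resolution stream (`images_highres`), and this minimal example simply feeds the same frames to both.

Finally, build the conversation prompt and run generation: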
```python
conv_template = "qwen_1_5"
question = DEFAULT_IMAGE_TOKEN + "\nPlease describe this video in detail."
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
output_ids = model.generate(
    inputs=input_ids,
    images=input_data[0][0],
    images_highres=input_data[0][1],
    modalities=["video"],
    do_sample=False,
    temperature=0,
    max_new_tokens=128,
    use_cache=True,
)
text_outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(text_outputs)
```
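`load_video` also returns `frame_time` and `video_time`, which the minimal example above discards. One way to surface them to the model is to prefix a time instruction to the question, a pattern common in LLaVA-style pipelines; the exact prompt wording below is an assumption, not a format prescribed by the Oryx repository:

```python
# Assumption: timing info injected as plain text before the question; the
# Oryx repo does not prescribe this exact format. Assumes the first
# dimension of the preprocessed tensor is the frame count.
time_instruction = (
    f"The video lasts for {video_time:.2f} seconds, and {len(video[0])} frames "
    f"are uniformly sampled at {frame_time}."
)
question = DEFAULT_IMAGE_TOKEN + f"\n{time_instruction}\nPlease describe this video in detail."
# Rebuild the conversation with this question and rerun generate as above.
```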
## Performance

Oryx-1.5-7B is evaluated on general video benchmarks, long-form video understanding, general image benchmarks, and 3D spatial understanding; see the paper and project page for the full result tables.
## Model Architecture

- Architecture: pretrained Oryx-ViT + Qwen2.5-7B
- Data: a mixture of 1.2M image and video samples
- Precision: BFloat16
## Hardware & Software

- Hardware: 64 × NVIDIA Tesla A100
- Training framework: HuggingFace Trainer
- Codebase: PyTorch
## Citation

```bibtex
@article{liu2024oryx,
  title={Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution},
  author={Liu, Zuyan and Dong, Yuhao and Liu, Ziwei and Hu, Winston and Lu, Jiwen and Rao, Yongming},
  journal={arXiv preprint arXiv:2409.12961},
  year={2024}
}
```