🚀 Pix2Struct - Model card for the checkpoint fine-tuned on AI2D (scientific diagram VQA)
Pix2Struct is an image-encoder, text-decoder model trained on image-text pairs for a variety of tasks, including image captioning and visual question answering. This checkpoint has been fine-tuned for visual question answering, so it can handle language-understanding problems grounded in an image.
🚀 Quick start
Model overview
Pix2Struct is a pretrained image-to-text model for purely visual language understanding, which can be fine-tuned on tasks containing visually-situated language. It is pretrained by learning to parse masked screenshots of web pages into simplified HTML; the web's rich visual elements are cleanly reflected in the HTML structure, providing a large source of pretraining data well suited to the diversity of downstream tasks.
Running the model
In full precision, on CPU
You can run the model in full precision on CPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")

question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

inputs = processor(images=image, text=question, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
In full precision, on GPU
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base").to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")

question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

inputs = processor(images=image, text=question, return_tensors="pt").to("cuda")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
In half precision, on GPU
```python
import requests
from PIL import Image
import torch
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")

question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

inputs = processor(images=image, text=question, return_tensors="pt").to("cuda", torch.bfloat16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
Converting from T5x to Hugging Face
You can use the convert_pix2struct_checkpoint_to_pytorch.py script for the conversion:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```
If you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```
Once saved, you can push the converted model to the Hugging Face Hub with:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)

model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
✨ Key features
- Multilingual support: handles English, French, Romanian, German, and several other languages.
- Visual question answering: fine-tuned for VQA, so it answers questions grounded in the input image.
- Novel pretraining strategy: pretrained by parsing masked webpage screenshots into simplified HTML, which transfers well to a wide range of downstream tasks.
📦 Installation
The original documentation does not list installation steps; refer to the Hugging Face Transformers documentation for setup.
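As a minimal setup sketch, assuming a standard Python environment with pip (these package names cover the snippets in this card and are not taken from the original documentation):

```shell
pip install transformers pillow requests torch
```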
💻 Usage examples
Basic usage
Running the model in full precision on CPU:
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")

question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

inputs = processor(images=image, text=question, return_tensors="pt")

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
Advanced usage
Running the model in half precision on GPU:
```python
import requests
from PIL import Image
import torch
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-ai2d-base", torch_dtype=torch.bfloat16).to("cuda")
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-ai2d-base")

question = "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"

inputs = processor(images=image, text=question, return_tensors="pt").to("cuda", torch.bfloat16)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
>>> ash cloud
```
📚 Documentation
From the model summary:
Visually-situated language is ubiquitous: its sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Given this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be fine-tuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, and image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
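The variable-resolution input representation mentioned in the summary rescales each image, preserving its aspect ratio, so that as many fixed-size patches as possible fit within a patch budget. A rough sketch of that arithmetic, assuming 16x16 patches and a budget of 2048 (the function name and defaults are illustrative, not the library's API):

```python
import math

def patch_grid(width, height, patch=16, max_patches=2048):
    """Pick a scale that preserves aspect ratio while fitting the
    largest rows x cols grid of patch-sized tiles under max_patches."""
    scale = math.sqrt(max_patches * (patch / width) * (patch / height))
    rows = max(1, math.floor(scale * height / patch))
    cols = max(1, math.floor(scale * width / patch))
    return rows, cols

# A 2048x1024 image maps to 32 rows x 64 cols: 2048 patches, exactly the budget.
print(patch_grid(2048, 1024))
```

Because the grid adapts to each image instead of forcing a fixed square resize, wide diagrams and tall screenshots keep their layout, which the rendered-prompt input relies on.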
🔧 Technical details
The original documentation does not provide further technical details.
📄 License
This model is released under the Apache 2.0 license.
👥 Contributors
The model was originally contributed by Kenton Lee, Mandar Joshi et al., and added to the Hugging Face ecosystem by Younes Belkada.
📖 Citation
If you want to cite this work, please cite the original paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.03347,
  doi = {10.48550/ARXIV.2210.03347},
  url = {https://arxiv.org/abs/2210.03347},
  author = {Lee, Kenton and Joshi, Mandar and Turc, Iulia and Hu, Hexiang and Liu, Fangyu and Eisenschlos, Julian and Khandelwal, Urvashi and Shaw, Peter and Chang, Ming-Wei and Toutanova, Kristina},
  keywords = {Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```