license: mit
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/ade20k.jpeg
  example_title: House
- src: https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/demo_2.jpg
  example_title: Airplane
- src: https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/coco.jpeg
  example_title: Person
OneFormer
OneFormer model trained on the ADE20k dataset (tiny-sized version, Swin backbone). It was introduced in the paper "OneFormer: One Transformer to Rule Universal Image Segmentation" by Jain et al. and first released in this repository.

Model description
OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once, with a single universal architecture, a single model, and a single dataset, to outperform existing specialized models on semantic, instance, and panoptic segmentation. OneFormer uses a task token to condition the model on the task at hand, making the architecture task-guided during training and task-dynamic at inference, all with a single model.
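
As a minimal sketch of this task-dynamic behavior (an illustration only, reusing the checkpoint name and transformers API from the usage section below; the loop is not from the original card), the same weights serve all three tasks, switched purely by the task token:

from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests

# "resolve" URL serves the raw image bytes (illustrative choice)
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

# One model, three tasks: only the task token passed via `task_inputs` changes
for task in ["semantic", "instance", "panoptic"]:
    inputs = processor(images=image, task_inputs=[task], return_tensors="pt")
    outputs = model(**inputs)

Per-task post-processing into actual segmentation maps is shown in the usage section below.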

Intended uses & limitations
You can use this particular checkpoint for semantic, instance, and panoptic segmentation. See the model hub for versions fine-tuned on other datasets.
How to use
Here is how to use this model:
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests

# Load a demo image ("resolve" serves the raw file; a "blob" URL would return an HTML page)
url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Load the OneFormer processor and model trained on ADE20k (tiny, Swin backbone)
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

# Semantic segmentation: post-process into a (height, width) map of class ids
semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
semantic_outputs = model(**semantic_inputs)
predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0]

# Instance segmentation
instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
instance_outputs = model(**instance_inputs)
predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]

# Panoptic segmentation
panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
panoptic_outputs = model(**panoptic_inputs)
predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"]
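
The post-processed outputs are plain (height, width) tensors of ids, so they can be inspected or visualized directly. A minimal sketch (assuming matplotlib is installed; it is not a dependency of this card):

import matplotlib.pyplot as plt

# Each pixel holds a class id (semantic) or a segment id (instance/panoptic)
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()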
For more examples, please refer to the documentation.
Citation
@article{jain2022oneformer,
  title={{OneFormer: One Transformer to Rule Universal Image Segmentation}},
  author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi},
  journal={arXiv},
  year={2022}
}