---
license: mit
tags:
- vision
- image-segmentation
widget:
- src: >-
    https://images.unsplash.com/photo-1643310325061-2beef64926a5?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Nnx8cmFjb29uc3xlbnwwfHwwfHw%3D&w=1000&q=80
  example_title: Person
- src: >-
    https://freerangestock.com/sample/139043/young-man-standing-and-leaning-on-car.jpg
  example_title: Person
datasets:
- mattmdjaga/human_parsing_dataset
pipeline_tag: image-segmentation
---
# Segformer B3 fine-tuned for clothes segmentation
SegFormer model fine-tuned on the ATR dataset for clothes segmentation; it can also be used for human segmentation. The dataset is available on Hugging Face as "mattmdjaga/human_parsing_dataset".
NEWS: Training code is available. Right now it only contains the plain code with comments, but a Colab notebook version and an accompanying blog post will be added soon to make it easier to use.
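The released training code is not reproduced on this card. As a rough, hedged illustration only (not the author's script), here is a minimal fine-tuning sketch following the standard Hugging Face SegFormer recipe; the starting checkpoint `nvidia/mit-b3`, the hyperparameters, and the assumption that `mattmdjaga/human_parsing_dataset` exposes `image` and `mask` columns are all assumptions, not details taken from this card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSemanticSegmentation, SegformerImageProcessor,
                          Trainer, TrainingArguments)

# Hypothetical fine-tuning sketch; not the released training code.
id2label = {0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes",
            5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe",
            10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg",
            14: "Left-arm", 15: "Right-arm", 16: "Bag", 17: "Scarf"}

train_ds = load_dataset("mattmdjaga/human_parsing_dataset", split="train")
processor = SegformerImageProcessor()
model = AutoModelForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b3",                       # assumed SegFormer-B3 encoder checkpoint
    num_labels=len(id2label),
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
)

def transforms(batch):
    # Resize/normalize images and pair them with their segmentation maps.
    images = [img.convert("RGB") for img in batch["image"]]   # assumed column name
    masks = list(batch["mask"])                               # assumed column name
    return processor(images=images, segmentation_maps=masks, return_tensors="pt")

train_ds.set_transform(transforms)

args = TrainingArguments(
    output_dir="segformer_b3_clothes",
    learning_rate=6e-5,                 # placeholder hyperparameters
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_strategy="epoch",
    remove_unused_columns=False,        # keep raw columns for the on-the-fly transform
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The inference example below loads the fine-tuned checkpoint that is published on the Hub.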
```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn

processor = SegformerImageProcessor.from_pretrained("sayeed99/segformer_b3_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("sayeed99/segformer_b3_clothes")

url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()

# Upsample the low-resolution logits back to the original image size (H, W).
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bilinear",
    align_corners=False,
)

# Per-pixel class prediction.
pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```
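Beyond visualizing the prediction, the class map can be turned into a binary mask for a single garment. A small illustrative follow-up (not part of the original card), using class id 4, which corresponds to "Upper-clothes" in the label list below:

```python
import numpy as np

# Binary mask for a single class (id 4 = "Upper-clothes" per the label list).
UPPER_CLOTHES_ID = 4
mask = (pred_seg == UPPER_CLOTHES_ID).numpy().astype(np.uint8) * 255
Image.fromarray(mask).save("upper_clothes_mask.png")
```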
Labels: 0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes", 5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe", 10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg", 14: "Left-arm", 15: "Right-arm", 16: "Bag", 17: "Scarf"
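Assuming the checkpoint's config carries this id2label mapping (which is how transformers normally stores it), the class names present in a prediction can also be recovered programmatically, continuing from the inference example above:

```python
# Print the names of the classes that actually appear in the predicted map.
present_ids = pred_seg.unique().tolist()
print({i: model.config.id2label[i] for i in present_ids})
```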
## Evaluation results

| Label Index | Label Name    | Category Accuracy | Category IoU |
|:-----------:|:-------------:|:-----------------:|:------------:|
| 0           | Background    | 0.99              | 0.99         |
| 1           | Hat           | 0.73              | 0.68         |
| 2           | Hair          | 0.91              | 0.82         |
| 3           | Sunglasses    | 0.73              | 0.63         |
| 4           | Upper-clothes | 0.87              | 0.78         |
| 5           | Skirt         | 0.76              | 0.65         |
| 6           | Pants         | 0.90              | 0.84         |
| 7           | Dress         | 0.74              | 0.55         |
| 8           | Belt          | 0.35              | 0.30         |
| 9           | Left-shoe     | 0.74              | 0.58         |
| 10          | Right-shoe    | 0.75              | 0.60         |
| 11          | Face          | 0.92              | 0.85         |
| 12          | Left-leg      | 0.90              | 0.82         |
| 13          | Right-leg     | 0.90              | 0.81         |
| 14          | Left-arm      | 0.86              | 0.74         |
| 15          | Right-arm     | 0.82              | 0.73         |
| 16          | Bag           | 0.91              | 0.84         |
| 17          | Scarf         | 0.63              | 0.29         |
Overall evaluation metrics:
- Evaluation loss: 0.15
- Mean accuracy: 0.80
- Mean IoU: 0.69
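The card does not include the evaluation script; per-class accuracy/IoU and mean scores of this kind are what the `mean_iou` metric from the `evaluate` library reports, so a hedged sketch of such an evaluation (with `ground_truth` standing in for a hypothetical ground-truth mask) might look like this:

```python
import evaluate
import numpy as np

# Hypothetical evaluation sketch using the `mean_iou` metric from `evaluate`.
metric = evaluate.load("mean_iou")
results = metric.compute(
    predictions=[pred_seg.numpy()],        # list of predicted 2-D label maps
    references=[np.array(ground_truth)],   # hypothetical ground-truth label maps
    num_labels=18,
    ignore_index=255,
)
print(results["mean_iou"], results["mean_accuracy"])
print(results["per_category_iou"])
```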
## License

The license for this model can be found here.
## BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author     = {Enze Xie and
                Wenhai Wang and
                Zhiding Yu and
                Anima Anandkumar and
                Jose M. Alvarez and
                Ping Luo},
  title      = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
                Transformers},
  journal    = {CoRR},
  volume     = {abs/2105.15203},
  year       = {2021},
  url        = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint     = {2105.15203},
  timestamp  = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```