license: apache-2.0
tags:
SlimSAM: 0.1% Data Makes Segment Anything Slim
Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang
Learning and Vision Lab, National University of Singapore
Paper: [Arxiv]
Code: [GitHub]
Introduction
SlimSAM is a novel SAM compression method that efficiently reuses the pre-trained SAM through a unified pruning-distillation framework, avoiding costly retraining from scratch. To better inherit the knowledge of the original SAM, an alternate slimming strategy decomposes the compression process into progressive steps. Unlike conventional pruning techniques, the method alternately prunes and distills the decoupled model structures. Furthermore, a label-free pruning criterion is proposed to align the pruning objective with the optimization target, which improves distillation after pruning.
Compared with the original SAM-H, SlimSAM achieves near-original performance while reducing the parameter count to 0.9% (5.7M) and the MACs to 0.8% (21G), using only 0.1% (10k) of the training data. Extensive experiments further show that the method significantly outperforms other SAM compression methods while using less than 1/10 of their training data.
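As a quick sanity check on the reported ratios, the figures quoted above imply the approximate size of the original SAM-H (a minimal sketch using only the numbers in this card; the absolute SAM-H sizes are derived, not taken from the paper):

```python
# Reported SlimSAM figures: 5.7M parameters at 0.9% of SAM-H,
# and 21G MACs at 0.8% of SAM-H.
slim_params, param_ratio = 5.7e6, 0.009
slim_macs, macs_ratio = 21e9, 0.008

# Size of the original SAM-H implied by these ratios.
implied_sam_params = slim_params / param_ratio  # roughly 633M parameters
implied_sam_macs = slim_macs / macs_ratio       # roughly 2.6T MACs

print(f"Implied SAM-H parameters: {implied_sam_params / 1e6:.0f}M")
print(f"Implied SAM-H MACs: {implied_sam_macs / 1e9:.0f}G")
```

Both implied values are consistent in scale with the published SAM ViT-H model, so the headline percentages check out.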
Model Usage
Quickly load the uniformly pruned SlimSAM checkpoint (Zigeng/SlimSAM-uniform-77) with Hugging Face Transformers:
import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

model = SamModel.from_pretrained("Zigeng/SlimSAM-uniform-77").to("cuda")
processor = SamProcessor.from_pretrained("Zigeng/SlimSAM-uniform-77")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D point prompt on the object of interest
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted masks back to the original image resolution
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
Citing SlimSAM
If you use SlimSAM in your research, please cite it with the following BibTeX entry:
@misc{chen202301,
title={0.1% Data Makes Segment Anything Slim},
author={Zigeng Chen and Gongfan Fang and Xinyin Ma and Xinchao Wang},
year={2023},
eprint={2312.05284},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Acknowledgements
SAM (Segment Anything) [bib]
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
Torch-Pruning (DepGraph: Towards Any Structural Pruning) [bib]
@inproceedings{fang2023depgraph,
title={Depgraph: Towards any structural pruning},
author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16091--16101},
year={2023}
}