License: Apache-2.0
SlimSAM: 0.1% Data Makes Segment Anything Slim
Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang
Learning and Vision Lab, National University of Singapore
Paper: [Arxiv]
Code: [SlimSAM]
Introduction
SlimSAM is a novel SAM compression method that efficiently reuses the pre-trained SAM through a unified pruning-distillation framework, without the need for extensive retraining. To enhance knowledge inheritance from the original SAM, we adopt an innovative alternate slimming strategy that partitions the compression process into progressive steps. Diverging from prior pruning techniques, we carefully prune and distill the decoupled model structures in an alternating fashion. Furthermore, a label-free pruning criterion is proposed to align the pruning objective with the optimization target, thereby improving distillation after pruning.
Compared to the original SAM-H, SlimSAM achieves comparable performance while reducing the parameter count to 0.9% (5.7M) and the computation to 0.8% (21G MACs), and it requires only 0.1% (10k images) of the training data. Extensive experiments show that, compared with other SAM compression methods, our approach delivers significantly better performance while using over 10× less training data.
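The pruning-and-distillation training loop itself is not included on this card. As a rough illustration only, the sketch below shows the general shape of embedding-level distillation between a frozen teacher encoder and a pruned student encoder; `teacher_encoder`, `student_encoder`, and `dataloader` are hypothetical placeholders, and this is not the authors' actual SlimSAM training code.
import torch
import torch.nn.functional as F

def distill_image_encoder(teacher_encoder, student_encoder, dataloader,
                          epochs=1, lr=1e-4, device="cuda"):
    """Minimal embedding-level distillation loop (illustrative placeholder only)."""
    teacher_encoder.eval().to(device)   # frozen, pre-trained SAM image encoder
    student_encoder.train().to(device)  # pruned encoder being recovered
    optimizer = torch.optim.Adam(student_encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for images in dataloader:       # images: (B, 3, H, W) float tensors
            images = images.to(device)
            with torch.no_grad():
                target = teacher_encoder(images)   # teacher image embeddings
            pred = student_encoder(images)         # student image embeddings
            loss = F.mse_loss(pred, target)        # label-free reconstruction loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()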
Model Usage
Quickly load the uniformly pruned SlimSAM-50 model weights:
import torch
import requests
from PIL import Image
from transformers import SamModel, SamProcessor
# Load the pruned SlimSAM-50 checkpoint and its processor from the Hugging Face Hub.
model = SamModel.from_pretrained("Zigeng/SlimSAM-uniform-50").to("cuda")
processor = SamProcessor.from_pretrained("Zigeng/SlimSAM-uniform-50")
# Prompt the model with a single 2D point on a demo image.
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # (x, y) location of the prompt point
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model(**inputs)
# Resize the predicted masks back to the original image resolution.
masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu())
scores = outputs.iou_scores
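As a small, hypothetical follow-up (not part of the original card), the lines below show one way to pick the highest-scoring mask and save it to disk, assuming the variables from the snippet above; the output filename car_mask.png is arbitrary.
import numpy as np
best_idx = scores[0, 0].argmax().item()    # index of the mask with the highest predicted IoU
best_mask = masks[0][0, best_idx].numpy()  # boolean (H, W) mask on the CPU
Image.fromarray((best_mask * 255).astype(np.uint8)).save("car_mask.png")  # arbitrary output path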
Citation
If you use SlimSAM in your research, please cite the following paper. Thank you!
@misc{chen202301,
title={0.1% Data Makes Segment Anything Slim},
author={Zigeng Chen and Gongfan Fang and Xinyin Ma and Xinchao Wang},
year={2023},
eprint={2312.05284},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Acknowledgement
SAM (Segment Anything) [bib]
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
Torch-Pruning (DepGraph: Towards Any Structural Pruning) [bib]
@inproceedings{fang2023depgraph,
title={Depgraph: Towards any structural pruning},
author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16091--16101},
year={2023}
}