---
license: apple-amlr
pipeline_tag: depth-estimation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
# Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency detail. The predictions are metric, with absolute scale, without relying on metadata such as camera intrinsics. The model is also fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines real and synthetic datasets to achieve high metric accuracy alongside fine boundary tracing, dedicated evaluation metrics for boundary accuracy in estimated depth maps, and state-of-the-art focal length estimation from a single image.
The Depth Pro model was proposed in **Depth Pro: Sharp Monocular Metric Depth in Less Than a Second** by Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R. Richter, and Vladlen Koltun.
The checkpoint in this repository is a reference implementation that has been re-trained. Its performance is close to the model reported in the paper, with small differences.
## Usage

Follow the setup instructions in the code repository to configure the environment, then you can:

### Run the model from Python
```python
from huggingface_hub import PyTorchModelHubMixin
from depth_pro import create_model_and_transforms, load_rgb
from depth_pro.depth_pro import (create_backbone_model, load_monodepth_weights,
                                 DepthPro, DepthProEncoder, MultiresConvDecoder)
import depth_pro
from torchvision.transforms import Compose, Normalize, ToTensor


class DepthProWrapper(DepthPro, PyTorchModelHubMixin):
    """Depth Pro network wrapper."""

    def __init__(
        self,
        patch_encoder_preset: str,
        image_encoder_preset: str,
        decoder_features: str,
        fov_encoder_preset: str,
        use_fov_head: bool = True,
        **kwargs,
    ):
        """Initialize Depth Pro."""

        patch_encoder, patch_encoder_config = create_backbone_model(
            preset=patch_encoder_preset
        )
        image_encoder, _ = create_backbone_model(
            preset=image_encoder_preset
        )

        fov_encoder = None
        if use_fov_head and fov_encoder_preset is not None:
            fov_encoder, _ = create_backbone_model(preset=fov_encoder_preset)

        dims_encoder = patch_encoder_config.encoder_feature_dims
        hook_block_ids = patch_encoder_config.encoder_feature_layer_ids
        encoder = DepthProEncoder(
            dims_encoder=dims_encoder,
            patch_encoder=patch_encoder,
            image_encoder=image_encoder,
            hook_block_ids=hook_block_ids,
            decoder_features=decoder_features,
        )
        decoder = MultiresConvDecoder(
            dims_encoder=[encoder.dims_encoder[0]] + list(encoder.dims_encoder),
            dim_decoder=decoder_features,
        )
        super().__init__(
            encoder=encoder,
            decoder=decoder,
            last_dims=(32, 1),
            use_fov_head=use_fov_head,
            fov_encoder=fov_encoder,
        )


# Load the model and preprocessing transform.
model = DepthProWrapper.from_pretrained("apple/DepthPro-mixin")
transform = Compose(
    [
        ToTensor(),
        Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
    ]
)
model.eval()

# Load and preprocess an image.
image, _, f_px = depth_pro.load_rgb(image_path)
image = transform(image)

# Run inference.
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"]  # Depth in meters.
focallength_px = prediction["focallength_px"]  # Focal length in pixels.
```
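Because the predicted depth is metric and `focallength_px` is a focal length in pixels, the depth map can be back-projected into a camera-space point cloud. The sketch below is an illustration, not part of the Depth Pro API; the helper name and the centered-principal-point pinhole model are assumptions:

```python
import numpy as np

def depth_to_pointcloud(depth: np.ndarray, f_px: float) -> np.ndarray:
    """Back-project a metric depth map (H, W), in meters, to camera-space
    XYZ points via a pinhole model with the principal point assumed at the
    image center."""
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f_px
    y = (v - cy) * depth / f_px
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3), in meters
```

After the inference step above, this could be applied as e.g. `points = depth_to_pointcloud(depth.squeeze().cpu().numpy(), float(focallength_px))`.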
### Evaluating boundary metrics

The boundary metrics are implemented in `eval/boundary_metrics.py` and can be used as follows:

```python
boundary_f1 = SI_boundary_F1(predicted_depth, target_depth)
boundary_recall = SI_boundary_Recall(predicted_depth, target_mask)
```
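These metrics are scale-invariant: occluding boundaries are detected from ratios between neighboring depth values, so rescaling a depth map by a constant leaves them unchanged. As a rough illustration of that idea only (a hypothetical sketch, not the implementation in `eval/boundary_metrics.py`; the ratio threshold `t` is an assumed value):

```python
import numpy as np

def occluding_boundaries(depth: np.ndarray, t: float = 1.15) -> np.ndarray:
    """Mark pixels where depth jumps by more than a ratio t relative to a
    right or bottom neighbor. Scale-invariant: occluding_boundaries(k * d)
    equals occluding_boundaries(d) for any constant k > 0."""
    d = np.asarray(depth, dtype=np.float64)
    right = np.zeros(d.shape, dtype=bool)
    down = np.zeros(d.shape, dtype=bool)
    # Ratio (>= 1) between horizontally adjacent pixels.
    rh = np.maximum(d[:, 1:], d[:, :-1]) / np.minimum(d[:, 1:], d[:, :-1])
    right[:, :-1] = rh > t
    # Ratio (>= 1) between vertically adjacent pixels.
    rv = np.maximum(d[1:, :], d[:-1, :]) / np.minimum(d[1:, :], d[:-1, :])
    down[:-1, :] = rv > t
    return right | down
```

Using ratios rather than absolute differences means a 1 m step at close range counts the same as a proportionally larger step far away, which is what makes the metric usable across scenes of very different scale.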
## Citation

If you find our work useful, please cite the following paper:

```bibtex
@article{Bochkovskii2024:arxiv,
  author  = {Aleksei Bochkovskii and Ama\"{e}l Delaunoy and Hugo Germain and Marcel Santos and
             Yichao Zhou and Stephan R. Richter and Vladlen Koltun},
  title   = {Depth Pro: Sharp Monocular Metric Depth in Less Than a Second},
  journal = {arXiv},
  year    = {2024},
}
```
## Acknowledgements

Our codebase is built on multiple open-source contributions; see the Acknowledgements list for details.

Please refer to the paper for a comprehensive list of references and datasets.