DL Notes on DeepLabv3: A Detailed Illustrated Guide to the DeepLab v3 and DeepLab v3+ Algorithms, Covering the Paper Introductions, Architecture Details, and Example Applications
Related Articles
DL Notes on DeepLabv1: A Detailed Illustrated Guide to the DeepLabv1 Algorithm, Covering the Paper Introduction, Architecture Details, and Example Applications
DL Notes on DeepLabv1: Architecture Details of the DeepLabv1 Algorithm
DL Notes on DeepLabv2: A Detailed Illustrated Guide to the DeepLab v2 Algorithm, Covering the Paper Introduction, Architecture Details, and Example Applications
DL Notes on DeepLabv2: Architecture Details of the DeepLab v2 Algorithm
DL Notes on DeepLabv3: A Detailed Illustrated Guide to the DeepLab v3 and DeepLab v3+ Algorithms, Covering the Paper Introductions, Architecture Details, and Example Applications
DL Notes on DeepLabv3: Architecture Details of the DeepLab v3 and DeepLab v3+ Algorithms
Introduction to the DeepLab v3 and DeepLab v3+ Algorithms (Papers)
DeepLab v3
Abstract
In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed “DeepLabv3” system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
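To make the ASPP design in the abstract concrete, the sketch below shows an ASPP head with parallel atrous convolutions at several rates plus an image-level branch, written in TensorFlow/Keras. It is a minimal illustration, not the authors' reference implementation: the rates (6, 12, 18) and the 256 output channels are the values commonly quoted for DeepLabv3 at output stride 16, and batch normalization and activations are omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

def aspp(features, rates=(6, 12, 18), channels=256):
    """Minimal Atrous Spatial Pyramid Pooling (ASPP) sketch.

    features: [batch, H, W, C] feature map from the backbone.
    rates:    atrous (dilation) rates of the parallel 3x3 branches.
    """
    h, w = features.shape[1], features.shape[2]

    # 1x1 convolution branch.
    branches = [layers.Conv2D(channels, 1, padding="same", use_bias=False)(features)]

    # Parallel 3x3 atrous convolution branches, one per rate.
    for r in rates:
        branches.append(
            layers.Conv2D(channels, 3, padding="same",
                          dilation_rate=r, use_bias=False)(features))

    # Image-level branch: global average pooling, 1x1 conv, upsample back.
    pooled = tf.reduce_mean(features, axis=[1, 2], keepdims=True)
    pooled = layers.Conv2D(channels, 1, use_bias=False)(pooled)
    branches.append(tf.image.resize(pooled, (h, w)))

    # Fuse all branches with a final 1x1 convolution.
    fused = layers.Concatenate()(branches)
    return layers.Conv2D(channels, 1, padding="same", use_bias=False)(fused)

# Example: backbone features at output stride 16 for a 513x513 input.
features = tf.random.normal([1, 33, 33, 2048])
print(aspp(features).shape)  # (1, 33, 33, 256)
```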
Conclusion
Our proposed model “DeepLabv3” employs atrous convolution with upsampled filters to extract dense feature maps and to capture long range context. Specifically, to encode multi-scale information, our proposed cascaded module gradually doubles the atrous rates while our proposed atrous spatial pyramid pooling module augmented with image-level features probes the features with filters at multiple sampling rates and effective field-of-views. Our experimental results show that the proposed model significantly improves over previous DeepLab versions and achieves comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
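The cascaded variant described above runs atrous blocks in series and doubles the rate at each stage, enlarging the field-of-view without shrinking the feature map. Below is a rough sketch of that idea; the plain two-convolution residual block stands in for the ResNet bottleneck blocks actually used, and the starting rate of 2 with four stages is an assumption made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cascaded_atrous_blocks(features, channels=512, base_rate=2, num_blocks=4):
    """Cascade of atrous blocks whose dilation rate doubles at each stage."""
    x = features
    rate = base_rate
    for _ in range(num_blocks):
        shortcut = x
        # Stand-in for a ResNet block: two 3x3 atrous convolutions.
        x = layers.Conv2D(channels, 3, padding="same",
                          dilation_rate=rate, activation="relu")(x)
        x = layers.Conv2D(channels, 3, padding="same", dilation_rate=rate)(x)
        if shortcut.shape[-1] == channels:
            x = layers.Add()([x, shortcut])  # simple residual connection
        x = layers.ReLU()(x)
        rate *= 2  # double the atrous rate for the next stage
    return x

features = tf.random.normal([1, 33, 33, 512])
print(cascaded_atrous_blocks(features).shape)  # (1, 33, 33, 512) -- resolution preserved
```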
Paper
Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam.
Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv preprint, 2017
https://arxiv.org/abs/1706.05587
DeepLab v3+
Abstract
Spatial pyramid pooling modules or encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 semantic image segmentation dataset and achieve a performance of 89% on the test set without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow.
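The decoder module that DeepLabv3+ adds can be sketched as follows: upsample the encoder (ASPP) output, concatenate it with a channel-reduced low-level feature map from the backbone, refine with 3x3 convolutions, then upsample toward the input resolution. The code below is a minimal illustration rather than the reference implementation; the 48-channel low-level projection and the two 256-channel refinement convolutions follow the configuration described in the paper, while the fixed x4 final upsampling assumes encoder output stride 16 and low-level features at output stride 4.

```python
import tensorflow as tf
from tensorflow.keras import layers

def deeplabv3plus_decoder(encoder_out, low_level_feat, num_classes):
    """Minimal DeepLabv3+-style decoder sketch.

    encoder_out:    ASPP output at output stride 16 (small spatial size).
    low_level_feat: early backbone features at output stride 4.
    """
    # Reduce the low-level feature channels so they do not dominate.
    low = layers.Conv2D(48, 1, padding="same", use_bias=False)(low_level_feat)

    # Upsample the encoder output to the low-level feature resolution.
    size = (low.shape[1], low.shape[2])
    x = tf.image.resize(encoder_out, size)

    # Concatenate and refine object boundaries with 3x3 convolutions.
    x = layers.Concatenate()([x, low])
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)

    # Per-pixel class logits, then upsample (x4) toward the input resolution.
    logits = layers.Conv2D(num_classes, 1, padding="same")(x)
    return tf.image.resize(logits, (size[0] * 4, size[1] * 4))

encoder_out = tf.random.normal([1, 33, 33, 256])        # e.g. ASPP output
low_level_feat = tf.random.normal([1, 129, 129, 256])   # e.g. early ResNet block
print(deeplabv3plus_decoder(encoder_out, low_level_feat, 21).shape)  # (1, 516, 516, 21)
```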
Conclusion
Our proposed model “DeepLabv3+” employs the encoder-decoder structure where DeepLabv3 is used to encode the rich contextual information and a simple yet effective decoder module is adopted to recover the object boundaries. One could also apply the atrous convolution to extract the encoder features at an arbitrary resolution, depending on the available computation resources. We also explore the Xception model and atrous separable convolution to make the proposed model faster and stronger. Finally, our experimental results show that the proposed model sets a new state-of-the-art performance on the PASCAL VOC 2012 semantic image segmentation benchmark.
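The atrous separable convolution mentioned here factors a dilated 3x3 convolution into a per-channel depthwise convolution (which carries the dilation) followed by a 1x1 pointwise convolution, reducing computation while keeping the enlarged field-of-view. A minimal sketch follows; Keras also provides SeparableConv2D as a fused alternative, and the BatchNorm/ReLU placement shown is just one common choice, not necessarily the one used in the reference implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def atrous_separable_conv(x, out_channels, rate):
    """Atrous separable convolution: dilated depthwise 3x3 + pointwise 1x1."""
    # Depthwise 3x3 convolution, dilated by `rate`, applied per channel.
    x = layers.DepthwiseConv2D(3, padding="same",
                               dilation_rate=rate, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise 1x1 convolution that mixes channels.
    x = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

x = tf.random.normal([1, 33, 33, 256])
print(atrous_separable_conv(x, 256, rate=12).shape)  # (1, 33, 33, 256)
```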
Paper
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam.
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv preprint, Feb. 2018
https://arxiv.org/abs/1802.02611v1
0. Experimental Results
1. Performance on the PASCAL VOC 2012 test set
(Result tables from the papers: DeepLab v3 | DeepLab v3+)
2. DeepLabv3+ on PASCAL VOC 2012
Visualization results on the PASCAL VOC 2012 val set
Architecture Details of the DeepLab v3 Algorithm
To be updated…
DL Notes on DeepLabv3: Architecture Details of the DeepLab v3 and DeepLab v3+ Algorithms
Example Applications of the DeepLab v3 Algorithm
To be updated…