DL Series, DeepLabv1: An illustrated guide to the DeepLabv1 algorithm, covering an introduction (paper overview), architecture details, and application examples



Related articles
DL Series, DeepLabv1: An illustrated guide to the DeepLabv1 algorithm, covering an introduction (paper overview), architecture details, and application examples
DL Series, DeepLabv1: DeepLabv1 architecture details
DL Series, DeepLabv2: An illustrated guide to the DeepLab v2 algorithm, covering an introduction (paper overview), architecture details, and application examples
DL Series, DeepLabv2: DeepLab v2 architecture details
DL Series, DeepLabv3: An illustrated guide to the DeepLab v3 and DeepLab v3+ algorithms, covering an introduction (paper overview), architecture details, and application examples
DL Series, DeepLabv3: DeepLab v3 and DeepLab v3+ architecture details

Introduction to the DeepLabv1 Algorithm (Paper Overview)

The authors recognized the limitations of the FCN model and proposed DeepLabv1 as an improvement upon it.

ABSTRACT  
Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
DISCUSSION  
Our work combines ideas from deep convolutional neural networks and fully-connected conditional random fields, yielding a novel method able to produce semantically accurate predictions and detailed segmentation maps, while being computationally efficient. Our experimental results show that the proposed method significantly advances the state-of-art in the challenging PASCAL VOC 2012 semantic image segmentation task. There are multiple aspects in our model that we intend to refine, such as fully integrating its two main components (CNN and CRF) and train the whole system in an end-to-end fashion, similar to Krähenbühl & Koltun (2013); Chen et al. (2014); Zheng et al. (2015). We also plan to experiment with more datasets and apply our method to other sources of data such as depth maps or videos. Recently, we have pursued model training with weakly supervised annotations, in the form of bounding boxes or image-level labels (Papandreou et al., 2015). At a higher level, our work lies in the intersection of convolutional neural networks and probabilistic graphical models. We plan to further investigate the interplay of these two powerful classes of methods and explore their synergistic potential for solving challenging computer vision tasks.

Paper
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille.
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, ICLR, 2015.
https://arxiv.org/abs/1412.7062

0. Experimental Results

1. Inference runs at 8 FPS on a Titan GPU; mean-field inference for the fully connected CRF takes about 0.5 s per image on average.
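The CRF step referred to here is a post-processing pass over the DCNN's per-pixel class scores. Below is a minimal sketch of such dense-CRF refinement, assuming the third-party `pydensecrf` Python package (a wrapper around Krähenbühl's dense CRF code); the function name `densecrf_refine` and all kernel parameters are illustrative choices, not the paper's tuned settings.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def densecrf_refine(image, probs, iters=10):
    """Refine coarse DCNN scores with a fully connected CRF.

    image: H x W x 3 uint8 RGB image
    probs: C x H x W float32 softmax probabilities from the DCNN
    """
    C, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, C)

    # Unary term: negative log-probabilities of the DCNN output
    d.setUnaryEnergy(np.ascontiguousarray(unary_from_softmax(probs)))

    # Pairwise terms: a smoothness kernel plus an appearance (bilateral) kernel
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(image), compat=10)

    # Approximate mean-field inference
    Q = d.inference(iters)
    return np.argmax(np.array(Q), axis=0).reshape(H, W)  # H x W label map
```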

2. Comparison with state-of-the-art models on the val set

Comparisons with state-of-the-art models on the valset

First row: images.
Second row: ground truths.
Third row: other recent models (Left: FCN-8s, Right: TTI-Zoomout-16).
Fourth row: our DeepLab-CRF.

3. Visualization results on VOC 2012 val

Visualization results on VOC 2012-val

For each row, we show the input image, the segmentation result delivered by the DCNN (DeepLab), and the refined segmentation result of the Fully Connected CRF (DeepLab-CRF).

Failure modes

1. Limitations of FCN and the Improvements

1. Analysis of FCN's limitations

  • Pooling layers enlarge each neuron's receptive field and improve classification accuracy, but they reduce the resolution of the feature maps (see the sketch after this list).
  • Upsampling by too large a factor leaves FCN's segmentation boundaries blurry.
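To make the first point concrete, here is a small PyTorch sketch (sizes are illustrative) showing how the five stride-2 pooling stages of a VGG-16-style backbone shrink a 224x224 input to a 7x7 feature map (output stride 32), which is the coarse signal FCN must upsample back to full resolution.

```python
import torch
import torch.nn as nn

# Five stride-2 max-pooling stages, as in a VGG-16-style backbone.
# Convolutions are omitted: with padding they preserve spatial size,
# so only the pooling stages change the resolution.
pools = nn.Sequential(*[nn.MaxPool2d(kernel_size=2, stride=2) for _ in range(5)])

x = torch.zeros(1, 3, 224, 224)  # dummy input image
print(pools(x).shape)            # torch.Size([1, 3, 7, 7]) -> output stride 32
```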

2. Improvements over FCN

  • Still built on VGG-16.
  • Remove some of the pooling layers (the receptive field becomes smaller).
  • Fine-tune the new network starting from pretrained VGG-16 weights.
  • Replace standard convolutions with atrous (dilated) convolutions, which enlarge the receptive field while keeping the feature maps at a higher resolution (see the sketch after this list).
  • Use a fully connected conditional random field to sharpen the segmentation boundaries.
  • Use multi-scale features.
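A minimal PyTorch sketch of the atrous-convolution idea (the stage layout, channel counts, and sizes are illustrative, not the exact DeepLabv1 configuration): removing the stride of a downsampling stage and dilating the following convolution keeps a comparable effective receptive field while producing a denser feature map.

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 512, 28, 28)  # a mid-level feature map (illustrative size)

# FCN-style stage: stride-2 pooling followed by a standard 3x3 convolution.
fcn_stage = nn.Sequential(
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(512, 512, kernel_size=3, padding=1),
)

# DeepLab-style stage: keep resolution (stride-1 pooling) and dilate the
# convolution instead, so the receptive field is preserved but the output
# stays twice as dense in each spatial dimension.
atrous_stage = nn.Sequential(
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    nn.Conv2d(512, 512, kernel_size=3, padding=2, dilation=2),
)

print(fcn_stage(x).shape)     # torch.Size([1, 512, 14, 14])
print(atrous_stage(x).shape)  # torch.Size([1, 512, 28, 28])
```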

DeepLabv1 Architecture Details

To be updated…

DeepLabv1 Application Examples

To be updated…
