DL - ResNeXt: Introduction to the ResNeXt Algorithm (Paper Overview), Architecture Details, and Application Cases (Illustrated Guide)



Related articles
DL - ResNeXt: Introduction to the ResNeXt Algorithm (Paper Overview), Architecture Details, and Application Cases (Illustrated Guide)
DL - ResNeXt: Detailed Explanation of the ResNeXt Architecture

Introduction to the ResNeXt Algorithm (Paper Overview)

The ResNeXt algorithm was proposed by Facebook researchers; at the time, Kaiming He (one of the authors of ResNet) had already joined Facebook, and ResNeXt directly extends the ResNet design.

Abstract
        We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.

Paper
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He.
Aggregated residual transformations for deep neural networks. CVPR 2017.
https://arxiv.org/abs/1611.05431

ResNeXt Architecture Details

DL - ResNeXt: Detailed Explanation of the ResNeXt Architecture
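
As a complement to the linked architecture article, below is a minimal PyTorch-style sketch of a single ResNeXt bottleneck block. It expresses the "aggregated transformations with the same topology" from the abstract as a grouped 3x3 convolution; the class name ResNeXtBottleneck and the parameter names (cardinality, bottleneck_width) are illustrative assumptions, not taken from the official code.

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Sketch of one ResNeXt bottleneck block: the set of parallel branches
    sharing the same topology is implemented as a grouped 3x3 convolution,
    where groups = cardinality."""
    def __init__(self, in_channels, out_channels, cardinality=32,
                 bottleneck_width=4, stride=1):
        super().__init__()
        group_width = cardinality * bottleneck_width
        # 1x1 conv: reduce channels to the grouped width
        self.conv1 = nn.Conv2d(in_channels, group_width, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(group_width)
        # 3x3 grouped conv: "cardinality" parallel transformations, same topology
        self.conv2 = nn.Conv2d(group_width, group_width, kernel_size=3, stride=stride,
                               padding=1, groups=cardinality, bias=False)
        self.bn2 = nn.BatchNorm2d(group_width)
        # 1x1 conv: restore the output channel count
        self.conv3 = nn.Conv2d(group_width, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when the shape changes, identity otherwise
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = out + self.shortcut(x)   # residual connection, as in ResNet
        return self.relu(out)

# Usage example: a conv2-stage block with 256 input/output channels
block = ResNeXtBottleneck(256, 256, cardinality=32, bottleneck_width=4)
y = block(torch.randn(1, 256, 56, 56))
print(y.shape)  # torch.Size([1, 256, 56, 56])
```

With cardinality=32 and bottleneck_width=4 this corresponds to the 32x4d setting discussed in the paper: the overall complexity stays comparable to a ResNet bottleneck block while cardinality is exposed as an extra dimension besides depth and width.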

Application Cases of the ResNeXt Algorithm

To be updated...
