
READING NOTE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

2016-12-07 20:17
TITLE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

AUTHOR: Zifeng Wu, Chunhua Shen, Anton van den Hengel

ASSOCIATION: The University of Adelaide

FROM: arXiv:1611.10080

CONTRIBUTIONS

A further developed, intuitive view of ResNets is introduced, which helps to understand their behaviour and suggests possible directions for further improvement.

A group of relatively shallow convolutional networks is proposed based on this new understanding. Some of them achieve state-of-the-art results on the ImageNet classification dataset.

The impact of using different networks on the performance of semantic image segmentation is evaluated; used as pre-trained feature extractors, these networks can significantly boost existing algorithms.

SUMMARY



For residual unit $i$, let $y_{i-1}$ be the input and let $f_i(\cdot)$ be its trainable non-linear mapping, also named Block $i$. The output of unit $i$ is recursively defined as

$$y_i = f_i(y_{i-1},\ \omega_i) + y_{i-1}$$

where $\omega_i$ denotes the trainable parameters, and $f_i(\cdot)$ is often two or three stacked convolution stages in a ResNet building block. The top-left network can then be formulated as

$$\begin{aligned}
y_2 &= y_1 + f_2(y_1,\ \omega_2) \\
    &= y_0 + f_1(y_0,\ \omega_1) + f_2\bigl(y_0 + f_1(y_0,\ \omega_1),\ \omega_2\bigr)
\end{aligned}$$
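
A minimal sketch of this two-unit composition, assuming a PyTorch-style residual unit in which each $f_i$ is two stacked convolution stages (the exact layer layout here is an illustrative assumption, not the paper's block design):

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One residual unit: y_i = f_i(y_{i-1}, w_i) + y_{i-1}."""
    def __init__(self, channels):
        super().__init__()
        # f_i: two stacked convolution stages (pre-activation style assumed here)
        self.f = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, y_prev):
        return self.f(y_prev) + y_prev   # shortcut connection

# Stacking two units reproduces the expansion above:
#   y2 = y0 + f1(y0) + f2(y0 + f1(y0))
unit1, unit2 = ResidualUnit(64), ResidualUnit(64)
y0 = torch.randn(1, 64, 56, 56)
y2 = unit2(unit1(y0))
```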

Differentiating this expansion, the backward gradients in one SGD iteration are:

$$\Delta\omega_2 = \frac{df_2}{d\omega_2}\cdot\Delta y_2$$

$$\Delta y_1 = \Delta y_2 + f_2'\cdot\Delta y_2$$

$$\Delta\omega_1 = \frac{df_1}{d\omega_1}\cdot\Delta y_2 + \frac{df_1}{d\omega_1}\cdot f_2'\cdot\Delta y_2$$

Ideally, when the effective depth $l \ge 2$, both terms of $\Delta\omega_1$ are non-zero, as illustrated by the bottom-left case. However, when the effective depth $l = 1$, the second term goes to zero, as illustrated by the bottom-right case. When this happens, we say the ResNet is over-deepened, and it cannot be trained in a fully end-to-end manner, even with the shortcut connections.
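
The two gradient paths into unit 1 can be checked numerically with autograd. In the sketch below each block is reduced to a single convolution as a stand-in for $f_i$, and detaching the input of $f_2$ is my own way of emulating an effective depth of 1; neither choice comes from the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
f1 = nn.Conv2d(8, 8, kernel_size=3, padding=1)   # stand-in for block f_1
f2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)   # stand-in for block f_2
y0 = torch.randn(1, 8, 4, 4)

def grad_w1(cut_path_through_f2):
    f1.zero_grad()
    y1 = f1(y0) + y0                                   # unit 1
    f2_input = y1.detach() if cut_path_through_f2 else y1
    y2 = f2(f2_input) + y1                             # unit 2
    y2.sum().backward()                                # makes dL/dy2 all ones
    return f1.weight.grad.clone()

full = grad_w1(False)          # dw1 = df1/dw1 * dy2 + df1/dw1 * f2' * dy2
shortcut_only = grad_w1(True)  # only the first (shortcut) term survives
# Their difference is exactly the second term, which vanishes
# when the network is over-deepened.
print((full - shortcut_only).abs().sum())
```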

To summarize, shortcut connections enable us to train wider and deeper networks. Once they grow to some point, we face a dilemma between width and depth. From that point on, going deeper actually gives a wider network whose extra features are not completely end-to-end trained; going wider literally gives a wider network without changing its end-to-end characteristic.
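
As a rough illustration of that trade-off (my own sketch reusing the ResidualUnit above; the widths and unit counts are arbitrary, not the paper's configurations), going deeper stacks more units at the same width, while going wider keeps the unit count and increases the channel count:

```python
import torch.nn as nn

def stage(channels, num_units):
    # a ResNet stage built from the ResidualUnit sketched earlier
    return nn.Sequential(*[ResidualUnit(channels) for _ in range(num_units)])

deeper_stage = stage(channels=64,  num_units=8)   # go deeper: more units
wider_stage  = stage(channels=128, num_units=4)   # go wider: more channels per unit
```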

The authors designed three kinds of network structures, as illustrated in the following figure,



and the classification performance on the ImageNet validation set is shown below.
