Inception: v1, v2, v3, v4

Source: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

The Inception deep convolutional architecture was introduced in [14] and was called GoogLeNet or Inception-v1 in our exposition. The Inception architecture was later refined in various ways, first by the introduction of batch normalization [6] by Ioffe et al. (Inception-v2), and later by additional factorization ideas in the third iteration [15], which will be referred to as Inception-v3 in this report.
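The factorization ideas behind Inception-v3 replace large spatial convolutions with stacks of smaller ones that cover the same receptive field with fewer parameters. Below is a minimal TensorFlow/Keras sketch of the two factorizations; it illustrates the idea only, not the paper's exact blocks, and the function name and filter counts are invented for the example:

```python
import tensorflow as tf
from tensorflow.keras import layers

def factorized_conv(x, filters):
    # Two stacked 3x3 convs see a 5x5 receptive field with
    # 2*9 = 18 weights per filter pair instead of 25 (~28% fewer).
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # Asymmetric factorization: a 1x3 conv followed by a 3x1 conv
    # stands in for a single 3x3 conv (6 weights instead of 9).
    x = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(x)
    return x

# Example: apply the block to a 35x35x192 feature map.
inputs = tf.keras.Input(shape=(35, 35, 192))
model = tf.keras.Model(inputs, factorized_conv(inputs, 96))
```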


Going deeper with convolutions: https://arxiv.org/pdf/1409.4842.pdf
Batch Normalization: https://arxiv.org/pdf/1502.03167.pdf
Rethinking the Inception Architecture for Computer Vision: https://arxiv.org/pdf/1512.00567.pdf

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning: https://arxiv.org/pdf/1602.07261.pdf


Code: TensorFlow GoogLeNet Inception V1 V2 V3 V4

https://github.com/PanJinquan/tensorflow_models_learning

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

Abstract


Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
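The "activation scaling" mentioned in the abstract refers to scaling the residual branch down by a small constant before it is added to the shortcut, which the paper reports stabilizes training when the number of filters is very large. Here is a minimal sketch of that idea, with a simplified convolutional branch standing in for a full Inception-ResNet block; the branch structure and names are assumptions for illustration, while the scale range of roughly 0.1 to 0.3 comes from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

def scaled_residual_block(x, filters, scale=0.1):
    # Simplified residual branch; the real Inception-ResNet blocks
    # use multi-branch Inception-style convolutions here.
    branch = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    branch = layers.Conv2D(filters, 3, padding="same", activation="relu")(branch)
    # Linear 1x1 conv to match the shortcut's channel count.
    branch = layers.Conv2D(x.shape[-1], 1, padding="same")(branch)
    # Scale the residual down before the summation; the paper reports
    # this keeps very wide variants from becoming unstable in training.
    branch = layers.Lambda(lambda t: t * scale)(branch)
    out = layers.Add()([x, branch])
    return layers.Activation("relu")(out)

# Example: apply the block to a 35x35x256 feature map.
inputs = tf.keras.Input(shape=(35, 35, 256))
model = tf.keras.Model(inputs, scaled_residual_block(inputs, 32))
```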


Translation: In recent years, very deep convolutional neural networks have been central to the largest advances in image recognition. One example is the Inception architecture, which combines very good performance with relatively low computational cost.


Here we give clear empirical evidence that residual connections significantly accelerate the training of Inception networks.

