SENet Paper Notes

2019-02-15  2018燮2021

References:

Interpreting Squeeze-and-Excitation Networks (SENet)
SENet Study Notes
Squeeze-and-Excitation Networks paper translation (Chinese-English parallel text)

Abstract—The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at minimal additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ∼25%.

Keywords: spatial, channel-wise, receptive fields

Keywords: the spatial component (spatial encodings), representation, feature hierarchy

Keywords: channel relationship, adaptively recalibrates, interdependencies between channels

INTRODUCTION

1. At each convolutional layer in the network, a collection of filters expresses neighbourhood spatial connectivity patterns along input channels—fusing spatial and channel-wise information together within local receptive fields. By interleaving a series of convolutional layers with non-linear activation functions and downsampling operators, CNNs are able to produce robust representations that capture hierarchical patterns and attain global theoretical receptive fields.

2. Recent research has shown that these representations can be strengthened by integrating learning mechanisms into the network that help capture spatial correlations between features. One such approach, popularised by the Inception family of architectures [5], [6], incorporates multi-scale processes into network modules to achieve improved performance.

3. We propose a mechanism that allows the network to perform feature recalibration, through which it can learn to use global information to selectively emphasise informative features and suppress less useful ones.

The structure of the SE building block

Keywords: feature recalibration, squeeze operation (channel descriptor), aggregation, excitation operation, a simple self-gating mechanism

1. The features U are first passed through a squeeze operation, which produces a channel descriptor by aggregating feature maps across their spatial dimensions (H × W). The function of this descriptor is to produce an embedding of the global distribution of channel-wise feature responses, allowing information from the global receptive field of the network to be used by all its layers.
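The squeeze operation is simply global average pooling over the spatial dimensions. A minimal NumPy sketch (the function name `squeeze` and the (C, H, W) layout are illustrative choices, not from the paper):

```python
import numpy as np

def squeeze(U):
    """Squeeze: aggregate each H x W feature map to a single scalar.

    U: feature tensor of shape (C, H, W).
    Returns the channel descriptor z of shape (C,), one value per channel.
    """
    return U.mean(axis=(1, 2))

U = np.arange(2 * 2 * 2, dtype=float).reshape(2, 2, 2)  # C=2, H=W=2
z = squeeze(U)
print(z.shape)  # (2,)
```

Each entry of `z` summarises the global response of one channel, which is what allows later layers to exploit information from outside their local receptive fields.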

2. The aggregation is followed by an excitation operation, which takes the form of a simple self-gating mechanism that takes the embedding as input and produces a collection of per-channel modulation weights. These weights are applied to the feature maps U to generate the output of the SE block which can be fed directly into subsequent layers of the network.
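Putting the two operations together, a full SE forward pass can be sketched in NumPy. The FC–ReLU–FC–sigmoid gate follows the form described in the paper, but the weight names `W1`/`W2`, the reduction ratio `r`, and the random weights are illustrative assumptions (in a real network they are learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(U, W1, W2):
    """Squeeze-and-Excitation forward pass for a feature tensor U of shape (C, H, W).

    W1: (C/r, C) dimensionality-reduction weights; W2: (C, C/r) expansion weights.
    """
    z = U.mean(axis=(1, 2))                     # squeeze: channel descriptor, (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))   # excitation: FC-ReLU-FC-sigmoid gate, (C,)
    return U * s[:, None, None]                 # rescale each channel of U by its gate

C, H, W, r = 8, 4, 4, 2
U = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((C // r, C))
W2 = rng.standard_normal((C, C // r))
out = se_block(U, W1, W2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels relative to the input, which is what "recalibration" means here: informative channels keep gates near 1, less useful ones are suppressed toward 0.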

Keywords: class-agnostic manner, class-specific manner

In earlier layers, it learns to excite informative features in a class-agnostic manner, strengthening the quality of the shared lower-level representations. In later layers, the SE blocks become increasingly specialised, responding to different inputs in a highly class-specific manner. Consequently, the benefits of the feature recalibration performed by SE blocks can accumulate through the network.

Deeper architectures

Keywords: learning, representation

An alternative, but closely related line of research has focused on methods to improve the functional form of the computational elements contained within a network.

In prior work, cross-channel correlations are typically mapped as new combinations of features, either independently of spatial structure [22], [23] or jointly by using standard convolutional filters [24] with 1 × 1 convolutions.
Multi-branch convolutions can be interpreted as a generalisation of this concept, enabling more flexible compositions of convolutional operators [14], [38], [39], [40].
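As a concrete instance of mapping cross-channel correlations, a 1 × 1 convolution is just a linear recombination of channels applied independently at every spatial position. A NumPy sketch (the (C, H, W) layout and function name are assumptions for illustration):

```python
import numpy as np

def conv1x1(U, Wk):
    """1x1 convolution: mix channels independently at each spatial position.

    U: input of shape (C_in, H, W); Wk: weights of shape (C_out, C_in).
    Equivalent to applying the same linear map Wk at every pixel.
    """
    C_in, H, W = U.shape
    return (Wk @ U.reshape(C_in, H * W)).reshape(Wk.shape[0], H, W)

U = np.arange(12, dtype=float).reshape(3, 2, 2)  # C_in=3, H=W=2
Wk = np.array([[1.0, 1.0, 1.0]])                 # C_out=1: sums the channels
print(conv1x1(U, Wk).shape)  # (1, 2, 2)
```

With all-ones weights the output is simply the channel-wise sum, which makes the "new combinations of features, independent of spatial structure" interpretation explicit.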
