PyTorch Convolution Layers
2019-05-11 · DejavuMoments
What is a convolution kernel for?
The 1x1 convolution kernel was introduced in Network in Network; its main uses are:
1. Compressing or expanding the channel dimension.
2. Acting as a fully connected layer across channels; followed by a ReLU, it adds non-linearity (see the sketch below).
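For example, a minimal sketch of channel compression with a 1x1 kernel (the tensor sizes here are arbitrary illustrative choices):

import torch
import torch.nn as nn

# A 1x1 convolution compressing 256 channels down to 64; the spatial
# size (32x32 here) is untouched, only the channel dimension changes.
x = torch.randn(8, 256, 32, 32)   # (batch, channels, height, width)
conv1x1 = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=1)
y = torch.relu(conv1x1(x))        # the ReLU adds the non-linearity
print(y.shape)                    # torch.Size([8, 64, 32, 32])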
Why apply padding?
1. To counteract the shrinking of the feature map after repeated convolutions.
2. To avoid losing edge information: without padding, border pixels are swept by far fewer kernel positions than interior pixels.
Padding generally comes in two flavors, valid (no padding) and same (pad so the output keeps the input's spatial size), as in the sketch below.
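A quick sketch of the two choices with a 3x3 kernel (the input size is an arbitrary choice):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 28, 28)

# Valid: no padding, the feature map shrinks (28 -> 26 with a 3x3 kernel).
valid = nn.Conv2d(3, 8, kernel_size=3, padding=0)
print(valid(x).shape)   # torch.Size([1, 8, 26, 26])

# Same (stride 1): pad by (kernel_size - 1) // 2 so the spatial size is kept.
same = nn.Conv2d(3, 8, kernel_size=3, padding=1)
print(same(x).shape)    # torch.Size([1, 8, 28, 28])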
[Figures: 1x1 convolutions as used in ResNet and Inception]
Conv1d
torch.nn.Conv1d(
in_channels,
out_channels,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=1,
bias=True,
padding_mode='zeros'
)
Applies a 1D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, L)$ and output $(N, C_{\text{out}}, L_{\text{out}})$ can be precisely described as:

$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid cross-correlation operator, $N$ is a batch size, $C$ denotes a number of channels, and $L$ is a length of the signal sequence.
As a text example, consider an input of shape (32, 50, 256): 32 is the batch_size, 50 is the maximum sentence length, and 256 is the word-embedding dimension. Before feeding this into the 1D convolution, the tensor must be transposed from (32, 50, 256) to (32, 256, 50), because Conv1d slides over the last dimension. The output then has shape (32, out_channels, L_out), where

$$L_{\text{out}} = \left\lfloor \frac{L_{\text{in}} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor$$
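A runnable sketch of this example (out_channels=100 and kernel_size=3 are arbitrary choices, not from the original):

import torch
import torch.nn as nn

# Batch of 32 sentences, max length 50, 256-dimensional word embeddings.
x = torch.randn(32, 50, 256)   # (batch, seq_len, embed_dim)
x = x.permute(0, 2, 1)         # -> (32, 256, 50); Conv1d slides over the last dim

conv = nn.Conv1d(in_channels=256, out_channels=100, kernel_size=3)
out = conv(x)
# L_out = (50 + 2*0 - 1*(3 - 1) - 1) // 1 + 1 = 48
print(out.shape)               # torch.Size([32, 100, 48])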
Of the constructor arguments above, two matter most in practice:
kernel_size: the size of the convolving window
stride: the step size with which the window slides (default 1)
Conv2d
torch.nn.Conv2d(
in_channels,
out_channels,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=1,
bias=True,
padding_mode='zeros'
)
Applies a 2D convolution over an input signal composed of several input planes.
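The output size follows the same formula as for Conv1d, applied to height and width separately. A minimal sketch (the 3x224x224 input and the layer hyper-parameters are assumptions for illustration):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)   # (batch, channels, height, width)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
out = conv(x)
# H_out = (224 + 2*1 - 1*(3 - 1) - 1) // 2 + 1 = 112; W_out likewise
print(out.shape)                  # torch.Size([1, 16, 112, 112])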
AdaptiveMaxPool1d
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
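Unlike the convolution layers, here you specify the output length rather than the window size, and PyTorch derives the pooling windows from the input length. A sketch (output_size=5 and the input shapes are arbitrary choices) showing that variable-length inputs all pool down to the same size, which is handy after a Conv1d over variable-length text:

import torch
import torch.nn as nn

pool = nn.AdaptiveMaxPool1d(output_size=5)   # fixed output length, any input length
a = torch.randn(32, 100, 48)
b = torch.randn(32, 100, 71)
print(pool(a).shape)   # torch.Size([32, 100, 5])
print(pool(b).shape)   # torch.Size([32, 100, 5])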