2021-05-20 BERT Learning Notes

2021-05-21 · Cipolee

A Kaggle bronze-medal solution, written in TensorFlow.
In that code the tokenizer is pretrained, but the BERT model itself does not use pretrained weights.

Dropout does not necessarily improve the score. It is a way of preventing overfitting by randomly deactivating part of the neurons, trading some of the model's fitting capacity for better generalization.
A Zhihu column on dropout
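A minimal sketch of that behavior in PyTorch (p=0.5 here is only for illustration): at train time surviving activations are scaled by 1/(1-p), and at eval time dropout is the identity.

    import torch
    import torch.nn as nn

    drop = nn.Dropout(p=0.5)
    x = torch.ones(8)

    drop.train()
    print(drop(x))  # roughly half the entries zeroed, survivors scaled to 2.0

    drop.eval()
    print(drop(x))  # identity: all ones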

    import torch.nn as nn
    from transformers import BertModel

    # wrapper class added here to make the original __init__ fragment self-contained
    class Model(nn.Module):
        def __init__(self, bert_config, num_classes):
            super().__init__()
            self.transformer = BertModel(bert_config)

            # hidden width of BERT's pooler (768 for bert-base)
            self.nb_features = self.transformer.pooler.dense.out_features

            # re-implemented pooler: Linear + Tanh, mirroring BERT's own pooler
            self.pooler = nn.Sequential(
                nn.Linear(self.nb_features, self.nb_features),
                nn.Tanh(),
            )

            self.logit = nn.Linear(self.nb_features, num_classes)

What I chose is tokenizer + BERT + dropout + Linear.
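A minimal sketch of that setup, assuming bert-base-uncased and a dropout rate of 0.1 (both my own choices for illustration):

    import torch.nn as nn
    from transformers import BertModel

    class BertDropoutLinear(nn.Module):
        def __init__(self, num_classes, dropout=0.1):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            self.dropout = nn.Dropout(dropout)
            self.fc = nn.Linear(self.bert.config.hidden_size, num_classes)

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS] hidden state
            return self.fc(self.dropout(cls))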

On nn.init.normal_, which can speed up convergence:

Fills the input Tensor with values drawn from the normal distribution N(mean,std^2)
Code reference 1 using std=0.02
Code reference 2 using std=0.02

Usage: torch.nn.init.uniform_(tensor, a=0.0, b=1.0)
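A sketch of the std=0.02 normal initialization applied only to the new classifier head (init_weights is my own helper name; the std value follows the BERT convention cited above):

    import torch.nn as nn

    def init_weights(module, std=0.02):
        # BERT-style init: weights ~ N(0, 0.02^2), biases zeroed
        if isinstance(module, nn.Linear):
            nn.init.normal_(module.weight, mean=0.0, std=std)
            if module.bias is not None:
                nn.init.zeros_(module.bias)

    model = BertDropoutLinear(num_classes=6)
    model.fc.apply(init_weights)  # only the head; BERT's pretrained weights are kept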
Using the hidden-state features output by the BERT encoder

Reason: Take the first hidden-state from the BERT output (corresponding to the CLS token) and feed it into a Dense layer with 6 neurons and sigmoid activation (the classifier). The outputs of this layer can be interpreted as probabilities for each of the 6 classes.

Why BERT uses [CLS]: given how self-attention is computed, the [CLS] representation attends over every position, so it should reflect features of the whole sentence rather than be tied too strongly to any single position.
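A short sketch of pulling out the [CLS] hidden state, versus BERT's own pooled output (the checkpoint name and example sentence are placeholders):

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("an example sentence", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    cls_hidden = out.last_hidden_state[:, 0]  # raw [CLS] hidden state
    pooled = out.pooler_output                # tanh(Linear(cls_hidden)), BERT's pooler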

In a Python function signature, parameters without default values must come before parameters with default values.
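For example:

    # valid: non-default parameters first
    def f(a, b, c=0):
        return a + b + c

    # SyntaxError: non-default argument follows default argument
    # def g(a, b=0, c):
    #     return a + b + c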

Some errors encountered

ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.

This seems also related: spacy-transformers

This was a data issue. I removed all non-alphanumeric data from my examples and managed to train.
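As the error message itself suggests, passing padding and truncation flags to the tokenizer also produces same-length batched tensors. A sketch (texts stands in for a list of raw strings):

    batch = tokenizer(
        texts,
        padding=True,        # pad to the longest sequence in the batch
        truncation=True,     # cut sequences down to the model's max length
        return_tensors="pt",
    )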

Error: Input, output and indices must be on the current device
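This usually means the model and the input tensors are on different devices; moving both to the same device fixes it. A sketch (model and batch are the placeholders from above):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    batch = {k: v.to(device) for k, v in batch.items()}
    out = model(**batch)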

Inputs to cross-entropy
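For PyTorch's nn.CrossEntropyLoss, the first input is raw logits of shape (batch, num_classes) with no softmax applied, and the second is class indices of shape (batch,), not one-hot vectors. A sketch:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    logits = torch.randn(4, 6)             # raw scores; softmax is applied internally
    targets = torch.tensor([0, 2, 5, 1])   # class indices, not one-hot
    loss = criterion(logits, targets)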

A deprecation warning from using size_average

UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
warnings.warn(warning.format(ret))
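The fix is to switch to the reduction argument:

    import torch.nn as nn

    # deprecated: nn.CrossEntropyLoss(size_average=True)
    criterion = nn.CrossEntropyLoss(reduction="mean")  # or "sum" / "none"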

Using the output of a sequence classifier (SequenceClassifierOutput)

A SequenceClassifierOutput (if return_dict=True is passed or when config.return_dict=True) or a tuple of torch.FloatTensor comprising various elements depending on the configuration (BertConfig) and inputs.
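A sketch of reading that output, assuming return_dict=True (the default in recent transformers versions) and the 6-class setup above:

    import torch
    from transformers import BertForSequenceClassification, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=6
    )

    enc = tokenizer(["an example sentence"], return_tensors="pt")
    labels = torch.tensor([2])

    outputs = model(**enc, labels=labels)   # a SequenceClassifierOutput
    loss = outputs.loss                     # present because labels were passed
    logits = outputs.logits                 # shape (batch, num_labels)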
