
Loss Functions: BCELoss

2021-06-11  ltochange

BCELoss

Binary cross-entropy loss

Single-label binary classification

Each input sample maps to a single classification output, e.g., positive vs. negative in sentiment classification.

For a batch D(x, y) containing N samples, the loss is computed as:

loss=\frac{1}{N} \sum_{n=1}^{N} l_{n}

where l_{n}=-w\left[y_{n} \cdot \log x_{n}+\left(1-y_{n}\right) \cdot \log \left(1-x_{n}\right)\right] is the loss for the n-th sample

w is a hyperparameter. For single-label binary classification, whether or not you set w makes no real difference: with only one label, it simply rescales the whole loss by a constant.

In general, each element of y takes the value 0 or 1, representing the true class.
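
As a quick sanity check, one term of this formula can be computed by hand (a minimal sketch; the probability 0.69 is made up for illustration):

import math

x, y, w = 0.69, 1.0, 1.0  # predicted probability, true label, weight
l = -w * (y * math.log(x) + (1 - y) * math.log(1 - x))
print(l)  # -log(0.69) ≈ 0.3711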

class BCELoss(_WeightedLoss):
    __constants__ = ['reduction', 'weight']
    def __init__(self, weight=None, size_average=None, reduce=None, reduction='mean'):
        super(BCELoss, self).__init__(weight, size_average, reduce, reduction)
    def forward(self, input, target):
        return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)

PyTorch implements this in the torch.nn.BCELoss class; you can also call F.binary_cross_entropy directly. The weight argument in the code is the w above; size_average and reduce are deprecated. reduction takes one of three values (mean, sum, none), each giving a different return value \ell(x, y); the default is mean:

L=\left\{l_{1}, \ldots, l_{N}\right\}

\ell(x, y)=\left\{\begin{array}{ll} L, & \text { if reduction }=\text { 'none' } \\ {mean}(L), & \text { if reduction }=\text { 'mean' } \\ {sum}(L), & \text { if reduction }=\text { 'sum' }\end{array} \right.

When reduction is mean, this matches the loss computation above.
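
To make the three reduction modes concrete, here is a small sketch with made-up probabilities:

import torch
import torch.nn.functional as F

output = torch.tensor([0.9, 0.2, 0.7])  # predicted probabilities
target = torch.tensor([1.0, 0.0, 1.0])  # true labels
l_none = F.binary_cross_entropy(output, target, reduction='none')  # per-element losses
l_mean = F.binary_cross_entropy(output, target, reduction='mean')  # mean of l_none
l_sum = F.binary_cross_entropy(output, target, reduction='sum')    # sum of l_none
print(l_none, l_mean.item(), l_sum.item())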

Verification function, to show that the loss function's output matches our understanding:

def validate_loss(output, target, weight=None, pos_weight=None):
    # pos_weight handles the positive/negative imbalance within each label
    if pos_weight is None:
        label_size = output.size()[1]
        pos_weight = torch.ones(label_size)
    # weight handles the imbalance across labels
    if weight is None:
        label_size = output.size()[1]
        weight = torch.ones(label_size)

    val = 0
    for li_x, li_y in zip(output, target):
        for i, xy in enumerate(zip(li_x, li_y)):
            x, y = xy
            loss_val = pos_weight[i] * y * math.log(x) + (1 - y) * math.log(1 - x)
            val += weight[i] * loss_val
    return -val / (output.size()[0] * output.size(1))
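
The element-wise loops above are easy to follow but slow; the same computation can be expressed with tensor operations (an equivalent sketch, assuming the same torch import as below):

def validate_loss_vec(output, target, weight=None, pos_weight=None):
    # vectorized equivalent of validate_loss above
    if pos_weight is None:
        pos_weight = torch.ones(output.size(1))
    if weight is None:
        weight = torch.ones(output.size(1))
    loss = pos_weight * target * torch.log(output) + (1 - target) * torch.log(1 - output)
    return -(weight * loss).mean()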

Computing the loss with the torch.nn.BCELoss class

import torch
import torch.nn.functional as F
import torch.nn as nn
import math

# single-label binary classification
m = nn.Sigmoid()
weight = torch.tensor([0.8])
loss_fct = nn.BCELoss(reduction="mean", weight=weight)
input_src = torch.Tensor([[0.8], [0.9], [0.3]])
target = torch.Tensor([[1], [1], [0]])
print(input_src.size())
print(target.size())
output = m(input_src)
loss = loss_fct(output, target)
print(loss.item())

# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
# output
torch.Size([3, 1])
torch.Size([3, 1])
0.4177626073360443
0.4177626371383667
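
To see why w is inessential here, divide by the unweighted loss; with a single label the weight only rescales the result (a quick check reusing the tensors above):

loss_unweighted = nn.BCELoss(reduction="mean")(output, target)
print(loss.item() / loss_unweighted.item())  # ≈ 0.8, exactly the weight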

Computing the loss with the binary_cross_entropy function

# single-label binary classification
weight = torch.tensor([0.8])
input_src = torch.Tensor([[0.8], [0.9], [0.3]])
target = torch.Tensor([[1], [1], [0]])
print(input_src.size())
print(target.size())
output = torch.sigmoid(input_src)
loss = F.binary_cross_entropy(output, target, weight=weight, reduction='mean')
print(loss.item())

# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
torch.Size([3, 1])
torch.Size([3, 1])
0.4177626073360443
0.4177626371383667

Multi-label binary classification

What is multi-label classification? Consider assigning topics to an article: it can be both technology and education, so technology and education are two labels of that article. Or consider deciding what a picture contains: it may show both a house and a road, so house and road are the picture's two labels. Each label is treated as its own binary classification: one input sample maps to multiple labels, and each label gets a yes/no decision.
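
In code, a sample's labels are usually encoded as a multi-hot 0/1 vector, one entry per label (an illustrative sketch; the label set here is made up):

import torch

labels = ["tech", "education", "sports"]  # hypothetical label set
article_topics = {"tech", "education"}    # topics of one article
target = torch.tensor([1.0 if l in article_topics else 0.0 for l in labels])
print(target)  # tensor([1., 1., 0.])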

The loss is computed the same way as above. For a batch D(x, y) of N samples, each with M labels, the loss is:
loss=\frac{1}{N} \sum_{n=1}^{N} l_{n}

where l_{n}= \frac{1}{M} \sum_{i=1}^{M}l_{n}^{i} is the loss for the n-th sample

l_{n}^{i}= - w_{i}\left[y_{n}^{i} \cdot \log x_{n}^{i}+\left(1-y_{n}^{i}\right) \cdot \log \left(1-x_{n}^{i}\right)\right]

w_{i} is a hyperparameter for handling sample imbalance across labels: if some label occurs only rarely in the training set, it should receive a higher weight when computing the loss.

L=\left(\begin{array}{c}l_{1}^{1},l_{1}^{2},...,l_{1}^{M} \\ l_{2}^{1},l_{2}^{2},...,l_{2}^{M} \\ \vdots \\ l_{N}^{1},l_{N}^{2},...,l_{N}^{M} \end{array}\right)

\ell(x, y)=\left\{\begin{array}{ll} L, & \text { if reduction }=\text { 'none' } \\ \operatorname{mean}(L), & \text { if reduction }=\text { 'mean' } \\ \operatorname{sum}(L), & \text { if reduction }=\text { 'sum' }\end{array} \right.
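
One common heuristic for choosing w_{i} (an assumption of this post, not something PyTorch prescribes) is to weight each label by the inverse of its frequency in the training set:

import torch

# multi-hot label matrix for a toy training set (made-up numbers)
targets = torch.tensor([[1., 1., 0.],
                        [1., 0., 0.],
                        [1., 0., 0.],
                        [1., 0., 1.]])
freq = targets.mean(dim=0)  # fraction of samples carrying each label
weight = freq.min() / freq  # rarest label gets weight 1, common ones less
print(weight)               # tensor([0.2500, 1.0000, 1.0000])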

A multi-label binary classification example:

import torch
import torch.nn.functional as F
import torch.nn as nn
import math
weight = torch.Tensor([0.8, 1, 0.8])
input = torch.Tensor([[0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3]])
target = torch.Tensor([[1, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]])
print(input.size())
print(target.size())
output = torch.sigmoid(input)
loss = F.binary_cross_entropy(output, target, reduction='none', weight=weight)
print(loss)  # none

loss = F.binary_cross_entropy(output, target, reduction='mean', weight=weight)
print(loss.item())
# verify the computation
validate = validate_loss(output, target, weight)
print(validate.item())
torch.Size([4, 3])
torch.Size([4, 3])
tensor([[0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835],
        [0.2969, 0.3412, 0.6835]])
0.4405061900615692
0.4405062198638916

BCEWithLogitsLoss

Combines a Sigmoid layer and BCELoss in a single class.

PyTorch implements this in the torch.nn.BCEWithLogitsLoss class; you can also call F.binary_cross_entropy_with_logits directly:

class BCEWithLogitsLoss(_Loss):
    def __init__(self, weight: Optional[Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean',
                 pos_weight: Optional[Tensor] = None) -> None:
        super(BCEWithLogitsLoss, self).__init__(size_average, reduce, reduction)
        self.register_buffer('weight', weight)
        self.register_buffer('pos_weight', pos_weight)

    def forward(self, input: Tensor, target: Tensor) -> Tensor:
        assert self.weight is None or isinstance(self.weight, Tensor)
        assert self.pos_weight is None or isinstance(self.pos_weight, Tensor)
        return F.binary_cross_entropy_with_logits(input, target,
                                                  self.weight,
                                                  pos_weight=self.pos_weight,
                                                  reduction=self.reduction)
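
Besides being convenient, fusing the sigmoid into the loss is numerically safer: computing log(sigmoid(x)) directly underflows for large negative logits, while the fused loss can be rewritten in a stable form. A sketch of the idea (not PyTorch's actual source):

import torch

def stable_bce_with_logits(x, y):
    # equivalent to -[y*log(sigmoid(x)) + (1-y)*log(1-sigmoid(x))],
    # rearranged so exp() is only applied to non-positive values
    return (x.clamp(min=0) - x * y + torch.log1p(torch.exp(-x.abs()))).mean()

x = torch.tensor([0.8, 0.9, 0.3])
y = torch.tensor([1., 1., 0.])
print(stable_bce_with_logits(x, y).item())  # matches F.binary_cross_entropy_with_logits(x, y)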

Example showing that BCEWithLogitsLoss = Sigmoid + BCELoss:

m = nn.Sigmoid()
weight = torch.tensor([0.8])
loss_fct = nn.BCELoss(reduction="mean", weight=weight)
loss_fct_logit = nn.BCEWithLogitsLoss(reduction="mean", weight=weight)
input_src = torch.Tensor([0.8, 0.9, 0.3])
target = torch.Tensor([1, 1, 0])
print(input_src)
print(target)
output = m(input_src)
loss = loss_fct(output, target)
loss_logit = loss_fct_logit(input_src, target)
print(loss.item())
print(loss_logit.item())
# identical results
tensor([0.8000, 0.9000, 0.3000])
tensor([1., 1., 0.])
0.4177626371383667
0.4177626371383667

Note that BCEWithLogitsLoss takes one parameter more than BCELoss: pos_weight, the weight of the positive class, which handles the positive/negative imbalance within each label. Concretely, if positive samples dominate a label, set pos_weight < 1; if negative samples dominate, set pos_weight > 1.

input = torch.Tensor([[0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3]])
target = torch.Tensor([[1, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]])
print(input.size())
print(target.size())
output = torch.sigmoid(input)
weight = torch.tensor([0.8, 1, 0.8])
loss = F.binary_cross_entropy_with_logits(input, target, reduction='mean', pos_weight=weight)
print(loss.item())
# verify the computation
validate = validate_loss(output, target, pos_weight=weight)
print(validate.item())
torch.Size([4, 3])
torch.Size([4, 3])
0.49746325612068176
0.49746325612068176
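
In practice, pos_weight is often set per label to the ratio of negative to positive samples (the rule of thumb suggested in the PyTorch docs); a sketch with made-up counts:

# multi-hot label matrix for a toy training set (made-up numbers)
targets = torch.tensor([[1., 1., 0.],
                        [1., 0., 0.],
                        [0., 1., 0.],
                        [1., 0., 1.]])
num_pos = targets.sum(dim=0)                 # positives per label
num_neg = targets.size(0) - num_pos          # negatives per label
pos_weight = num_neg / num_pos.clamp(min=1)  # e.g. 3 neg / 1 pos -> 3.0
print(pos_weight)                            # tensor([0.3333, 1.0000, 3.0000])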