BCELoss and BCEWithLogitsLoss

2021-10-06  三方斜阳

When computing the loss for multi-label classification, the usual choices are BCELoss and BCEWithLogitsLoss. The difference between the two is as follows:

  1. Prepare the input tensor `input`:
import torch
import torch.nn as nn
input = torch.tensor([[-0.4089, -1.2471,  0.5907],
                      [-0.4897, -0.8267, -0.7349],
                      [ 0.5241, -0.1246, -0.4751]])
print(input)
tensor([[-0.4089, -1.2471,  0.5907],
        [-0.4897, -0.8267, -0.7349],
        [ 0.5241, -0.1246, -0.4751]])
  2. Apply Sigmoid to constrain the outputs to the range 0 to 1:
m = nn.Sigmoid()
S_input = m(input)
print(S_input)
tensor([[0.3992, 0.2232, 0.6435],
        [0.3800, 0.3043, 0.3241],
        [0.6281, 0.4689, 0.3834]])
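Sigmoid itself is just the logistic function 1 / (1 + e^(-x)). As a sanity check, a minimal pure-Python sketch (independent of PyTorch) reproduces the first entry of `S_input`:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Reproduce the first element of S_input above
p = sigmoid(-0.4089)
print(round(p, 4))  # 0.3992
```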
  3. Prepare the target values `target`:
target = torch.FloatTensor([[0, 1, 1], [0, 0, 1], [1, 0, 1]])
print(target)
tensor([[0., 1., 1.],
        [0., 0., 1.],
        [1., 0., 1.]])
  4. Compute the loss with BCELoss:
BCELoss = nn.BCELoss()
loss = BCELoss(S_input, target)
print(loss)
tensor(0.7193)
  5. To see how BCELoss computes the multi-label loss, apply the per-element formula loss = -[y * log(p) + (1 - y) * log(1 - p)] manually and verify that the result matches:
loss = 0.0
for i in range(S_input.shape[0]):
    for j in range(S_input.shape[1]):
        loss += -(target[i][j] * torch.log(S_input[i][j]) + (1 - target[i][j]) * torch.log(1 - S_input[i][j]))
print(loss / (S_input.shape[0] * S_input.shape[1]))  # reduction='mean' is the default
tensor(0.7193)
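The same element-wise computation can be reproduced without PyTorch at all. The sketch below (plain Python, with the logits and targets copied from the example above) lands on the same mean of roughly 0.7193:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Logits and targets from the example above
inputs = [[-0.4089, -1.2471,  0.5907],
          [-0.4897, -0.8267, -0.7349],
          [ 0.5241, -0.1246, -0.4751]]
targets = [[0, 1, 1],
           [0, 0, 1],
           [1, 0, 1]]

total = 0.0
for row_x, row_y in zip(inputs, targets):
    for x, y in zip(row_x, row_y):
        p = sigmoid(x)
        # Per-element binary cross-entropy
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))

mean_loss = total / 9  # 'mean' reduction over all 9 elements
print(round(mean_loss, 4))  # 0.7193
```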
  6. BCEWithLogitsLoss simply folds the Sigmoid and the log computation above into a single call:
BCEWithLogitsLoss = nn.BCEWithLogitsLoss()
loss = BCEWithLogitsLoss(input, target)
print(loss)
tensor(0.7193)
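Beyond convenience, a practical reason to prefer BCEWithLogitsLoss over Sigmoid followed by BCELoss is numerical stability: working directly on the logits lets the loss use a log-sum-exp style formulation instead of computing log(sigmoid(x)) in two steps. The pure-Python sketch below illustrates that stable per-element formula (an illustration of the idea, not PyTorch's actual source):

```python
import math

def stable_bce_with_logits(x, y):
    # Stable rewrite of -(y*log(sigmoid(x)) + (1-y)*log(1-sigmoid(x))):
    #   max(x, 0) - x*y + log(1 + exp(-|x|))
    # exp never sees a large positive argument, so it cannot overflow.
    return max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))

# Agrees with the naive sigmoid-then-log route for moderate logits ...
naive = -math.log(1.0 / (1.0 + math.exp(-0.5907)))       # y = 1, x = 0.5907
print(abs(stable_bce_with_logits(0.5907, 1.0) - naive))  # ~0

# ... and stays finite where the naive route breaks down:
print(stable_bce_with_logits(-800.0, 1.0))  # 800.0
# (math.exp(800) in the naive sigmoid would raise OverflowError)
```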