Loss functions: KLDivLoss
2021-07-18
ltochange
KL divergence
KL divergence, also called relative entropy, measures how far apart two probability distributions (discrete or continuous) are.
Let $P$ and $Q$ be two probability distributions of a discrete random variable. The KL divergence of $P$ from $Q$ is:
$$D_{KL}(P \,\|\, Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}$$
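As a quick sanity check of the formula, the sketch below computes $D_{KL}(P\|Q)$ directly for two made-up discrete distributions (the values of P and Q are arbitrary, chosen only for illustration):

import math

# Two made-up discrete distributions over 3 outcomes
P = [0.7, 0.2, 0.1]
Q = [0.5, 0.3, 0.2]

# D_KL(P || Q) = sum_i P(i) * log(P(i) / Q(i))
kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
print(kl)  # about 0.085; it is 0 only when P and Q are identical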
KLDivLoss
KLDivLoss operates on a batch of $N$ samples $(x, y)$: $x$ is the network output, already normalized and log-transformed (i.e. log-probabilities), and $y$ is the ground-truth label (interpreted as probabilities by default), with the same shape as $x$.
The loss for the $n$-th sample is computed as:
$$l_n = y_n \cdot (\log y_n - x_n)$$
class KLDivLoss(_Loss):
    __constants__ = ['reduction']

    def __init__(self, size_average=None, reduce=None, reduction='mean'):
        super(KLDivLoss, self).__init__(size_average, reduce, reduction)

    def forward(self, input, target):
        # The module simply delegates to the functional form
        return F.kl_div(input, target, reduction=self.reduction)
In PyTorch this loss is implemented by the torch.nn.KLDivLoss class; you can also call the F.kl_div function directly. The size_average and reduce arguments in the code above are deprecated. reduction takes one of four values: 'mean', 'batchmean', 'sum' and 'none', each producing a different return value; the default is 'mean'. Since the module just forwards to F.kl_div, the two call styles agree, as the sketch below checks.
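A minimal sketch of that check, using arbitrary random tensors for the input and target (the shapes and seed are made up for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Arbitrary log-probability input and probability target, only to compare the two call styles
log_probs = F.log_softmax(torch.randn(2, 3), dim=1)
probs = F.softmax(torch.randn(2, 3), dim=1)

module_out = nn.KLDivLoss(reduction="sum")(log_probs, probs)
func_out = F.kl_div(log_probs, probs, reduction="sum")
print(torch.allclose(module_out, func_out))  # True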
Example:
import torch
import torch.nn as nn
import math
# Reproduce KLDivLoss(reduction='mean') by hand: y * (log y - x), averaged over all elements
def validate_loss(output, target):
    val = 0
    for li_x, li_y in zip(output, target):
        for i, xy in enumerate(zip(li_x, li_y)):
            x, y = xy
            loss_val = y * (math.log(y, math.e) - x)
            val += loss_val
    return val / output.nelement()
torch.manual_seed(20)
loss = nn.KLDivLoss()
input = torch.Tensor([[-2, -6, -8], [-7, -1, -2], [-1, -9, -2.3], [-1.9, -2.8, -5.4]])
target = torch.Tensor([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.5, 0.2, 0.3], [0.4, 0.3, 0.3]])
output = loss(input, target)
print("default loss:", output)
output = validate_loss(input, target)
print("validate loss:", output)
loss = nn.KLDivLoss(reduction="batchmean")
output = loss(input, target)
print("batchmean loss:", output)
loss = nn.KLDivLoss(reduction="mean")
output = loss(input, target)
print("mean loss:", output)
loss = nn.KLDivLoss(reduction="none")
output = loss(input, target)
print("none loss:", output)
Output:
default loss: tensor(0.6209)
validate loss: tensor(0.6209)
batchmean loss: tensor(1.8626)
mean loss: tensor(0.6209)
none loss: tensor([[1.4215, 0.3697, 0.5697],
[0.4697, 0.4503, 0.0781],
[0.1534, 1.4781, 0.3288],
[0.3935, 0.4788, 1.2588]])
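In practice the network usually produces raw logits, so the input has to be normalized and log-transformed before being handed to KLDivLoss. A minimal sketch of that typical usage, with made-up logits and soft targets standing in for a real model and dataset:

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
criterion = nn.KLDivLoss(reduction="batchmean")  # 'batchmean' divides by the batch size, matching the per-sample KL definition

logits = torch.randn(4, 3, requires_grad=True)       # stand-in for raw, unnormalized network outputs
soft_targets = F.softmax(torch.randn(4, 3), dim=1)   # stand-in for a probability distribution per sample

# KLDivLoss expects log-probabilities as input, so normalize and take the log in one step
loss = criterion(F.log_softmax(logits, dim=1), soft_targets)
loss.backward()
print(loss.item())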