Loss Functions: MarginRankingLoss
2021-07-17 ltochange
MarginRankingLoss
A pairwise ranking loss function.
For a batch of $N$ samples, $x_1$ and $x_2$ are the two given inputs to be ranked, and $y$ is the ground-truth label, taking values in $\{1, -1\}$. When $y = 1$, $x_1$ should be ranked before $x_2$; when $y = -1$, $x_1$ should be ranked after $x_2$. The loss for the $n$-th sample is computed as:

$$\mathrm{loss}_n = \max\left(0,\; -y_n \left(x_{1,n} - x_{2,n}\right) + \mathrm{margin}\right)$$

If the pair is ranked correctly and $y_n (x_{1,n} - x_{2,n}) \ge \mathrm{margin}$, the loss is 0; otherwise the loss is $-y_n (x_{1,n} - x_{2,n}) + \mathrm{margin}$.
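For intuition, a small worked example with made-up numbers: take $\mathrm{margin} = 0.1$, $x_1 = 0.5$, $x_2 = 0.3$. With $y = 1$ the pair is ordered correctly by more than the margin, so $\mathrm{loss} = \max(0, -(0.5 - 0.3) + 0.1) = \max(0, -0.1) = 0$. With $y = -1$ the pair is misordered, so $\mathrm{loss} = \max(0, (0.5 - 0.3) + 0.1) = 0.3$.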
class MarginRankingLoss(_Loss):
    __constants__ = ['margin', 'reduction']

    def __init__(self, margin=0., size_average=None, reduce=None, reduction='mean'):
        super(MarginRankingLoss, self).__init__(size_average, reduce, reduction)
        self.margin = margin

    def forward(self, input1, input2, target):
        return F.margin_ranking_loss(input1, input2, target, margin=self.margin, reduction=self.reduction)
In PyTorch this is implemented by the torch.nn.MarginRankingLoss class; you can also call the F.margin_ranking_loss function directly. The size_average and reduce arguments in the code above are deprecated. reduction takes one of three values, mean, sum, or none, each giving a different return value; the default is mean, which corresponds to averaging the per-sample losses as described above. margin defaults to 0.
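As a quick sketch of the three reduction modes, the functional form can be called directly (the tensor values below are made up for illustration):

import torch
import torch.nn.functional as F

# Made-up scores for two items per pair, plus ground-truth labels.
x1 = torch.tensor([0.5, 0.2, 0.9])
x2 = torch.tensor([0.3, 0.8, 0.1])
y = torch.tensor([1., -1., -1.])

# 'none' keeps the per-sample losses, 'sum' adds them, 'mean' (the default) averages them.
print(F.margin_ranking_loss(x1, x2, y, margin=0., reduction='none'))  # tensor([0.0000, 0.0000, 0.8000])
print(F.margin_ranking_loss(x1, x2, y, margin=0., reduction='sum'))   # tensor(0.8000)
print(F.margin_ranking_loss(x1, x2, y, margin=0., reduction='mean'))  # tensor(0.2667)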
Example:
import torch
import torch.nn as nn

def validate_MarginRankingLoss(input1, input2, target, margin):
    # Recompute the loss by hand: mean of max(0, -y * (x1 - x2) + margin).
    val = 0
    for x1, x2, y in zip(input1, input2, target):
        loss_val = max(0, -y * (x1 - x2) + margin)
        val += loss_val
    return val / input1.nelement()

torch.manual_seed(10)
margin = 0
loss = nn.MarginRankingLoss()
input1 = torch.randn([3], requires_grad=True)
input2 = torch.randn([3], requires_grad=True)
target = torch.tensor([1, -1, -1])
print(target)

# Built-in loss (reduction='mean' by default).
output = loss(input1, input2, target)
print(output.item())

# The manual computation matches the built-in result.
output = validate_MarginRankingLoss(input1, input2, target, margin)
print(output.item())

# reduction="none" returns the per-sample losses instead of their mean.
loss = nn.MarginRankingLoss(reduction="none")
output = loss(input1, input2, target)
print(output)
Output:
tensor([ 1, -1, -1])
0.015400052070617676
0.015400052070617676
tensor([0.0000, 0.0000, 0.0462], grad_fn=<ClampMinBackward>)
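Continuing the same example, a brief sketch of a nonzero margin (the value 0.5 is an arbitrary choice): a pair now incurs loss unless it is ordered correctly by at least the margin, so losses that were 0 above can become positive.

# Reuses input1, input2, target from the example above.
loss = nn.MarginRankingLoss(margin=0.5, reduction="none")
output = loss(input1, input2, target)
print(output)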