mAP (mean Average Precision) in Object Detection

2020-09-09  wzNote

The most common metric for measuring how accurate an object detector is, is AP (Average Precision). AP is the average precision taken over recall values from 0 to 1. Before explaining AP, we first need to understand precision, recall, and IoU.

Precision & Recall

Example: a detector returns 10 ranked detections, and there are 5 ground-truth objects in total (so each true positive raises recall by 0.2). Precision and recall are accumulated down the ranked list:

Rank  Correct?  Precision  Recall
 1    True      1.00       0.2
 2    True      1.00       0.4
 3    False     0.67       0.4
 4    False     0.50       0.4
 5    False     0.40       0.4
 6    True      0.50       0.6
 7    True      0.57       0.8
 8    False     0.50       0.8
 9    False     0.44       0.8
10    True      0.50       1.0

Precision measures how accurate the predictions are: the fraction of predictions that are correct.

Recall measures how well the detector finds the positives: the fraction of ground-truth objects that are detected.

Mathematically:
Precision = \frac{TP}{TP+FP}
Recall = \frac{TP}{TP+FN}
where:
TP = true positives, TN = true negatives, FP = false positives, FN = false negatives
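The running precision and recall in the table above can be reproduced with a short script. The list of `True`/`False` outcomes is taken from the table, and the total of 5 ground-truth objects is inferred from its recall column:

```python
# Reproduce the running precision/recall from the ranked-detections table.
correct = [True, True, False, False, False, True, True, False, False, True]
num_gt = 5  # total ground-truth positives (inferred from the table's recall values)

tp = fp = 0
for rank, is_tp in enumerate(correct, start=1):
    if is_tp:
        tp += 1
    else:
        fp += 1
    precision = tp / (tp + fp)  # TP / (TP + FP)
    recall = tp / num_gt        # TP / (TP + FN); TP + FN = all ground truths
    print(f"{rank:>4}  {str(is_tp):<7}  {precision:.2f}  {recall:.1f}")
```

Note that precision can go up and down as we move through the ranking, while recall only ever increases.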

IoU (Intersection over Union)

[Figure: definition of IoU — area of overlap divided by area of union]

IoU measures how much two bboxes overlap. In object detection, it is used to measure the overlap between a predicted bbox and a ground truth bbox. IoU ranges over [0, 1]: the larger the value, the greater the overlap, and thus the more accurate the prediction.

[Figure: ground truth bbox (blue), predicted bbox (red)]

COCO mAP
In the COCO competition, AP is the prediction accuracy averaged over 10 IoU thresholds (from 0.5 to 0.95 in steps of 0.05) and over all 80 categories.
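This averaging can be sketched as follows. It is a simplified illustration, not the official pycocotools evaluation: `ap_from_pr` applies COCO's 101-point interpolation (for each recall level 0.00, 0.01, …, 1.00, take the maximum precision at any recall ≥ that level, then average), and `coco_map`, along with its input format, is a hypothetical helper that averages AP over classes and IoU thresholds:

```python
import numpy as np

def ap_from_pr(recalls, precisions):
    # COCO-style 101-point interpolated AP from a precision/recall curve.
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recalls >= r
        # Interpolated precision: best precision at any recall >= r.
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 101

# The 10 COCO IoU thresholds: 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = np.arange(0.50, 1.00, 0.05)

def coco_map(per_class_pr):
    # per_class_pr: {class_name: {iou_threshold: (recalls, precisions)}}
    # (hypothetical input format for this sketch)
    aps = [ap_from_pr(*pr[t]) for pr in per_class_pr.values() for t in pr]
    return float(np.mean(aps))
```

In other words, an AP curve is computed separately for each (category, IoU threshold) pair, and the single reported COCO AP number is the mean over all of them.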

The AP results for YOLOv3 are shown in the figure below:

[Figure: YOLOv3 AP results on COCO]
Here, AP_{50} is the AP at IoU = 0.50.
In COCO, there is no distinction between AP and mAP:

AP is averaged over all categories. Traditionally, this is called “mean average precision” (mAP). We make no distinction between AP and mAP (and likewise AR and mAR) and assume the difference is clear from context.

