2019-10-18  Luuuuuua

Inter-rater Agreement Kappas

inter-rater reliability == inter-rater agreement == concordance

The Kappa score for inter-rater agreement measures how much consensus there is among raters, i.e. how consistent their scoring judgments are.

Kappa scores range from -1 to 1, but in practice usually fall between 0 and 1. A common interpretation is the scale of Landis and Koch [1]:

| K | Interpretation |
| --- | --- |
| < 0 | Poor agreement |
| 0.0-0.20 | Slight agreement |
| 0.21-0.40 | Fair agreement |
| 0.41-0.60 | Moderate agreement |
| 0.61-0.80 | Substantial agreement |
| 0.81-1.0 | Almost perfect agreement |
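
These bands are easy to turn into a small lookup helper. A minimal Python sketch (the function name interpret_kappa is my own, not from the references):

```python
# A minimal sketch mapping a kappa value to the Landis & Koch labels above;
# the function name interpret_kappa is my own.
def interpret_kappa(k: float) -> str:
    if k < 0:
        return "Poor agreement"
    if k <= 0.20:
        return "Slight agreement"
    if k <= 0.40:
        return "Fair agreement"
    if k <= 0.60:
        return "Moderate agreement"
    if k <= 0.80:
        return "Substantial agreement"
    return "Almost perfect agreement"

print(interpret_kappa(0.40))  # Fair agreement
```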

Cohen's Kappa

Cohen's Kappa measures the agreement between two raters: agreement is reflected when both raters give the same judgment or score on the same item.

Cohen's Kappa should only be used when the following conditions hold:

- the ratings are categorical, with mutually exclusive categories;
- there are exactly two raters;
- the same two raters rate every item;
- the two raters make their judgments independently.

Computing Cohen's Kappa

Note that, in the typical setting, Cohen's Kappa is computed for two raters who each assign every sample to one of two classes:

|  | positive (rater A) | negative (rater A) | Total |
| --- | --- | --- | --- |
| positive (rater B) | n_{11} | n_{12} | n_{1.} |
| negative (rater B) | n_{21} | n_{22} | n_{2.} |
| Total | n_{.1} | n_{.2} | n_{11}+n_{12}+n_{21}+n_{22} |

The formula is:
k = \frac{p_o-p_e}{1-p_e} = 1-\frac{1-p_o}{1-p_e}
where p_o is the relative observed agreement among raters:
p_o=\frac{n_{11}+n_{22}}{n_{11}+n_{12}+n_{21}+n_{22}}
and p_e is the hypothetical probability of chance agreement:
p_e=\frac{n_{.1}*n_{1.}}{(n_{11}+n_{12}+n_{21}+n_{22})^2}+\frac{n_{.2}*n_{2.}}{(n_{11}+n_{12}+n_{21}+n_{22})^2}=\frac{n_{.1}*n_{1.}+n_{.2}*n_{2.}}{(n_{11}+n_{12}+n_{21}+n_{22})^2}
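
As a sanity check on the formula, here is a minimal Python sketch (the function name cohens_kappa_2x2 is my own); it assumes the 2x2 counts are laid out as in the table above, with rows for rater B and columns for rater A:

```python
# A minimal sketch of the Cohen's Kappa formula above.
# Arguments follow the table layout: rows are rater B, columns are rater A.
def cohens_kappa_2x2(n11: int, n12: int, n21: int, n22: int) -> float:
    n = n11 + n12 + n21 + n22              # total number of rated items
    p_o = (n11 + n22) / n                  # observed agreement
    # chance agreement: P(both positive) + P(both negative)
    p_e = ((n11 + n12) * (n11 + n21) + (n21 + n22) * (n12 + n22)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa_2x2(20, 10, 5, 15), 2))  # 0.4, the worked example below
```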
Example

Rater A and rater B classify 50 images as positive or negative. The results are:

|  | positive (rater A) | negative (rater A) | Total |
| --- | --- | --- | --- |
| positive (rater B) | 20 | 10 | 30 |
| negative (rater B) | 5 | 15 | 20 |
| Total | 25 | 25 | 50 |

Step 1: compute p_o
p_o = number\ in\ agreement\ /\ total = (20+15)/50 = 0.70

Step 2: compute p_e
p_e = P(both\ say\ positive\ by\ chance) + P(both\ say\ negative\ by\ chance) = (25/50)*(30/50)+(25/50)*(20/50) = 0.50
Step 3: compute k
k=\frac{p_o-p_e}{1-p_e}=\frac{0.70-0.50}{1-0.50}=0.40
k = 0.40 corresponds to fair agreement.
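
If scikit-learn is installed, the same number can be reproduced by expanding the table back into 50 per-image labels; a sketch (the 1/0 encoding of positive/negative is my own choice):

```python
# Verify the worked example with scikit-learn by expanding the confusion
# matrix into 50 per-image label pairs (1 = positive, 0 = negative).
from sklearn.metrics import cohen_kappa_score

rater_b = [1] * 20 + [1] * 10 + [0] * 5 + [0] * 15
rater_a = [1] * 20 + [0] * 10 + [1] * 5 + [0] * 15
print(round(cohen_kappa_score(rater_a, rater_b), 2))  # 0.4
```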

Fleiss's Kappa

Fleiss's Kappa extends Cohen's Kappa in two ways: it handles any fixed number of raters n (not just two), and the raters who score each item need not be the same individuals.

An example to illustrate the computation of Fleiss's Kappa: 14 raters score 10 items on a scale of 1-5, so N = 10 (items), n = 14 (raters), and k = 5 (categories).

| n_{ij} | 1 | 2 | 3 | 4 | 5 | P_i |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0 | 0 | 0 | 14 | 1.000 |
| 2 | 0 | 2 | 6 | 4 | 2 | 0.253 |
| 3 | 0 | 0 | 3 | 5 | 6 | 0.308 |
| 4 | 0 | 3 | 9 | 2 | 0 | 0.440 |
| 5 | 2 | 2 | 8 | 1 | 1 | 0.330 |
| 6 | 7 | 7 | 0 | 0 | 0 | 0.462 |
| 7 | 3 | 2 | 6 | 3 | 0 | 0.242 |
| 8 | 2 | 5 | 3 | 2 | 2 | 0.176 |
| 9 | 6 | 5 | 2 | 1 | 0 | 0.286 |
| 10 | 0 | 2 | 2 | 3 | 7 | 0.286 |
| Total | 20 | 28 | 39 | 21 | 32 | 140 |
| p_j | 0.143 | 0.200 | 0.279 | 0.150 | 0.229 |  |

Step 1: compute p_j. Taking p_1 as an example, it is the probability that a rater assigns a score of 1:
p_1 = the\ column\ total\ /\ the\ total\ number\ of\ ratings = 20/(14*10) = 0.143
Step 2: compute P_i. Taking P_2 as an example, it measures how much the 14 raters agree on item 2:
P_2=\frac{the\ sum\ of\ squares\ of\ the\ row\ minus\ n}{n*(n-1)}=\frac{0^2+2^2+6^2+4^2+2^2-14}{14*(14-1)}=0.253
Step 3: compute P_o and P_e, then k
P_o=\frac{1}{N}\sum_{i=1}^{N}P_i=\frac{1}{10}*3.78=0.378

P_e=\sum_{j=1}^{k}p_j^2=0.143^2+0.200^2+0.279^2+0.150^2+0.229^2=0.213

k=\frac{P_o-P_e}{1-P_e}=\frac{0.378-0.213}{1-0.213}=0.210

k = 0.210 corresponds to fair agreement.
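
The three steps above can be reproduced with a minimal NumPy sketch (the variable names are my own), feeding in the 10x5 rating-count matrix from the table:

```python
# A minimal NumPy sketch of Fleiss's Kappa, using the 10x5 rating-count
# matrix from the table (rows = items, columns = categories 1-5).
import numpy as np

counts = np.array([
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
])

N = counts.shape[0]                     # N = 10 items
n = counts[0].sum()                     # n = 14 raters per item
p_j = counts.sum(axis=0) / (N * n)      # per-category proportions
P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
P_o = P_i.mean()                        # about 0.378
P_e = np.square(p_j).sum()              # about 0.213
kappa = (P_o - P_e) / (1 - P_e)
print(round(kappa, 2))                  # about 0.21
```

For comparison, the same counts matrix can also be passed to statsmodels.stats.inter_rater.fleiss_kappa, which should return the same value.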

[1] Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74

[2] http://www.pmean.com/definitions/kappa.htm

[3] https://www.statisticshowto.datasciencecentral.com/cohens-kappa-statistic/

[4] https://www.statisticshowto.datasciencecentral.com/fleiss-kappa/

[5] https://github.com/amirziai/learning/blob/master/statistics/Inter-rater%20agreement%20kappas.ipynb

[6] https://blog.csdn.net/qq_31113079/article/details/76216611
