nlp - Using the kappa coefficient to evaluate results of crowdsourcing


I have 4 sets of manually tagged data (labels 0 and 1) from 4 different people, and I have final labelled data (also 0 and 1) derived from those 4 sets. I have calculated the degree of agreement between each pair of users: a-b: 0.3276, a-c: 0.3263, a-d: 0.4917, b-c: 0.2896, b-d: 0.4052, c-d: 0.3540.

I do not know how to use these to calculate a single agreement value for the final data set. Please help.

The kappa coefficient works for a pair of annotators. For more than two, you need to employ an extension of it. One popular way of doing this is to use the extension proposed by Richard Light in 1971, which averages Cohen's kappa over all annotator pairs, or to use the average expected agreement across annotator pairs, as proposed by Davies and Fleiss in 1982. I am not aware of a readily available calculator that computes these for you, so you may have to implement the code yourself; a sketch of the first approach follows.
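Here is a minimal sketch of Light's extension, assuming each annotator's labels are stored as a list aligned by item; the labels below are invented for illustration, and the pairwise work is done by scikit-learn's cohen_kappa_score:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Each annotator's labels for the same items, in the same order.
# These values are made up; substitute your own four label sets.
annotations = {
    "a": [0, 1, 1, 0, 1, 0],
    "b": [0, 1, 0, 0, 1, 1],
    "c": [1, 1, 1, 0, 0, 0],
    "d": [0, 1, 1, 0, 1, 0],
}

# Cohen's kappa for every annotator pair.
pairwise = {
    (x, y): cohen_kappa_score(annotations[x], annotations[y])
    for x, y in combinations(annotations, 2)
}

# Light's (1971) kappa: the mean of the pairwise kappas.
lights_kappa = sum(pairwise.values()) / len(pairwise)
print(pairwise)
print("Light's kappa:", lights_kappa)
```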

There is, however, a Wikipedia page on Fleiss' kappa that you might find helpful.
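Fleiss' kappa itself can be computed with statsmodels rather than by hand; this is a sketch under the assumption that your ratings form an items-by-annotators matrix (the values below are invented):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are items, columns are annotators a-d; labels are 0/1.
ratings = np.array([
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
])

# aggregate_raters converts raw labels into per-item category counts,
# the input format fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```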

These techniques can only be used for nominal variables. If your data is not on a nominal scale, use a different measure, such as the intraclass correlation coefficient.
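For the interval-scale case, here is a hedged sketch using the third-party pingouin package, whose intraclass_corr reports several ICC variants; the items, raters, and scores below are all invented:

```python
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (item, rater) pair.
df = pd.DataFrame({
    "item":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater": ["a", "b", "c"] * 3,
    "score": [4.0, 4.5, 4.0, 2.0, 2.5, 3.0, 5.0, 4.5, 5.0],
})

# Returns a table with one row per ICC variant (ICC1, ICC2, ...).
icc = pg.intraclass_corr(data=df, targets="item",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```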

