In a study, I am measuring the Intersection over Union (IoU) and the mean average precision (mAP) over different IoU thresholds for a given deep learning classification model, using the ground-truth annotations provided by an expert radiologist.

Similarly, I measure the IoU and mAP values against the ground-truth annotations provided by a second expert radiologist. As a result, I have mAP1 and IoU1 for the first reader, and mAP2 and IoU2 for the second.

Is there a parameter that combines mAP1 and mAP2 and/or IoU1 and IoU2 to give a measure of inter-reader variability?

Both readers annotated the same 100 images, and the multiple bounding-box annotations for these images are available as JSON files. Can these JSON files be used to derive a parameter that measures inter-reader variability?
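One way I have been considering is to compare the two readers' boxes directly, treating one reader as reference and the other as "predictions", and reporting the mean IoU of matched box pairs plus the number of boxes each reader drew that the other did not. Below is a minimal sketch of that idea, assuming each JSON file maps an image ID to a list of [x1, y1, x2, y2] boxes; the file names reader1.json/reader2.json, that layout, and the 0.5 matching threshold are placeholders, not the actual annotation format.

```python
import json

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_boxes(boxes1, boxes2, thresh=0.5):
    """Greedily pair boxes across readers by descending IoU.

    Returns the IoUs of matched pairs plus the counts of boxes
    each reader drew that have no counterpart from the other.
    """
    pairs = sorted(
        ((iou(b1, b2), i, j)
         for i, b1 in enumerate(boxes1)
         for j, b2 in enumerate(boxes2)),
        reverse=True,
    )
    used1, used2, matched = set(), set(), []
    for score, i, j in pairs:
        if score < thresh:
            break
        if i not in used1 and j not in used2:
            used1.add(i)
            used2.add(j)
            matched.append(score)
    return matched, len(boxes1) - len(used1), len(boxes2) - len(used2)

# Hypothetical file names and layout: {"image_001": [[x1, y1, x2, y2], ...], ...}
with open("reader1.json") as f1, open("reader2.json") as f2:
    r1, r2 = json.load(f1), json.load(f2)

all_ious, only1, only2 = [], 0, 0
for image_id in r1.keys() & r2.keys():
    matched, m1, m2 = match_boxes(r1[image_id], r2[image_id])
    all_ious.extend(matched)
    only1 += m1
    only2 += m2

mean_iou = sum(all_ious) / len(all_ious) if all_ious else 0.0
print(f"Matched pairs: {len(all_ious)}, mean IoU of matches: {mean_iou:.3f}")
print(f"Unmatched boxes: reader1={only1}, reader2={only2}")
```

The mean IoU of matched pairs summarizes how closely the two readers agree on box placement, while the unmatched counts capture lesions that only one reader marked; I am unsure whether this ad-hoc pair is an accepted inter-reader variability parameter, which is why I am asking.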
