Intra- and interobserver agreement is a critical issue in imaging and must be evaluated with the most appropriate statistical test. The intraclass correlation coefficient should be used to assess agreement for quantitative variables.

Keywords: reproducibility of results, interobserver agreement, radiology, kappa test, intraclass correlation coefficient

Agreement between observers (i.e., interobserver agreement) can be quantified with different criteria, and choosing the appropriate one is crucial. If the measurement is qualitative (nominal or ordinal), the proportion of agreement or the kappa coefficient should be used to assess agreement between observers (i.e., interobserver reliability). The kappa coefficient is more informative than the raw percentage of agreement, because the latter does not account for agreement expected by chance alone. If the measurements are quantitative, the intraclass correlation coefficient (ICC) should be used to assess agreement, but with caution, because several ICCs exist and it is therefore important to report the model and type of ICC used. The Bland-Altman method can be used to assess both consistency and agreement, but its application should be limited to comparing two observers. In short, the Cohen kappa test should be used to assess interobserver reliability for qualitative variables (nominal or ordinal).
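To make the chance correction concrete, the following is a minimal sketch in Python (the ratings are hypothetical and not taken from the article) that computes the raw proportion of agreement and Cohen's kappa for two readers:

```python
import numpy as np

# Hypothetical ratings of 10 lesions by two readers (B = benign, M = malignant).
reader_a = np.array(list("BBBBBBBBMM"))
reader_b = np.array(list("BBBBBBBMBM"))

# Raw (observed) proportion of agreement.
p_o = np.mean(reader_a == reader_b)

# Agreement expected by chance alone, from each reader's marginal frequencies.
categories = np.union1d(reader_a, reader_b)
p_e = sum(np.mean(reader_a == c) * np.mean(reader_b == c) for c in categories)

# Cohen's kappa corrects the observed agreement for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)

print(f"raw agreement = {p_o:.2f}")    # 0.80: looks high...
print(f"kappa         = {kappa:.2f}")  # 0.38: much of that agreement is expected by chance
```

The same value can be obtained with sklearn.metrics.cohen_kappa_score(reader_a, reader_b); for more than two readers, Fleiss' kappa is the usual extension.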
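For the Bland-Altman method mentioned above, a minimal sketch (again with hypothetical measurements, not data from the article) computing the bias and 95% limits of agreement between two readers could look like this:

```python
import numpy as np

# Two readers measure the same 8 lesion diameters (millimetres).
reader_1 = np.array([12.1, 18.4, 25.0, 9.8, 30.2, 14.5, 22.3, 16.0])
reader_2 = np.array([12.9, 17.6, 26.1, 10.4, 29.0, 15.3, 23.5, 15.2])

means = (reader_1 + reader_2) / 2   # x-axis of the Bland-Altman plot
diffs = reader_1 - reader_2         # y-axis of the Bland-Altman plot

bias = diffs.mean()                 # systematic difference between the two readers
sd = diffs.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f} mm, limits of agreement = [{loa_low:.2f}, {loa_high:.2f}] mm")
```

Plotting the differences against the means, together with the bias and the two limits of agreement, gives the usual Bland-Altman plot for visually judging whether the two readers can be used interchangeably.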