Evaluation Method, Dataset Size or Dataset Content: How to Evaluate Algorithms for Image Matching?


Kanwal N., Bostanci G. E., Clark A. F.

Journal of Mathematical Imaging and Vision, vol. 55, no. 3, pp. 378-400, 2016 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 55 Issue: 3
  • Publication Date: 2016
  • DOI: 10.1007/s10851-015-0626-4
  • Journal Name: Journal of Mathematical Imaging and Vision
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Pages: pp. 378-400
  • Keywords: Performance characterization, Feature matching, Homography, FALSE DISCOVERY RATE, MULTIPLE, PERFORMANCE
  • Ankara University Affiliated: Yes

Abstract

Most vision papers have to include some evaluation work in order to demonstrate that the proposed algorithm is an improvement on existing ones. Generally, these evaluation results are presented in tabular or graphical form. Neither is ideal, because there is no indication of whether any performance differences are statistically significant. Moreover, the size and nature of the dataset used for evaluation obviously have a bearing on the results, yet neither factor is usually discussed. This paper evaluates the effectiveness of commonly used performance characterization metrics for image feature detection and description in matching problems, and explores the use of statistical tests such as McNemar's test and ANOVA as better alternatives.
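To illustrate the kind of paired comparison the abstract refers to, the sketch below applies the standard continuity-corrected Z form of McNemar's test to two algorithms evaluated on the same images. This is not the paper's code: it assumes each algorithm's output on every test image has already been reduced to a binary success/failure decision (e.g. against homography ground truth), and the outcome lists for the two detectors are invented for illustration.

```python
import math
from scipy.stats import norm  # normal approximation for the p-value


def mcnemar_z(successes_a, successes_b):
    """McNemar's test on paired success/failure outcomes.

    successes_a, successes_b: equal-length sequences of booleans, one
    entry per test image, True if that algorithm matched the image
    correctly. Returns (z, p) using the continuity-corrected Z
    statistic; assumes at least one discordant pair exists.
    """
    # Only discordant pairs matter: images where exactly one
    # algorithm succeeded.
    n_ab = sum(a and not b for a, b in zip(successes_a, successes_b))
    n_ba = sum(b and not a for a, b in zip(successes_a, successes_b))
    z = (abs(n_ab - n_ba) - 1) / math.sqrt(n_ab + n_ba)
    p = 2 * norm.sf(z)  # two-sided p-value
    return z, p


# Hypothetical per-image outcomes for two feature detectors.
detector_a_ok = [True, True, False, True, True, True, False, True, True, True]
detector_b_ok = [True, False, False, True, False, True, False, False, True, False]
z, p = mcnemar_z(detector_a_ok, detector_b_ok)
print(f"z = {z:.2f}, p = {p:.3f}")  # z > 1.96 => significant at the 5% level
```

Unlike a bare table of match counts, the Z score makes explicit whether the observed difference could plausibly be due to chance; in this invented example z = 1.50, so the apparent gap between the two detectors would not be significant at the 5% level.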