IEEE ACCESS, vol. 10, pp. 1-16, 2022 (SCI-Expanded)
The final stage of the production process in industry is quality control. Quality control answers the question of whether there is a defect on the surface of a product. Quality control is frequently performed manually. The disadvantages of manual quality control are a high error rate (low accuracy), a low production rate (low performance), and a high expense rate (high cost). The solution is automatic quality control using machine vision systems. These systems classify products and segment the defects on their surfaces by processing, in real time, the images taken by cameras during the production process. Some products, such as military cartridge cases, have metallic, cylindrical, highly reflective surfaces with non-uniform texture, so the quality of the images is very important. Another factor that affects accuracy is the non-uniform texture of the product surface: distinguishing the product's non-uniform texture from defect texture is a challenging problem. In previous works, this problem was addressed with image processing and deep learning techniques, and accuracies of 97% and 96% were obtained, respectively. According to NATO standards, the classification accuracy for military cartridge cases should be above 99%. In this work, a methodology for classifying military cartridge cases and segmenting the defects on their non-uniformly textured surfaces is proposed to increase accuracy. Within the scope of the proposed methodology, datasets with non-defective, defective, and labeled/masked image classes of the cartridge cases were created; deep learning models to classify the military cartridge cases and segment the defects on their surfaces were proposed and implemented; and the obtained results were evaluated using metrics such as Accuracy, Precision, Recall, F1-Score, Jaccard Index (JI), and Mean Intersection over Union (mIoU). The obtained results showed that the proposed methodology increased classification accuracy to 100% with the DenseNet169 model and the segmentation F1-Score to 92.1% with the Improved U-Net and ResUnet models.
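For reference, the evaluation metrics named in the abstract follow their standard definitions in terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). These formulations are the conventional ones, stated here for clarity rather than taken from the paper itself:

    Accuracy   = (TP + TN) / (TP + TN + FP + FN)
    Precision  = TP / (TP + FP)
    Recall     = TP / (TP + FN)
    F1-Score   = 2 * (Precision * Recall) / (Precision + Recall)
    JI (= IoU) = TP / (TP + FP + FN)
    mIoU       = mean of the per-class IoU values over all classes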
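As a concrete illustration of the classification stage, the sketch below builds a DenseNet169-based binary classifier (defective vs. non-defective) via transfer learning in Keras. The input size, frozen backbone, head layers, and hyperparameters are illustrative assumptions; the paper's exact architecture and training setup are not reproduced here.

    # Minimal sketch of a DenseNet169 binary classifier for cartridge-case
    # images using Keras transfer learning. Input size, head layers, and
    # hyperparameters are assumptions, not the paper's exact configuration.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import DenseNet169

    def build_classifier(input_shape=(224, 224, 3)):
        # ImageNet-pretrained backbone with the classification head removed.
        base = DenseNet169(include_top=False, weights="imagenet",
                           input_shape=input_shape)
        base.trainable = False  # freeze the backbone for initial training

        inputs = layers.Input(shape=input_shape)
        x = base(inputs, training=False)
        x = layers.GlobalAveragePooling2D()(x)
        x = layers.Dropout(0.3)(x)
        # Single sigmoid unit: defective vs. non-defective.
        outputs = layers.Dense(1, activation="sigmoid")(x)

        model = models.Model(inputs, outputs)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=["accuracy",
                               tf.keras.metrics.Precision(),
                               tf.keras.metrics.Recall()])
        return model

    model = build_classifier()
    model.summary()

Freezing the pretrained backbone and training only the small head is a common first step when the defect dataset is modest in size; the backbone can then be unfrozen for fine-tuning at a lower learning rate.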