Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation


Unlu R., Kiris R.

VISUAL COMPUTER, vol.38, no.2, pp.685-694, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 38 Issue: 2
  • Publication Date: 2022
  • DOI: 10.1007/s00371-020-02043-9
  • Journal Name: VISUAL COMPUTER
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Computer & Applied Sciences, INSPEC, zbMATH
  • Page Numbers: pp.685-694
  • Keywords: VGG-16, VGG-19, NASNet, Transfer learning, Damaged building detection
  • Affiliated with Abdullah Gül University: Yes

Abstract

Detecting damaged buildings as quickly as possible after an earthquake is important so that emergency teams can reach these buildings and save many lives. Today, damaged buildings are identified after an earthquake through survivors contacting the authorities or through the use of aircraft such as helicopters. In this study, AI-based systems were tested to detect damaged or destroyed buildings by integrating them into street camera systems after unexpected disasters. For this purpose, we used the VGG-16, VGG-19, and NASNet convolutional neural network models, which are often applied to image recognition problems in the literature, to detect damaged buildings. To implement these models effectively, we first segmented all the images with the K-means clustering algorithm. In the first phase of this study, segmented images labeled "damaged buildings" and "normal" were classified, and VGG-19 was the most successful model, with 90% accuracy on the test set. In the second phase of the study, we created a multiclass classification problem by labeling segmented images as "damaged buildings," "less damaged buildings," and "normal." The same three architectures were used to achieve the most accurate classification results on the test set. VGG-19, VGG-16, and NASNet achieved considerable success on the test set, with about 70%, 67%, and 62% accuracy, respectively.
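The preprocessing step described above, segmenting each image with K-means before feeding it to the CNNs, can be sketched as follows. This is a minimal illustration in plain NumPy, not the authors' implementation: the number of clusters, iteration count, and color space used in the paper are not specified, so the values below (k=3, 10 iterations, RGB pixels) are assumptions.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=10, seed=0):
    """Segment an RGB image by clustering its pixel colours with K-means
    and replacing each pixel with its cluster centroid.

    Returns the quantised image and a (height, width) label map.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    rng = np.random.default_rng(seed)
    # Initialise centroids from k distinct randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each pixel goes to its nearest centroid.
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its members
        # (an emptied cluster keeps its previous centroid).
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    segmented = centroids[labels].reshape(h, w, c)
    return segmented.astype(image.dtype), labels.reshape(h, w)

# Usage on a synthetic two-colour image: the output contains at most k colours.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, 4:] = 200
seg, labels = kmeans_segment(img, k=2)
```

In practice the segmented (colour-quantised) images, rather than the raw frames, would then be passed to the VGG-16, VGG-19, or NASNet classifiers, the idea being that reducing colour detail emphasises the structural outlines relevant to damage assessment.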