Comparison of residual and dense neural network approaches for building extraction from high-resolution aerial images

Affiliation: Istanbul Technical University, Faculty of Civil Engineering, Department of Geomatics Engineering, 34469 Istanbul, Turkey

Abstract: Applications such as change detection, disaster management, and urban planning require precise building information; automatic building extraction has therefore become a significant research topic. Improvements in sensor and satellite technologies have made more data available, and with increased computational power, deep learning methods have emerged as successful tools. In this study, U-Net and FPN architectures with four different backbones (ResNet-50, ResNeXt-50, SE-ResNeXt-50, and DenseNet-121), as well as an Attention Residual U-Net approach, were used for building extraction from high-resolution aerial images. Two publicly available datasets, the Inria Aerial Image Labeling Dataset and the Massachusetts Buildings Dataset, were used to train and test the models. On the Inria dataset, the Attention Residual U-Net model achieved the highest F1 score (0.8154), IoU score (0.7102), and test accuracy (94.51%). On the Massachusetts dataset, the FPN DenseNet-121 model achieved the highest F1 score (0.7565) and IoU score (0.6188), while the Attention Residual U-Net model achieved the highest test accuracy (92.43%). The results suggest that FPN with a DenseNet backbone can be a better choice when working with small datasets, whereas the Attention Residual U-Net approach achieves higher success when a sufficiently large dataset is available.

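For reference, the pixel-wise F1 and IoU metrics reported in the abstract can be computed from a predicted binary building mask and its ground truth as in this minimal sketch (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def f1_and_iou(pred, truth):
    """Pixel-wise F1 and IoU for binary building masks.

    pred, truth: boolean NumPy arrays of the same shape,
    True = building pixel. Names are illustrative only.
    """
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 else 1.0
    iou = tp / denom_iou if denom_iou else 1.0
    return f1, iou

# Tiny example: 3 correctly predicted building pixels, 1 false positive.
pred = np.array([True, True, True, True])
truth = np.array([True, True, True, False])
f1, iou = f1_and_iou(pred, truth)  # f1 = 6/7, iou = 3/4
```

Note that IoU is always less than or equal to F1 for the same prediction, which is consistent with the score pairs reported for both datasets.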
Keywords: Aerial images; Attention gates; Residual blocks; Dense connections; Image segmentation; Building extraction