A Vision-Transformer-Based Approach to Clutter Removal in GPR: DC-ViT

Date

2024

Journal Title

Journal ISSN

Volume Title

Publisher

IEEE-Inst Electrical Electronics Engineers Inc

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Since clutter encountered in ground-penetrating radar (GPR) systems deteriorates the performance of target detection algorithms, clutter removal is an active research area in the GPR community. In this letter, instead of the convolutional neural network (CNN) architectures used in recently proposed deep-learning-based clutter removal methods, we introduce declutter vision transformers (DC-ViTs) to remove the clutter. The transformer encoders in DC-ViT provide an alternative to CNNs, whose local operations limit their ability to capture long-range dependencies. In addition, using a convolutional layer instead of a multilayer perceptron (MLP) in the transformer encoder improves the ability to capture local dependencies. While deep features are extracted with blocks of sequentially arranged transformer encoders, losses during information flow are reduced through dense connections between these blocks. The proposed DC-ViT was compared with low-rank and sparse methods such as robust principal component analysis (RPCA) and robust nonnegative matrix factorization (RNMF), and with CNN-based deep networks such as the convolutional autoencoder (CAE) and CR-NET. In comparisons on the hybrid dataset, DC-ViT outperforms its closest competitor by 2.5% in peak signal-to-noise ratio (PSNR). In tests conducted on our experimental GPR data, the proposed model provides an improvement of up to 20% over its closest competitor in terms of signal-to-clutter ratio (SCR).
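The architecture outlined in the abstract (transformer encoder blocks whose MLP is replaced by a convolutional layer, arranged sequentially with dense connections between blocks) can be illustrated with a minimal PyTorch sketch. All module names, layer sizes, patch handling, and tensor shapes below are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming a patch-tokenized GPR B-scan as input: a transformer
# encoder with a depthwise convolution in place of the MLP, plus dense
# (concatenating) connections between sequential blocks.
import torch
import torch.nn as nn


class ConvTransformerBlock(nn.Module):
    """Transformer encoder block with a convolution instead of the MLP."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Convolution over the token sequence to capture local dependencies.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # long-range dependencies
        h = self.norm2(x).transpose(1, 2)                   # (batch, dim, tokens)
        x = x + self.conv(h).transpose(1, 2)                # local dependencies
        return x


class DenseViTDeclutter(nn.Module):
    """Sequential blocks with dense connections: each block sees all earlier outputs."""

    def __init__(self, dim: int = 64, num_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(ConvTransformerBlock(dim) for _ in range(num_blocks))
        # Fuse the concatenated features back to `dim` channels before each block.
        self.fuse = nn.ModuleList(
            nn.Linear(dim * (i + 1), dim) for i in range(num_blocks)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        feats = [tokens]
        for blk, fuse in zip(self.blocks, self.fuse):
            x = fuse(torch.cat(feats, dim=-1))  # dense connection reduces information loss
            feats.append(blk(x))
        return feats[-1]


if __name__ == "__main__":
    # A B-scan split into 256 patch tokens with embedding size 64 (illustrative shapes).
    bscan_tokens = torch.randn(1, 256, 64)
    decluttered = DenseViTDeclutter()(bscan_tokens)
    print(decluttered.shape)  # torch.Size([1, 256, 64])

The depthwise convolution stands in for the MLP so that each block models both global context (via self-attention) and local texture, while the concatenating connections let later blocks reuse earlier features, as the abstract describes.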

Description

Keywords

Clutter removal, deep learning, ground-penetrating radar (GPR), vision transformers (ViTs)

Source

IEEE Geoscience and Remote Sensing Letters

WoS Q Value

Q1

Scopus Q Value

Q1

Volume

21

Issue

Citation