A Vision-Transformer-Based Approach to Clutter Removal in GPR: DC-ViT

dc.authorid: KAYACAN, YAVUZ EMRE/0000-0002-8951-1266
dc.contributor.author: Kayacan, Yavuz Emre
dc.contributor.author: Erer, Isin
dc.date.accessioned: 2025-03-09T10:48:50Z
dc.date.available: 2025-03-09T10:48:50Z
dc.date.issued: 2024
dc.department: İstanbul Beykent Üniversitesi
dc.description.abstract: Since clutter encountered in ground-penetrating radar (GPR) systems degrades the performance of target detection algorithms, clutter removal is an active research area in the GPR community. In this letter, instead of the convolutional neural network (CNN) architectures used in recently proposed deep-learning-based clutter removal methods, we introduce the declutter vision transformer (DC-ViT) to remove clutter. The transformer encoders in DC-ViT provide an alternative to CNNs, whose local operations limit their ability to capture long-range dependencies. In addition, replacing the multilayer perceptron (MLP) in the transformer encoder with a convolutional layer improves the capture of local dependencies. While deep features are extracted with blocks of sequentially arranged transformer encoders, losses during information flow are reduced by dense connections between these blocks. The proposed DC-ViT was compared with low-rank and sparse methods such as robust principal component analysis (RPCA) and robust nonnegative matrix factorization (RNMF), and with CNN-based deep networks such as the convolutional autoencoder (CAE) and CR-NET. In comparisons on the hybrid dataset, DC-ViT outperforms its closest competitor by 2.5% in peak signal-to-noise ratio (PSNR). In tests conducted on our experimental GPR data, the proposed model provided an improvement of up to 20% over its closest competitor in terms of signal-to-clutter ratio (SCR).
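The abstract describes the core architectural idea: transformer encoder blocks whose MLP is replaced by a convolutional layer, chained with dense connections so that information is not lost between blocks. A minimal NumPy sketch of that idea is below; the single-head attention, the depthwise token convolution, the summation-based dense fusion, and all sizes are illustrative assumptions, not the authors' exact DC-ViT architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (tokens, dim); single-head scaled dot-product attention
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ v

def conv1d_tokens(x, kernel):
    # depthwise 1-D convolution across the token axis ('same' padding),
    # standing in for the MLP to capture local dependencies
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = sum(k * xp[i + j] for j, k in enumerate(kernel))
    return out

def encoder_block(x, rng):
    # transformer encoder with a conv layer in place of the MLP;
    # random weights, since this only illustrates the data flow
    d = x.shape[-1]
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    x = x + self_attention(x, wq, wk, wv)               # residual attention
    x = x + conv1d_tokens(x, kernel=[0.25, 0.5, 0.25])  # conv instead of MLP
    return x

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))  # 16 patch tokens, 8-dim embeddings
features = [tokens]
for _ in range(3):                     # three densely connected blocks
    # dense connection: each block sees an aggregate of all earlier
    # outputs (summed here for simplicity; the paper's exact fusion
    # is not specified in the abstract)
    features.append(encoder_block(sum(features), rng))
decluttered = features[-1]             # same shape as the input tokens
```

The dense aggregation mirrors the abstract's claim that connections between sequential blocks reduce losses during information flow: later blocks receive the earlier blocks' features directly rather than only through the preceding block.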
dc.description.sponsorship: Scientific and Technological Research Council of Türkiye (TUBITAK)
dc.description.sponsorship: No Statement Available
dc.identifier.doi: 10.1109/LGRS.2024.3385694
dc.identifier.issn: 1545-598X
dc.identifier.issn: 1558-0571
dc.identifier.scopus: 2-s2.0-85190174798
dc.identifier.scopusquality: Q1
dc.identifier.uri: https://doi.org/10.1109/LGRS.2024.3385694
dc.identifier.uri: https://hdl.handle.net/20.500.12662/4671
dc.identifier.volume: 21
dc.identifier.wos: WOS:001208314900004
dc.identifier.wosquality: Q1
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: IEEE-Inst Electrical Electronics Engineers Inc
dc.relation.ispartof: IEEE Geoscience and Remote Sensing Letters
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: KA_WOS_20250310
dc.subject: Clutter removal
dc.subject: deep learning
dc.subject: ground-penetrating radar (GPR)
dc.subject: vision transformers (ViTs)
dc.title: A Vision-Transformer-Based Approach to Clutter Removal in GPR: DC-ViT
dc.type: Article

Files