Adaptive Stochastic Gradient Descent (SGD) for erratic datasets

dc.authoridDAGAL, IDRISS/0000-0002-2073-8956
dc.authoridAKIN, BURAK/0000-0002-8647-1297
dc.contributor.authorDagal, Idriss
dc.contributor.authorTanrioven, Kursat
dc.contributor.authorNayir, Ahmet
dc.contributor.authorAkin, Burak
dc.date.accessioned2025-03-09T10:49:03Z
dc.date.available2025-03-09T10:49:03Z
dc.date.issued2025
dc.departmentİstanbul Beykent Üniversitesi
dc.description.abstractStochastic Gradient Descent (SGD) is a highly efficient optimization algorithm, particularly well suited to large datasets because it updates parameters incrementally. In this study, we apply SGD to a simple linear classifier based on logistic regression, a widely used method for binary classification. Unlike traditional batch Gradient Descent (GD), which processes the entire dataset at once, SGD offers better scalability and performance on streaming and large-scale data. Our experiments show that SGD outperforms GD across multiple performance metrics, achieving 45.83% accuracy versus GD's 41.67%, and higher precision (60% vs. 45.45%), recall (100% vs. 60%), and F1-score (100% vs. 62%). SGD also reaches 99.99% accuracy with Principal Component Analysis (PCA), slightly surpassing GD's 99.92%. These results highlight SGD's efficiency and flexibility in large-scale data environments, driven by its ability to balance precision and recall. To further improve SGD's robustness, the proposed method incorporates adaptive learning rates, momentum, and logistic regression, addressing the drawbacks of traditional GD. These modifications improve the algorithm's stability, convergence behavior, and applicability to complex, large-scale optimization tasks where standard GD often struggles, making SGD a highly effective solution for challenging data-driven scenarios.
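
The abstract describes the proposed method as SGD augmented with an adaptive (decaying) learning rate and momentum, applied to a logistic-regression classifier. The sketch below illustrates that combination in plain NumPy; the 1/(1 + decay·t) schedule, the momentum coefficient of 0.9, and the helper name adaptive_sgd_logistic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    # Logistic function; clipping avoids overflow in exp for extreme logits.
    z = np.clip(z, -35.0, 35.0)
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_sgd_logistic(X, y, epochs=50, lr0=0.1, decay=0.01,
                          momentum=0.9, seed=0):
    # Logistic regression trained with per-sample SGD, a decaying
    # learning rate, and classical momentum. The schedule and the
    # momentum value are illustrative choices, not from the paper.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    vw, vb = np.zeros(d), 0.0   # momentum buffers
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):        # one sample per update
            lr = lr0 / (1.0 + decay * t)    # adaptive (decaying) step size
            t += 1
            p = sigmoid(X[i] @ w + b)       # predicted probability
            g = p - y[i]                    # gradient of log-loss wrt the logit
            vw = momentum * vw - lr * g * X[i]
            vb = momentum * vb - lr * g
            w += vw
            b += vb
    return w, b

# Toy usage on a synthetic, linearly separable binary problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b = adaptive_sgd_logistic(X, y)
acc = np.mean((sigmoid(X @ w + b) >= 0.5) == y)
print(f"training accuracy: {acc:.3f}")
```

On this synthetic data the toy run reaches a training accuracy near 1.0; the general idea is that the decaying step size damps the variance of per-sample updates on erratic data, while momentum smooths the descent direction.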
dc.identifier.doi10.1016/j.future.2024.107682
dc.identifier.issn0167-739X
dc.identifier.issn1872-7115
dc.identifier.scopus2-s2.0-85215429461
dc.identifier.scopusqualityQ1
dc.identifier.urihttps://doi.org/10.1016/j.future.2024.107682
dc.identifier.urihttps://hdl.handle.net/20.500.12662/4709
dc.identifier.volume166
dc.identifier.wosWOS:001404872000001
dc.identifier.wosqualityQ1
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherElsevier
dc.relation.ispartofFuture Generation Computer Systems-The International Journal of eScience
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.snmzKA_WOS_20250310
dc.subjectGradient descent
dc.subjectStochastic Gradient Descent
dc.subjectAccuracy
dc.subjectPrincipal Component Analysis
dc.titleAdaptive Stochastic Gradient Descent (SGD) for erratic datasets
dc.typeArticle

Files