Computability and Explainability on the Evolution of Intelligent Systems

dc.contributor.authorAltan, Zeynep
dc.date.accessioned2026-01-31T15:08:40Z
dc.date.available2026-01-31T15:08:40Z
dc.date.issued2025
dc.departmentİstanbul Beykent Üniversitesi
dc.description7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications-ICHORA -- MAY 23-24, 2025 -- Ankara, TURKIYE
dc.description.abstractThe aim of this study is to observe the impact of intelligent systems on societal well-being and to uncover the effects of developments in computability on artificial intelligence in order to achieve more understandable and reliable results. The Physical Symbol System hypothesis, one of the significant definitions of the 20th century, is considered in terms of its influence on the development of learning algorithms. The origin of this discourse can be traced back to the definition of Information Theory, and formal logic also emerged from such a system. Once basic mathematical concepts were defined through logic in the form of proof and inference, a new solution method based on syntactic rules came into use. Symbolic methods that enable inductive reasoning and logical learning account for the interpretability of applications; when datasets are small and drawn from real data, such solutions perform well. The influence of these symbolic systems has evolved through sub-symbolic systems in Human-Computer Interaction, transforming into successful applications of Deep Learning. Graph Neural Networks, which belong to the class of sub-symbolic methods, operate on large datasets. On the other hand, Explainability, which constitutes the fundamental functionality of comprehensive philosophical analysis, acts as a complementary element forming the cornerstone of human-centered learning in today's applications. In this study, Explainability is examined from the perspective of Information Theory, and current Artificial Intelligence applications are discussed in the historical context of the philosophy of science. Moreover, Social Psychology, which constitutes a synthesis of general abilities related to human behavior, guides the design of Deep Neural Network algorithms. Regardless of the method used to solve any problem, there will always be some phenomena that cannot be expressed in words. For example, a statement made in stock market prediction will not satisfy many users, as it is reductionist and synthetic. From this, it can be inferred that systems with Emotional Intelligence are the future of AI.
dc.description.sponsorshipInstitute of Electrical and Electronics Engineers Inc,Ted University
dc.identifier.doi10.1109/ICHORA65333.2025.11017150
dc.identifier.isbn9798331510893
dc.identifier.isbn9798331510886
dc.identifier.issn2996-4385
dc.identifier.scopus2-s2.0-105008419427
dc.identifier.scopusqualityN/A
dc.identifier.urihttps://doi.org/10.1109/ICHORA65333.2025.11017150
dc.identifier.urihttps://hdl.handle.net/20.500.12662/10721
dc.identifier.wosWOS:001533792800150
dc.identifier.wosqualityN/A
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherIEEE
dc.relation.ispartof2025 7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, ICHORA
dc.relation.publicationcategoryConference Item - International - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.snmzKA_WoS_20260128
dc.subjectDeep Learning
dc.subjectExplainable Artificial Intelligence
dc.subjectHuman-Computer Interaction
dc.subjectLogic and Reasoning
dc.subjectNeural Networks
dc.subjectSocial Psychology
dc.titleComputability and Explainability on the Evolution of Intelligent Systems
dc.typeConference Object