Author: Altan, Zeynep
Date accessioned: 2026-01-31
Date available: 2026-01-31
Date issued: 2025
ISBN: 9798331510893; 9798331510886
ISSN: 2996-4385
DOI: https://doi.org/10.1109/ICHORA65333.2025.11017150
Handle: https://hdl.handle.net/20.500.12662/10721
Conference: 7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (ICHORA), May 23-24, 2025, Ankara, Türkiye

Abstract: The aim of this study is to observe the impact of intelligent systems on societal well-being and to uncover the effects of developments in computability on artificial intelligence, in order to achieve more understandable and reliable results. The Physical Symbol System hypothesis, one of the significant definitions of the 20th century, is considered in terms of its influence on the development of learning algorithms. The origin of this discourse can be traced back to the definition of Information Theory, and formal logic also emerged from such a system. Once basic mathematical concepts were defined through logic in the form of proof and inference, a new solution method based on syntactic rules came into use. Symbolic methods that enable inductive reasoning and logical learning account for the interpretability of applications; when the amount of data is small and real datasets are used, such solutions perform well. The influence of these symbolic systems has evolved, through sub-symbolic systems in Human-Computer Interaction, into successful applications of Deep Learning. Graph Neural Networks, which belong to the class of sub-symbolic methods, rely on large datasets. On the other hand, Explainability, which constitutes the fundamental functionality of comprehensive philosophical analysis, acts as a complementary element in forming the cornerstone of human-centered learning in today's applications. In this study, Explainability is examined from the perspective of Information Theory, and current Artificial Intelligence applications are discussed within the historical development of the philosophy of science.
Moreover, Social Psychology, which constitutes a synthesis of general abilities related to human behavior, guides the design of Deep Neural Network algorithms. Regardless of the method used to solve a problem, there will always be some phenomena that cannot be expressed in words. For example, a statement made in stock market prediction will not satisfy many users, as it is reductionist and synthetic. From this, it can be inferred that systems with Emotional Intelligence are the future of AI.

Language: en
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Deep Learning; Explainable Artificial Intelligence; Human-Computer Interaction; Logic and Reasoning; Neural Networks; Social Psychology
Title: Computability and Explainability on the Evolution of Intelligent Systems
Type: Conference Object
DOI: 10.1109/ICHORA65333.2025.11017150
Scopus ID: 2-s2.0-105008419427
WoS ID: WOS:001533792800150