Yazar "Aliyev, Royal" seçeneğine göre listele
Now showing 1 - 3 of 3
Item: 3D Path Planning Method for Multi-UAVs Inspired by Grey Wolf Algorithms (Library & Information Center, Nat Dong Hwa Univ, 2021). Kiani, Farzad; Seyyedabbasi, Amir; Aliyev, Royal; Shah, Mohammed Ahmed; Gulle, Murat Ugur.
Efficient and collision-free pathfinding between source and destination locations for multiple Unmanned Aerial Vehicles (UAVs) in a predefined environment is an important topic in 3D path planning. Since path planning is an NP-hard problem, metaheuristic approaches can be applied to find a suitable solution. In this study, two efficient 3D path planning methods, inspired by Incremental Grey Wolf Optimization (I-GWO) and Expanded Grey Wolf Optimization (Ex-GWO), are proposed to solve the problem of determining the optimal path for UAVs with minimum cost and low execution time. The proposed methods have been simulated on two different maps with three UAVs and diverse sets of starting and ending points. They have been analyzed in terms of three parameters (optimal path cost, time and complexity, and convergence curve) under varying population sizes and iteration numbers, and compared with well-known variations of grey wolf algorithms (GWO, mGWO, EGWO, and RWGWO). According to the path cost results of the defined case studies, the proposed I-GWO-based path planning method (PPI-GWO) performed best, with 36.11%. This method also achieved the highest success in the other analysis parameters compared to the other five methods.
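The abstract above describes grey-wolf-style position updates applied to selecting 3D waypoints. As a minimal sketch of that general idea (not the authors' PPI-GWO implementation), the Python snippet below runs a standard GWO update over candidate intermediate waypoints, scored by a hypothetical path-length cost; the function names, bounds, and the absence of any collision model are all assumptions for illustration only.

import numpy as np

def path_cost(waypoints):
    """Hypothetical fitness: total Euclidean length of a 3D waypoint sequence."""
    return float(np.sum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1)))

def gwo_path_planning(start, goal, n_wolves=20, n_waypoints=5, iters=100, bounds=(0.0, 100.0)):
    """Minimal grey-wolf-style search over intermediate 3D waypoints (illustrative only)."""
    lo, hi = bounds
    # Each wolf encodes n_waypoints intermediate 3D points between start and goal.
    wolves = np.random.uniform(lo, hi, size=(n_wolves, n_waypoints, 3))

    def fitness(w):
        return path_cost(np.vstack([start, w, goal]))

    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]  # three best wolves lead the pack
        a = 2.0 - 2.0 * t / iters                            # coefficient decreases linearly to 0
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1 = np.random.rand(*wolves[i].shape)
                r2 = np.random.rand(*wolves[i].shape)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0            # average of the three guided moves
            wolves[i] = np.clip(new_pos, lo, hi)

    best = min(wolves, key=fitness)
    return np.vstack([start, best, goal]), fitness(best)

# Example with arbitrary start and goal coordinates (no obstacle checking in this sketch).
path, cost = gwo_path_planning(np.array([0.0, 0.0, 0.0]), np.array([90.0, 80.0, 50.0]))

A real 3D planner of the kind described above would additionally penalize waypoint segments that intersect obstacles; here the cost is path length alone to keep the sketch short.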
Item: Adapted-RRT: novel hybrid method to solve three-dimensional path planning problem using sampling and metaheuristic-based algorithms (Springer London Ltd, 2021). Kiani, Farzad; Seyyedabbasi, Amir; Aliyev, Royal; Gulle, Murat Ugur; Basyildiz, Hasan; Shah, M. Ahmed.
Three-dimensional path planning for autonomous robots is a prevalent problem in mobile robotics. This paper presents three novel versions of a hybrid method designed to assist in planning such paths for these robots. An improvement on the Rapidly-exploring Random Tree (RRT) algorithm, namely Adapted-RRT, is presented that uses three well-known metaheuristic algorithms: Grey Wolf Optimization (GWO), Incremental Grey Wolf Optimization (I-GWO), and Expanded Grey Wolf Optimization (Ex-GWO). The RRT variants using these algorithms are named Adapted-RRTGWO, Adapted-RRTI-GWO, and Adapted-RRTEx-GWO. The most significant shortcoming of methods based on the original sampling-based algorithm is their inability to find optimal paths, while metaheuristic-based algorithms are disadvantaged in that they demand predetermined knowledge of intermediate stations. This study is novel in that it exploits the advantages of sampling and metaheuristic methods while eliminating their shortcomings. In these methods, two important operations (the length and direction of each movement) are defined that play an important role in selecting the next stations and generating an optimal path. The methods seek collision-free solutions close to the optimum while providing comparatively efficient execution time and space complexity. The proposed methods have been simulated on four different maps for three unmanned aerial vehicles, with diverse sets of starting and ending points, and the results have been compared among a total of 11 algorithms. The comparison shows that the proposed path planning methods generally outperform BPIB-RRT*, tGSRT, GWO, I-GWO, Ex-GWO, PSO, Improved BA, and WOA. The simulation results are analyzed in terms of optimal path cost, execution time, and convergence rate.

Item: Hybrid algorithms based on combining reinforcement learning and metaheuristic methods to solve global optimization problems (Elsevier, 2021). Seyyedabbasi, Amir; Aliyev, Royal; Kiani, Farzad; Gulle, Murat Ugur; Basyildiz, Hasan; Shah, Mohammed Ahmed.
This paper introduces three hybrid algorithms that solve global optimization problems using reinforcement learning combined with metaheuristic methods. Using the presented algorithms, the search agents try to find a global optimum while avoiding the local optima trap. Compared to classical metaheuristic approaches, the proposed algorithms show higher success in finding new areas as well as a more balanced performance between the exploration and exploitation phases. The algorithms employ reinforcement agents to select an environment based on predefined actions and tasks, and the agents use a reward and penalty system to discover the environment dynamically, without following a predetermined model or method. The study employs the Q-Learning method in all three metaheuristic algorithms, named RLI-GWO, RLEx-GWO, and RLWOA, to check and control exploration and exploitation with a Q-Table. The Q-Table values guide the search agents of the metaheuristic algorithms in selecting between the exploration and exploitation phases, and a control mechanism is used to obtain the reward and penalty values for each action. The presented algorithms are simulated on 30 benchmark functions from CEC 2014 and 2015, and the results are compared with well-known metaheuristic and hybrid algorithms (GWO, RLGWO, I-GWO, Ex-GWO, and WOA). The proposed methods have also been applied to the inverse kinematics problem of robot arms. The results demonstrate that RLWOA provides better solutions for the relevant problems. (C) 2021 Elsevier B.V. All rights reserved.
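The third abstract describes steering each search agent between exploration and exploitation with a Q-Table and a reward/penalty scheme. The Python sketch below is a simplified illustration of that pattern, not the RLI-GWO/RLEx-GWO/RLWOA implementations: it keeps a single-state, two-action Q-table and rewards an action when it improved the agent's fitness. The class name, the epsilon-greedy choice, and the +1/-1 reward values are assumptions.

import random

ACTIONS = ("explore", "exploit")

class PhaseSelector:
    """Minimal Q-learning controller choosing between exploration and exploitation."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}   # single-state Q-table (simplification)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        if random.random() < self.epsilon:    # occasional random action to keep learning
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)    # otherwise greedy with respect to Q-values

    def update(self, action, improved):
        reward = 1.0 if improved else -1.0    # assumed reward/penalty values
        best_next = max(self.q.values())
        # Standard Q-learning update for a single-state problem.
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])

# Sketch of usage inside a metaheuristic loop:
# selector = PhaseSelector()
# for each iteration, for each search agent:
#     action = selector.choose()
#     apply an explore- or exploit-style position update depending on `action`
#     selector.update(action, improved=new_fitness < old_fitness)

In the papers' setting the Q-Table is consulted per agent and per iteration; the single shared table above is only meant to show how the reward/penalty feedback would bias future phase choices.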