Reinforcement learning based local path planning for mobile robot



Gök M., Tekerek M., Aydemir H.

Interdisciplinary Conference on Mechanics, Computers and Electrics, Ankara, Türkiye, 27 - 28 November 2021, vol.2021, pp.197-201

  • Publication Type: Conference Paper / Full Text
  • Volume: 2021
  • DOI: 10.48550/arxiv.2403.12463
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Pages: pp.197-201
  • Ankara University Affiliated: No

Abstract

Different methods are used to drive a mobile robot to a specific target location, and they operate differently in online and offline scenarios. In the offline scenario, an environment map is created once, and a fixed path to the target is planned on this map. Path planning algorithms such as A* and RRT (Rapidly-Exploring Random Tree) are examples of offline methods. The most obvious limitation here is the need to re-plan the path whenever the conditions captured in the loaded map change. In the online scenario, on the other hand, the robot moves dynamically toward a given target without a map, using perceived data coming from its sensors. Approaches such as the SFM (Social Force Model) are used in online systems; however, these methods suffer from the requirement for large amounts of dynamic sensing data. Thus, the need for re-planning and mapping in offline systems, and the various system design requirements of online systems, are central topics of autonomous mobile robot research. Recently, deep-neural-network-powered Q-Learning methods have emerged as a solution to the aforementioned problems in mobile robot navigation. In this study, machine learning algorithms with Deep Q-Learning (DQN) and Deep DQN architectures are evaluated as a solution to the problems presented above, in order to realize path planning for an autonomous mobile robot that avoids obstacles.
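To make the Q-Learning idea behind the abstract concrete, the sketch below trains a tabular Q-Learning agent (the precursor of the DQN approach evaluated in the paper, which replaces the table with a neural network) to reach a goal on a small grid while avoiding obstacles. The grid layout, reward values, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical 5x5 grid world: 0 = free cell, 1 = obstacle (assumed, not from the paper).
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
]
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; moving into a wall or obstacle keeps the robot in place with a penalty."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < 5 and 0 <= c < 5) or GRID[r][c] == 1:
        return state, -5.0, False            # blocked: obstacle-avoidance penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True            # goal reached
    return (r, c), -0.1, False               # small step cost encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Standard epsilon-greedy Q-Learning update over (state, action) pairs."""
    q = {}
    random.seed(0)
    for _ in range(episodes):
        s = START
        for _ in range(100):                 # cap episode length
            if random.random() < eps:
                a = random.randrange(4)      # explore
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return q

def greedy_path(q):
    """Follow the learned greedy policy from START."""
    s, path = START, [START]
    for _ in range(50):
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

In the DQN setting studied in the paper, the dictionary `q` is replaced by a network that maps sensor observations to action values, which removes the need to enumerate states and lets the same update rule scale to continuous robot environments.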