Title: Deep Reinforcement Learning-Based Robot Exploration for Constructing Map of Unknown Environment
Authors: 陳世曄; He, Qi-Fong; Lai, Chin-Feng
Issue Date: 2021
Source: Information Systems Frontiers
Abstract: In traditional environment-exploration algorithms, two problems remain unsolved. First, as exploration time increases, the robot repeatedly revisits areas it has already explored. Second, in trying to map the environment more accurately, the robot causes slight collisions during exploration. To address these two problems, a DQN-based exploration model is proposed that enables the robot to quickly find unexplored areas in an unknown environment, and a DQN-based navigation model is designed to resolve the local-minima problem the robot encounters during exploration. Through a mechanism that switches between the exploration model and the navigation model according to the current exploration situation, the robot can quickly complete the exploration task. In the experimental results, the difference between the proposed unknown-environment exploration method and previous known-environment exploration methods is less than 5% under the same exploration time, and with the proposed method the robot achieves zero collisions and almost zero repeated exploration after being trained for 300,000 rounds. The proposed method is therefore more practical than the previous methods.
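The abstract's switching mechanism between an exploration DQN and a navigation DQN can be illustrated with a minimal sketch. All names, the stand-in Q-table policy, the `stuck_steps` threshold, and the stuck-detection rule below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: two separately trained DQN policies (exploration vs.
# navigation) and a controller that selects which one acts based on the
# current exploration situation. Thresholds and rules are assumptions.

EXPLORE, NAVIGATE = "explore", "navigate"

class DQNPolicy:
    """Stand-in for a trained DQN: maps a state to its greedy action."""
    def __init__(self, q_table):
        self.q_table = q_table  # state -> list of Q-values, one per action

    def act(self, state):
        q_values = self.q_table[state]
        return max(range(len(q_values)), key=lambda a: q_values[a])

def select_mode(frontier_visible, stuck_steps, stuck_limit=5):
    """Assumed switching rule: hand control to the navigation model when no
    unexplored frontier is visible or the robot has made no mapping progress
    for `stuck_limit` consecutive steps (a local minimum); otherwise keep the
    exploration model in control."""
    if not frontier_visible or stuck_steps >= stuck_limit:
        return NAVIGATE
    return EXPLORE

def step(exploration_dqn, navigation_dqn, state, frontier_visible, stuck_steps):
    """One control cycle: pick a mode, then let that mode's DQN choose an action."""
    mode = select_mode(frontier_visible, stuck_steps)
    policy = exploration_dqn if mode == EXPLORE else navigation_dqn
    return mode, policy.act(state)
```

For example, a robot that has seen no new frontier cells for several steps would be handed to the navigation model until it escapes the local minimum, after which `select_mode` returns control to the exploration model.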
URI: http://scholars.ntou.edu.tw/handle/123456789/24414
ISSN: 1387-3326; 1572-9419
DOI: 10.1007/s10796-021-10218-5
Appears in Collections: Department of Computer Science and Engineering (資訊工程學系)