BND*-DDQN: Learn to Steer Autonomously through Deep Reinforcement Learning

EasyChair Preprint 4941, 14 pages. Date: January 30, 2021

Abstract

It is vital for mobile robots to achieve safe autonomous steering in various changing environments. In this paper, a novel end-to-end network architecture is proposed for mobile robots to learn steering autonomously through deep reinforcement learning. Specifically, two sets of feature representations are first extracted from the depth inputs through two different input streams. The acquired features are then merged to derive both linear and angular actions simultaneously. Moreover, a new action selection strategy is introduced to achieve motion filtering by taking the consistency of the angular velocity into account. In addition to the extrinsic rewards, intrinsic bonuses are also adopted during training to improve the exploration capability. Furthermore, it is worth noting that the proposed model transfers readily from the simple virtual training environment to much more complicated real-world scenarios, so no further fine-tuning is required for real deployment. Compared to existing methods, the proposed method demonstrates significant superiority in terms of average reward, convergence speed, success rate, and generalization capability. It also exhibits outstanding performance in various cluttered real-world environments containing both static and dynamic obstacles. A video of our experiments can be found at https://youtu.be/19jrQGG1oCU.

Keyphrases: autonomous steering, deep reinforcement learning, depth image, difference image, intrinsic reward
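
Since the abstract only outlines the architecture at a high level, the following is a minimal, illustrative PyTorch-style sketch of the two ideas it mentions: a two-stream network that encodes a depth image and a depth-difference image, merges the features, and outputs separate Q-values for discretized linear and angular velocities, plus a simple angular-consistency filter at action-selection time. The class name, layer sizes, action discretizations, and the select_action helper are assumptions for illustration only, not the authors' BND*-DDQN implementation; consult the paper for the actual model.

    # Illustrative sketch only; all names and sizes below are assumed, not from the paper.
    import torch
    import torch.nn as nn

    class TwoStreamSteeringQNet(nn.Module):
        def __init__(self, n_linear_actions=3, n_angular_actions=5):
            super().__init__()
            def stream():
                # Small convolutional encoder for a single-channel depth-like input.
                return nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Flatten(),
                )
            self.depth_stream = stream()   # features from the raw depth image
            self.diff_stream = stream()    # features from the depth-difference image
            self.fuse = nn.LazyLinear(512) # merge the two feature sets
            self.q_linear = nn.Linear(512, n_linear_actions)    # Q-values over linear velocities
            self.q_angular = nn.Linear(512, n_angular_actions)  # Q-values over angular velocities

        def forward(self, depth, depth_diff):
            f = torch.cat([self.depth_stream(depth), self.diff_stream(depth_diff)], dim=1)
            h = torch.relu(self.fuse(f))
            return self.q_linear(h), self.q_angular(h)

    # Greedy action selection with a simple consistency constraint on the angular
    # command, loosely inspired by the motion-filtering idea in the abstract
    # (the exact filtering rule used in the paper may differ).
    def select_action(net, depth, depth_diff, prev_angular_idx, max_jump=1):
        q_lin, q_ang = net(depth, depth_diff)
        lin_idx = int(q_lin.argmax(dim=1))
        # Only consider angular bins close to the previous choice (assumed rule).
        lo = max(0, prev_angular_idx - max_jump)
        hi = min(q_ang.shape[1], prev_angular_idx + max_jump + 1)
        ang_idx = lo + int(q_ang[:, lo:hi].argmax(dim=1))
        return lin_idx, ang_idx

In such a setup, the two heads would typically be trained with a double DQN target on each action dimension, and the extrinsic reward could be augmented with an intrinsic exploration bonus; those details are not shown here.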