An Efficient Approach for Obstacle Avoidance and Navigation in Robots
- Title
- An Efficient Approach for Obstacle Avoidance and Navigation in Robots
- Creator
- Phani Shanmukh M.; Natarajan B.; Kannan C.; Tamilselvi M.; Vigneshwaran T.; Husain S.S.
- Description
- Reinforcement learning has emerged as a prominent technique for enhancing robot obstacle avoidance capabilities in recent years. This research provides a comprehensive overview of reinforcement learning methods, focusing on Bayesian, static, dynamic policy, Deep Q-Learning (DQN) and extended dynamic policy algorithms. In the context of robot obstacle avoidance, these algorithms enable an agent to interact with its physical environment, learn effective operating strategies, and optimize its actions to maximize a reward signal. The environment typically consists of a physical space that the robot must navigate without encountering obstacles. The reward signal serves as an objective measure of the robot's progress towards accomplishing specific goals, such as reaching designated positions or completing tasks. Furthermore, successful obstacle avoidance strategies acquired in simulation environments can be seamlessly transferred to real-world scenarios. The promising results achieved thus far indicate the potential of reinforcement learning as a powerful tool for enhancing robot obstacle avoidance. This research concludes with insights into the future prospects of reward learning, highlighting its ongoing importance in the development of intelligent robotic systems. The proposed DQN algorithm outperforms all the other algorithms, achieving an accuracy of 81%. Through this research, we aim to provide valuable insights and directions for further advancements in the field of robot obstacle avoidance using reinforcement learning techniques. © 2023 IEEE.
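The description above outlines the standard reinforcement-learning loop (an agent acting in an environment and optimizing its actions to maximize a reward signal) and names DQN as the best-performing method. The sketch below is a minimal, illustrative DQN training loop on a toy grid world with obstacles; the `GridEnv` class, network sizes, reward values, and hyperparameters are assumptions made for illustration and are not taken from the paper.

```python
# Minimal DQN sketch for grid-world obstacle avoidance (illustrative only; the
# environment, architecture, and hyperparameters are assumptions, not the paper's setup).
import random
from collections import deque

import torch
import torch.nn as nn

class GridEnv:
    """Toy 5x5 grid: the agent starts at (0, 0), the goal is (4, 4),
    and stepping onto an obstacle ends the episode with a penalty."""
    OBSTACLES = {(1, 1), (2, 3), (3, 1)}
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def reset(self):
        self.pos = (0, 0)
        return self._state()

    def _state(self):
        return torch.tensor(self.pos, dtype=torch.float32) / 4.0

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), 4)
        c = min(max(self.pos[1] + dc, 0), 4)
        self.pos = (r, c)
        if self.pos in self.OBSTACLES:
            return self._state(), -1.0, True   # collision: penalty, episode ends
        if self.pos == (4, 4):
            return self._state(), 1.0, True    # goal reached: reward, episode ends
        return self._state(), -0.01, False     # small step cost encourages short paths

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 4))
target_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 4))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)                  # experience replay buffer
gamma, epsilon = 0.99, 0.2

env = GridEnv()
for episode in range(500):
    state = env.reset()
    for _ in range(100):                       # cap episode length during early exploration
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(4)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())
        next_state, reward, done = env.step(action)
        buffer.append((state, action, reward, next_state, done))
        state = next_state

        if len(buffer) >= 64:                  # train on a random minibatch once the buffer is warm
            batch = random.sample(buffer, 64)
            s, a, r, s2, d = zip(*batch)
            s, s2 = torch.stack(s), torch.stack(s2)
            a, r = torch.tensor(a), torch.tensor(r)
            d = torch.tensor(d, dtype=torch.float32)
            q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():              # bootstrapped target from the frozen target network
                target = r + gamma * (1.0 - d) * target_net(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if done:
            break
    if episode % 20 == 0:                      # periodically sync the target network
        target_net.load_state_dict(q_net.state_dict())
```

In this kind of setup, the learned policy can be trained entirely in the simulated environment and the resulting network weights reused on a physical robot, which is the simulation-to-real transfer the description refers to.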
- Source
- International Conference on Integrated Intelligence and Communication Systems, ICIICS 2023
- Date
- 2023-01-01
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Subject
- Bayesian algorithms; DQN algorithm; dynamic policy; extended dynamic policy; physical environment; real-world transfer; reinforcement learning; reward signal; robot obstacle avoidance; simulation; static algorithms
- Coverage
- Phani Shanmukh M., Amrita Vishwa Vidyapeetham, Amrita School of Computing, Department of Computer Science and Engineering (AIE), Chennai, India; Natarajan B., VIT University, School of Computer Science and Engineering, Chennai, India; Kannan C., CMR Institute of Technology, Department of Computer Science and Engineering, Hyderabad, India; Tamilselvi M., Roever Engineering, Department of Computer Science and Engineering, Perambalur, India; Vigneshwaran T., CHRIST (Deemed to be University), School of Engineering and Technology, Department of CSE, Bangalore, India; Husain S.S., K Ramakrishnan College of Engineering, Department of Electronics and Communication Engineering, Tiruchirappalli, Tamil Nadu, India
- Rights
- Restricted Access
- Relation
- ISBN: 979-835031545-5
- Format
- Online
- Language
- English
- Type
- Conference paper
- Collection
- Citation
- Phani Shanmukh M.; Natarajan B.; Kannan C.; Tamilselvi M.; Vigneshwaran T.; Husain S.S., “An Efficient Approach for Obstacle Avoidance and Navigation in Robots,” CHRIST (Deemed To Be University) Institutional Repository, accessed February 24, 2025, https://archives.christuniversity.in/items/show/19693.