
Exploring DQN-Based Reinforcement Learning in Autonomous Highway Navigation Performance Under High-Traffic Conditions

Nugroho, Sandy and Setiadi, De Rosal Ignatius Moses and Islam, Hussain Md Mehedul (2024) Exploring DQN-Based Reinforcement Learning in Autonomous Highway Navigation Performance Under High-Traffic Conditions. Journal of Computing Theories and Applications, 1 (3). pp. 274-286. ISSN 3024-9104

Text: 9929-Article Text-31994-4-10-20240615.pdf - Published Version. Download (338kB)

Abstract

Driving in a straight line is one of the fundamental tasks for autonomous vehicles, yet it becomes complex and challenging on high-speed highways in dense traffic. This research explores the Deep Q-Network (DQN) model, a reinforcement learning (RL) method, in a highway environment. DQN was chosen for its ability to handle complex observations through neural network function approximation, making it well suited to high-complexity environments. DQN simulations were conducted across four scenarios in which the agent operated at speeds ranging from 60 to nearly 100 km/h. The simulations featured a varying number of vehicles/obstacles, from 20 to 80, and each simulation lasted 40 seconds in the Highway-Env simulator. Based on the test results, the DQN method performed strongly, achieving its highest reward in the first scenario (35.6117 out of a maximum of 40) and a success rate of 90.075%.
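
For readers who want to reproduce a comparable setup, the sketch below trains a DQN agent on the Highway-Env simulator described in the abstract. It is a minimal illustration under stated assumptions, not the authors' code: the library calls follow highway-env's and Stable-Baselines3's public APIs, and the configuration values ("vehicles_count", "duration", "reward_speed_range") and hyperparameters are assumptions chosen to mirror the scenario described above rather than values reported in the paper.

    import gymnasium as gym
    import highway_env  # registers the "highway-v0" environment
    from stable_baselines3 import DQN

    # Configure a highway scenario roughly matching the abstract:
    # 20 surrounding vehicles (densest scenario uses up to 80),
    # 40-second episodes, target speed band of ~60-100 km/h (in m/s).
    # All values are illustrative assumptions, not the paper's settings.
    env = gym.make("highway-v0")
    env.unwrapped.configure({
        "vehicles_count": 20,
        "duration": 40,
        "reward_speed_range": [17, 28],
    })
    env.reset()

    # Train a DQN agent with a simple MLP policy (hyperparameters assumed).
    model = DQN("MlpPolicy", env, learning_rate=5e-4, buffer_size=15_000, verbose=1)
    model.learn(total_timesteps=20_000)

    # Roll out one evaluation episode and accumulate the reward.
    obs, info = env.reset()
    done, truncated, total_reward = False, False, 0.0
    while not (done or truncated):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, truncated, info = env.step(action)
        total_reward += float(reward)
    print(f"Episode reward: {total_reward:.4f}")

A scenario of a different density can be approximated by changing "vehicles_count"; the episode reward accumulated here is the quantity the abstract's per-scenario reward values (e.g., 35.6117 out of 40) refer to.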

Item Type: Article
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Depositing User: dl fts
Date Deposited: 29 Nov 2024 00:57
Last Modified: 29 Nov 2024 01:26
URI: https://dl.futuretechsci.org/id/eprint/45
