Improved Strategy for Response Delay of Pure Pursuit Controller Based on Deep Reinforcement Learning
Online published: 2025-05-28
陈诗霖 (CHEN Shilin), 黄宏成 (HUANG Hongcheng). Improved strategy for response delay of Pure Pursuit controller based on deep reinforcement learning [J]. Journal of Shanghai Jiao Tong University (上海交通大学学报), advance online publication. DOI: 10.16183/j.cnki.jsjtu.2025.001
To mitigate the impact of response delay on the Pure Pursuit controller and improve its accuracy in guiding autonomous vehicles along planned trajectories, this study proposes a deep reinforcement learning (DRL) based optimization method for the Pure Pursuit controller. Specifically, a Deep Deterministic Policy Gradient (DDPG) model predicts the vehicle's position error in real time and dynamically adjusts the fusion ratio between the steering command produced by the Pure Pursuit controller and the heading angle of the planned trajectory, yielding an optimized steering angle command. Simulation experiments conducted in MATLAB under random path conditions show that the DDPG-based adaptive fusion mechanism significantly improves the control performance of the Pure Pursuit controller. For a vehicle traveling along the planned trajectory at speeds from 1 m/s to 5 m/s, the optimized controller keeps the maximum position error within 0.2 m and the maximum heading angle error within 0.1 rad. Compared with the traditional Pure Pursuit controller, the proposed method reduces the lateral error by 80% and the heading angle error by 90%, validating its effectiveness in improving trajectory-tracking precision.
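The adaptive fusion described in the abstract can be sketched in a few lines: a Pure Pursuit steering command and a heading-tracking command are blended with a weight produced by the DDPG actor from the current tracking errors. The Python sketch below is illustrative only; the function names, the assumed state layout (lateral error, heading error, speed), and the linear blending form are assumptions rather than the paper's exact formulation, and the DDPG actor/critic networks and their training loop are omitted.

```python
import numpy as np

def pure_pursuit_steer(pose, target, wheelbase, lookahead):
    """Classic Pure Pursuit steering law for a bicycle model."""
    x, y, yaw = pose
    # Angle from the vehicle heading to the look-ahead point on the path.
    alpha = np.arctan2(target[1] - y, target[0] - x) - yaw
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)

def fused_steer(delta_pp, path_heading, yaw, weight):
    """Blend the Pure Pursuit command with a heading-tracking command.

    `weight` in [0, 1] is assumed to come from the DDPG actor, which maps the
    current tracking errors to a fusion ratio: weight = 1 recovers plain
    Pure Pursuit, weight = 0 steers purely toward the planned heading.
    """
    delta_heading = path_heading - yaw                 # heading-error command
    delta_heading = np.arctan2(np.sin(delta_heading),  # wrap to [-pi, pi]
                               np.cos(delta_heading))
    return weight * delta_pp + (1.0 - weight) * delta_heading

# Illustrative control step (actor() stands in for the trained DDPG policy):
#   state  = np.array([lateral_error, heading_error, speed])
#   weight = float(np.clip(actor(state), 0.0, 1.0))
#   delta  = fused_steer(pure_pursuit_steer(pose, target, L, l_d),
#                        path_heading, pose[2], weight)
```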