Journal of Shanghai Jiao Tong University ›› 2025, Vol. 59 ›› Issue (1): 70-78. doi: 10.16183/j.cnki.jsjtu.2023.188

• Naval Architecture, Ocean and Civil Engineering •

  • Corresponding author: WEI Handi, assistant researcher; E-mail: weihandi@sjtu.edu.cn.
  • First author: YANG Yinghe (b. 2001), master's student; main research interest: intelligent fluid control.
  • Funding: National Natural Science Foundation of China (42206192, 52031006); Natural Science Foundation of Hainan Province (521QN275); Sanya Yazhou Bay Science and Technology City Research Project (SKJC-2021-01-003)

Optimization Method of Underwater Flapping Foil Propulsion Performance Based on Gaussian Process Regression and Deep Reinforcement Learning

YANG Yinghe1, WEI Handi1,2(), FAN Dixia3, LI Ang3   

  1. State Key Laboratory of Ocean Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
    2. SJTU Yazhou Bay Institute of Deepsea Sci-Tech, Sanya 572024, Hainan, China
    3. School of Engineering, Westlake University, Hangzhou 310024, China
  • Received: 2023-05-11; Revised: 2023-06-14; Accepted: 2023-06-19; Online: 2025-01-28; Published: 2025-02-06


Abstract:

To overcome the complexity and variability of underwater working environments, as well as the difficulty of controlling flapping motion, with its many variables and pronounced nonlinear characteristics, a method is proposed that directly explores the environment and selects the optimal flapping foil propulsion parameters. The Latin hypercube sampling technique is used to collect samples of multi-dimensional flapping parameters in an actual water tank, and a non-parametric Gaussian process regression (GPR) model that generalizes the working environment is built from these samples. Under different propulsion performance requirements, the twin delayed deep deterministic policy gradient (TD3) algorithm from deep reinforcement learning (DRL) is trained to maximize reward, yielding the optimal combination of multiple action parameters over continuous intervals. The experimental results demonstrate that the GPR-TD3 method learns the globally optimal solution for flapping propulsion in the experimental environment, including maximum speed and maximum efficiency, and that the accuracy of this solution can be verified intuitively on a two-dimensional contour plot of the GPR model. Moreover, given 290 sets of real samples, for any specified propulsion speed the agent recommends an action combination with an error of 0.23% to 6.68%, providing a reference for practical applications.
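The surrogate-then-search idea summarized in the abstract can be sketched in a few lines. The sketch below is illustrative only: the RBF kernel, the toy "propulsion speed" landscape over a (frequency, amplitude) pair, the crude Latin-hypercube-style design, and the grid search standing in for the paper's TD3 agent are all assumptions of this example, not the authors' actual experimental setup.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_fit(X, y, noise=1e-4):
    # Solve K alpha = y once via a Cholesky factorization.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, alpha

def gpr_predict(model, Xs):
    # Posterior mean of the GPR surrogate at query points Xs.
    X, alpha = model
    return rbf_kernel(Xs, X) @ alpha

def speed(p):
    # Hypothetical smooth propulsion-speed landscape over
    # (frequency, amplitude) scaled to [0, 1]^2, peaked at (0.6, 0.4).
    return np.exp(-8 * ((p[:, 0] - 0.6) ** 2 + (p[:, 1] - 0.4) ** 2))

rng = np.random.default_rng(0)
# Crude Latin-hypercube-style design: one jittered sample per stratum
# in each dimension, strata paired by independent permutations.
n = 50
X = (np.stack([rng.permutation(n), rng.permutation(n)], axis=1)
     + rng.random((n, 2))) / n
model = gpr_fit(X, speed(X))

# Stand-in for the TD3 search: because the surrogate is cheap to query,
# an exhaustive grid evaluation already recovers the optimum here.
g = np.linspace(0.0, 1.0, 101)
grid = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
best = grid[np.argmax(gpr_predict(model, grid))]
print(best)  # should land near the true optimum (0.6, 0.4)
```

In the paper's setting the landscape is measured in a water tank rather than computed, which is exactly why the GPR surrogate is fit first and the (continuous-action) optimizer then interrogates the model instead of the physical rig.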

Key words: underwater flapping foil, Gaussian process regression (GPR), deep reinforcement learning (DRL), propulsion performance optimization

CLC number: