Electronic Information and Electrical Engineering

WANG Ke (b. 1985), Ph.D., lecturer; research interests include machine learning and the theory and applications of neural computation.


Funding

National Natural Science Foundation of China (62036010); Innovation Fund of the Marine Defense Technology Innovation Center for the National Defense Science and Technology Industry (JJ-2022-709-01); China Postdoctoral Science Foundation (2020M682348); Natural Science Foundation of Henan Province (232300421235)

Landing State Recognition of Carrier-Based Aircraft Based on Adaptive Feature Enhancement and Fusion

  • WANG Ke ,
  • LIU Yiyang ,
  • YANG Jie ,
  • LU Aiguo ,
  • LI Zhe ,
  • XU Mingliang
  • 1. School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
    2. National Supercomputing Center in Zhengzhou, Zhengzhou 450001, China
    3. Intelligent Swarm System Engineering Research Center of the Ministry of Education, Zhengzhou 450001, China
    4. Wuhan Digital Engineering Institute, Wuhan 430074, China

Received date: 2023-06-25

  Revised date: 2023-06-28

  Accepted date: 2023-07-11

  Online published: 2025-03-11


Cite this article

WANG Ke, LIU Yiyang, YANG Jie, LU Aiguo, LI Zhe, XU Mingliang. Landing state recognition of carrier-based aircraft based on adaptive feature enhancement and fusion[J]. Journal of Shanghai Jiao Tong University, 2025, 59(2): 274-282. DOI: 10.16183/j.cnki.jsjtu.2023.263

Abstract

Recognition of the engagement state helps landing signal officers formulate subsequent command decisions promptly and precisely, and is a crucial step in guiding carrier-based aircraft landings. A recognition method based on adaptive feature enhancement and fusion is proposed, comprising an attention-based feature enhancement module and a multi-scale feature fusion module. The former strengthens visual representation by splitting the feature map and applying spatial-domain and channel-domain attention in series; the latter fuses high-resolution shallow features with semantically rich deep features to fully exploit contextual information. Building on the proposed method, a prototype system for recognizing landing engagement states is developed on wearable augmented reality devices, and virtual-real hybrid datasets of landing operations are constructed to evaluate its performance. The results show that the proposed algorithm outperforms the baseline algorithms overall and meets the application requirements of engagement state recognition.
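The two modules the abstract describes can be caricatured in a few lines of NumPy. This is a minimal illustrative sketch under assumed design choices (sigmoid gating, global-average pooling, nearest-neighbour upsampling, and concatenation as the fusion operator); the function names and tensor shapes are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def channel_attention(x):
    """Channel-domain attention (sketch): pool each channel to one
    descriptor, then rescale channels with a sigmoid gate."""
    w = x.mean(axis=(1, 2))                 # (C,) global average pool
    w = 1.0 / (1.0 + np.exp(-w))            # sigmoid gate per channel
    return x * w[:, None, None]

def spatial_attention(x):
    """Spatial-domain attention (sketch): pool across channels, then
    rescale each spatial location with a sigmoid gate."""
    w = x.mean(axis=0)                      # (H, W) channel-wise pool
    w = 1.0 / (1.0 + np.exp(-w))
    return x * w[None, :, :]

def enhance(x, groups=2):
    """Feature enhancement: split the feature map into channel groups,
    apply channel- then spatial-domain attention in series to each
    group, and re-concatenate (shape-preserving)."""
    parts = np.split(x, groups, axis=0)
    out = [spatial_attention(channel_attention(p)) for p in parts]
    return np.concatenate(out, axis=0)

def fuse(shallow, deep):
    """Multi-scale fusion: upsample the low-resolution deep feature to
    the shallow feature's spatial size, then concatenate along the
    channel axis so both resolution and semantics are retained."""
    rh = shallow.shape[1] // deep.shape[1]
    rw = shallow.shape[2] // deep.shape[2]
    up = deep.repeat(rh, axis=1).repeat(rw, axis=2)  # nearest-neighbour
    return np.concatenate([shallow, up], axis=0)

shallow = np.random.rand(8, 16, 16)   # high-resolution, shallow layer
deep = np.random.rand(8, 4, 4)        # low-resolution, deep layer
fused = fuse(enhance(shallow), deep)
print(fused.shape)                    # (16, 16, 16)
```

In a real detector these gates would be learned (e.g. small convolutions or fully connected layers), but the sketch shows the data flow: enhancement is shape-preserving, while fusion doubles the channel count at the shallow feature's resolution.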
