Intelligent Robots

Haptic-Aided Navigation Vehicle: Enhancing Obstacle Detection in Blind Spots and Transparent Object Scenarios

  • 1. School of Software, Southeast University, Suzhou 215000, Jiangsu, China; 2. School of Automation, Southeast University, Nanjing 210018, China; 3. School of Cyber Science and Engineering, Southeast University, Nanjing 211102, China; 4. Key Laboratory of Measurement and Control of Complex Systems of Engineering of Ministry of Education, Nanjing 214135, China; 5. Nanjing Center for Applied Mathematics, Nanjing 211135, China; 6. Southeast University Shenzhen Research Institute, Shenzhen 518063, Guangdong, China

Received date: 2024-11-13

Accepted date: 2024-12-02

Online published: 2026-02-12

Abstract

As autonomous mobile robots are increasingly deployed in complex environments, traditional vision sensors and LiDAR encounter considerable limitations, particularly in detecting obstacles in blind spots or transparent objects. To address the issue of blind spots, we design a specialized haptic sensing structure and develop the haptic-aided navigation vehicle (HANV). This system integrates haptic sensors and LiDAR to deliver comprehensive perception, significantly enhancing close-range obstacle detection in areas that typically lie beyond the range of conventional sensors. To tackle the challenge of transparent obstacles, which both vision and LiDAR sensors often fail to detect, we employ a fusion of haptic sensors and LiDAR: the haptic system provides physical contact feedback, ensuring reliable detection of transparent obstacles such as glass, while LiDAR offers long-range sensing. This combination enables HANV to navigate effectively in environments with transparent obstacles, overcoming the limitations of traditional sensing systems. Experimental results indicate that the proposed haptic-LiDAR integration substantially improves obstacle detection in both blind spots and environments with transparent obstacles. HANV achieves high success rates, minimal collisions, and efficient obstacle avoidance, excelling particularly in complex, confined spaces where conventional systems prove inadequate. These findings underscore the efficacy of our approach in enhancing navigation performance in dynamic and challenging environments.
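To make the fusion principle concrete, the sketch below illustrates, in Python, the priority logic the abstract implies: a haptic contact signal is unambiguous physical evidence of an obstacle and therefore overrides LiDAR, while LiDAR ranges flag obstacles inside a safety envelope. Everything here (the function name, the FusedObstacleReport fields, the 0.35 m threshold) is an illustrative assumption, not the authors' actual HANV implementation.

```python
# Hypothetical sketch of the haptic-LiDAR fusion idea described in the
# abstract; all names and thresholds are illustrative assumptions, not
# the authors' implementation.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class FusedObstacleReport:
    obstacle_detected: bool
    source: str      # "haptic", "lidar", or "none"
    range_m: float   # estimated distance to the nearest obstacle


def fuse_haptic_lidar(lidar_ranges: Sequence[float],
                      haptic_contacts: Sequence[bool],
                      min_safe_range: float = 0.35) -> FusedObstacleReport:
    """Combine long-range LiDAR returns with contact-level haptic signals.

    Haptic contact overrides LiDAR: a transparent pane of glass may
    return no LiDAR echo, but physical contact with the haptic sensing
    structure is unambiguous evidence of an obstacle at zero range.
    """
    # Haptic layer: any triggered contact sensor means an obstacle is
    # touching the vehicle, regardless of what LiDAR reports.
    if any(haptic_contacts):
        return FusedObstacleReport(True, "haptic", 0.0)

    # LiDAR layer: flag obstacles that fall inside the safety envelope.
    nearest = min(lidar_ranges, default=float("inf"))
    if nearest < min_safe_range:
        return FusedObstacleReport(True, "lidar", nearest)

    return FusedObstacleReport(False, "none", nearest)


if __name__ == "__main__":
    # Glass-wall scenario: LiDAR sees through it, the haptic array does not.
    print(fuse_haptic_lidar(lidar_ranges=[4.8, 5.1, 5.0],
                            haptic_contacts=[False, True, False]))
```

The key design choice is the override order: because glass returns little or no LiDAR echo, a LiDAR-first policy would miss exactly the obstacles the haptic layer exists to catch.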

Cite this article

Li Mingwang, Li Xinde, Zhang Zhentong, Wang Zeyu, Zhao Haoming. Haptic-Aided Navigation Vehicle: Enhancing Obstacle Detection in Blind Spots and Transparent Object Scenarios [J]. Journal of Shanghai Jiaotong University (Science), 2026, 31(1): 167-175. DOI: 10.1007/s12204-025-2807-7
