Intelligent Connected Vehicle

Camera-Radar Fusion Sensing System Based on Multi-Layer Perceptron

YAO Tong (姚彤), WANG Chunxiang (王春香), QIAN Yeqiang (钱烨强)
  • (a. Department of Automation; b. University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China)

Received date: 2021-02-05

Online published: 2021-10-28

Abstract

Environmental perception is a key technology for autonomous driving. Owing to the limitations of any single sensor, multiple sensors are often used in practical applications. However, multi-sensor fusion raises questions such as which sensors to choose and how to fuse their data. To address these issues, we propose a machine-learning-based fusion sensing system for intelligent vehicles that combines a camera and a radar. First, an object detection algorithm is applied to the images captured by the camera; then the radar data are preprocessed and transformed into the camera coordinate frame, and a multi-layer perceptron model is proposed to associate the camera detection results with the radar data. The proposed fusion sensing system was verified by comparative experiments in a real-world environment. The experimental results show that the system can effectively fuse the camera and radar detection results and obtain accurate, comprehensive information about objects in front of the intelligent vehicle.

Cite this article

YAO Tong (姚彤), WANG Chunxiang (王春香), QIAN Yeqiang (钱烨强). Camera-Radar Fusion Sensing System Based on Multi-Layer Perceptron [J]. Journal of Shanghai Jiaotong University (Science), 2021, 26(5): 561-568. DOI: 10.1007/s12204-021-2345-x
