Automation & Computer Technologies

YOLO-VSF: An Improved YOLO Model by Incorporating Attention Mechanism for Object Detection in Traffic Scenes

1. School of Computer Science, Beijing Information Science and Technology University, Beijing 100101, China; 2. Beijing VanJee Technology Co., Ltd., Beijing 100193, China; 3. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China

Received date: 2023-08-03

Accepted date: 2023-11-16

Online published: 2024-07-04

Abstract

Intelligent transportation and autonomous driving systems place urgent demands on high-performance object detection techniques for traffic scenes. This paper proposes YOLO-VSF, an object detection model improved over YOLOv4, a representative model with excellent performance in the YOLO series. The main improvements are as follows: the CSPDarknet53 backbone feature extraction network of YOLOv4 is replaced with VGG16 to improve the feature extraction capability; the SENet attention mechanism is incorporated to improve the representation of salient and correlated features; and Focal Loss is integrated into the loss function to overcome the sample imbalance problem. In addition, detection performance on small targets is improved by increasing the resolution of the input images. Experimental results show that on the VanJee traffic image dataset provided by Beijing VanJee Technology Co., Ltd., the proposed YOLO-VSF model achieves a mean average precision (mAP) of 92.21%, an improvement of 3.04 percentage points over YOLOv4, while maintaining the detection speed of the original model. On the UA-DETRAC dataset, the average precision of YOLO-VSF is close to that of the latest YOLOv7 model, with the number of parameters reduced by 1.329 × 10⁷. The proposed method provides support for object detection in traffic scenes.
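
For readers who want a concrete picture of the two components named in the abstract, the following is a minimal PyTorch sketch of a squeeze-and-excitation (SE) attention block [19] and the binary focal loss [20]. It is not the authors' implementation: the class and function names, the reduction ratio of 16, and the alpha = 0.25, gamma = 2.0 defaults are illustrative assumptions taken from the cited papers' common settings and may differ from the settings used in YOLO-VSF.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    # Squeeze-and-excitation channel attention as defined by Hu et al. [19].
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pooling over the spatial dimensions -> (N, C).
        s = x.mean(dim=(2, 3))
        # Excitation: a bottleneck MLP produces one weight per channel.
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        # Recalibrate: scale each feature map by its learned channel weight.
        return x * w.unsqueeze(-1).unsqueeze(-1)

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    # Binary focal loss of Lin et al. [20]: the (1 - pt)^gamma factor
    # down-weights well-classified examples, mitigating class imbalance.
    p = torch.sigmoid(logits)
    pt = torch.where(target == 1, p, 1 - p)
    at = torch.where(target == 1, torch.full_like(p, alpha),
                     torch.full_like(p, 1 - alpha))
    return (-at * (1 - pt) ** gamma * torch.log(pt.clamp(min=1e-8))).mean()

A block like SEBlock would wrap feature maps produced by the backbone or neck, and the focal term would replace the standard cross-entropy confidence loss; the exact insertion points and loss weighting used in YOLO-VSF are described in the full paper, not here.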

Cite this article

Miao Jun, Gong Shaocui, Deng Yongqiang, Liang Hao, Li Juanjuan, Qi Honggang, Zhang Maoxuan. YOLO-VSF: An Improved YOLO Model by Incorporating Attention Mechanism for Object Detection in Traffic Scenes [J]. Journal of Shanghai Jiaotong University (Science), 2026, 31(2): 334-347. DOI: 10.1007/s12204-024-2751-y

References

[1] BAY H, TUYTELAARS T, VAN GOOL L. SURF: Speeded up robust features [M]// Computer Vision – ECCV 2006. Berlin, Heidelberg: Springer, 2006: 404-417.

[2] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection [C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego: IEEE, 2005: 886-893.

[3] VIOLA P, JONES M. Rapid object detection using a boosted cascade of simple features [C]// 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Kauai: IEEE, 2001: 511-518.

[4] SUYKENS J A K, VANDEWALLE J. Least squares support vector machine classifiers [J]. Neural Processing Letters, 1999, 9(3): 293-300.

[5] FREUND Y, SCHAPIRE R E. A decision-theoretic generalization of on-line learning and an application to boosting [M]//Computational Learning Theory. Berlin, Heidelberg: Springer, 1995: 23-37.

[6] FELZENSZWALB P, MCALLESTER D, RAMANAN D. A discriminatively trained, multiscale, deformable part model [C]//2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage: IEEE, 2008: 1-8.

[7] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation [C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE, 2014: 580-587.

[8] UIJLINGS J R R, VAN DE SANDE K E A, GEVERS T, et al. Selective search for object recognition [J]. International Journal of Computer Vision, 2013, 104(2): 154-171.

[9] GIRSHICK R. Fast R-CNN [C]//2015 IEEE International Conference on Computer Vision. Santiago: IEEE, 2015: 1440-1448.

[10] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.

[11] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection [C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 779-788.

[12] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6517-6525.

[13] REDMON J, FARHADI A. YOLOv3: An incremental improvement [DB/OL]. (2018-04-08). http://arxiv.org/abs/1804.02767

[14] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection [DB/OL]. (2020-04-23). http://arxiv.org/abs/2004.10934

[15] WANG C Y, MARK LIAO H Y, WU Y H, et al. CSPNet: A new backbone that can enhance learning capability of CNN [C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Seattle: IEEE, 2020: 1571-1580.

[16] HE K M, ZHANG X Y, REN S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.

[17] LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 8759-8768.

[18] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [DB/OL]. (2014-09-04). https://arxiv.org/abs/1409.1556

[19] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.

[20] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318-327.

[21] The UA-DETRAC dataset [EB/OL]. [2023-08-03]. https://detrac-db.rit.albany.edu/

[22] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context [M]//Computer Vision – ECCV 2014. Cham: Springer, 2014: 740-755.

[23] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge [J]. International Journal of Computer Vision, 2010, 88(2): 303-338.

[24] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications [DB/OL]. (2017-04-17). http://arxiv.org/abs/1704.04861

[25] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: Inverted residuals and linear bottlenecks [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 4510-4520.

[26] HOWARD A, SANDLER M, CHEN B, et al. Searching for MobileNetV3 [C]//2019 IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 1314-1324.

[27] HAN K, WANG Y H, TIAN Q, et al. GhostNet: More features from cheap operations [C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 1577-1586.

[28] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition [C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.

[29] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 2261-2269.

[30] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module [M]//Computer Vision – ECCV 2018. Cham: Springer, 2018: 3-19.

[31] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks [C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 11531-11539.

[32] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors [C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023: 7464-7475.

[33] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector [M]//Computer Vision – ECCV 2016. Cham: Springer, 2016: 21-37.

[34] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale [DB/OL]. (2020-10-22). https://arxiv.org/abs/2010.11929

[35] LIU Z, LIN Y T, CAO Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows [C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 9992-10002.

[36] LI Y H, YAO T, PAN Y W, et al. Contextual transformer networks for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(2): 1489-1500.

