A Domain Adaptive Semantic Segmentation Network Based on Improved Transformation Network

  • 1. Army Engineering University, Shijiazhuang 050003, China
    2. Army Command College, Nanjing 210000, China
    3. Joint Operations Academy, National Defense University, Shijiazhuang 050003, China
    4. 32181 Troops, Xi’an 710032, China
    5. Research Institute of Chemical Defense, Academy of Military Sciences, Beijing 102205, China

ZHANG Junning (1992-), male, born in Bazhong, Sichuan Province; Ph.D. candidate; research interests: deep learning, SLAM, computer vision, and pattern recognition.

Received date: 2019-10-23

Online published: 2021-10-08

Funding

National Natural Science Foundation of China (51205405, 51305454)

Cite this article

ZHANG Junning, SU Qunxing, WANG Cheng, XU Chao, LI Yining. A domain adaptive semantic segmentation network based on improved transformation network[J]. Journal of Shanghai Jiao Tong University, 2021, 55(9): 1158-1168. DOI: 10.16183/j.cnki.jsjtu.2019.307.

Abstract

Since manual semantic annotation is costly and time-consuming, unsupervised semantic segmentation based on domain adaptation is highly necessary. However, scenes or pixels with large domain gaps tend to restrict model training and reduce segmentation accuracy. In this paper, a domain adaptive semantic segmentation network (DA-SSN) based on an improved transformation network is proposed, which eliminates the interference of large-gap images and pixels through staged training and an interpretable mask. First, to address the large domain gap between some source images and the target images, which makes the network model difficult to train, a training loss threshold is used to separate the large-gap portion of the source dataset, and a staged transformation network training strategy is proposed: it improves the transformation quality of large-gap source images while preserving the semantic alignment of small-gap source images. Then, to further narrow the gap between individual source-image pixels and the target domain, an interpretable mask is proposed. For each pixel, the confidence that its gap between the source and target domains can be narrowed is predicted, and the training loss of low-confidence pixels is ignored. This eliminates the influence of large-gap pixels on the semantic alignment of other pixels, so that model training focuses only on the domain gap of high-confidence pixels. The results show that the proposed algorithm achieves a higher segmentation accuracy than the original domain adaptive semantic segmentation network. Compared with other popular algorithms, the proposed method obtains higher-quality semantic alignment, demonstrating its advantage in accuracy.
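To make the two mechanisms concrete, the sketch below shows, in PyTorch-style Python, how a loss-threshold partition of the source dataset and a confidence-based interpretable mask over the per-pixel training loss could be wired together. All names (`partition_by_gap`, `masked_segmentation_loss`, the threshold `tau`) are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch of the two ideas in the abstract, assuming a
# PyTorch-style training setup. Names and thresholds are illustrative,
# not the authors' actual code.
import torch
import torch.nn.functional as F

def partition_by_gap(translation_losses, threshold):
    """Split source images into small-gap and large-gap subsets by the
    per-image loss of the transformation network (staged training:
    stage 1 uses the small-gap subset, stage 2 adds the large-gap one)."""
    small_gap = [i for i, loss in enumerate(translation_losses) if loss <= threshold]
    large_gap = [i for i, loss in enumerate(translation_losses) if loss > threshold]
    return small_gap, large_gap

def masked_segmentation_loss(logits, labels, gap_confidence, tau=0.5):
    """Cross-entropy that ignores pixels whose predicted gap-narrowing
    confidence falls below tau (the 'interpretable mask')."""
    # Per-pixel cross-entropy: logits (B, C, H, W), labels (B, H, W) -> (B, H, W)
    ce = F.cross_entropy(logits, labels, reduction="none")
    mask = (gap_confidence >= tau).float()  # 1 for high-confidence pixels
    # Average the loss over high-confidence pixels only
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```

Under these assumptions, `partition_by_gap` would run once after a warm-up pass of the transformation network to schedule the two training stages, and `masked_segmentation_loss` would replace the plain cross-entropy so that low-confidence (large-gap) pixels do not pull the semantic alignment of the remaining pixels.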
