Journal of Shanghai Jiaotong University
A Domain Adaptive Semantic Segmentation Network Based on Improved Transformation Network
Received date: 2019-10-23
Online published: 2021-10-08
Because manual semantic annotation is costly and time-consuming, domain-adaptive semantic segmentation is highly desirable. However, source images or pixels with a large domain gap to the target scene can hinder model training and reduce segmentation accuracy. In this paper, a domain adaptive semantic segmentation network (DA-SSN) based on an improved transformation network is proposed, which eliminates the interference of large-gap images and pixels through staged training and an interpretable mask. First, to address the large domain gap between some source images and the target images, which makes the network model difficult to train, a training-loss threshold is used to separate out the source images with large gaps, and a staged transformation-network training strategy is proposed: while the semantic alignment of small-gap source images is preserved, the transformation quality of large-gap source images is improved. Second, to further reduce the gap between individual source-image pixels and the target domain, an interpretable mask is proposed. The gap between each pixel in the source domain and the target domain is predicted, and the training loss of low-confidence pixels is ignored, which removes the influence of large-gap pixels on the semantic alignment of the remaining pixels, so that model training focuses only on the domain gap of high-confidence pixels. The results show that the proposed algorithm achieves higher segmentation accuracy than the original domain adaptive semantic segmentation networks. Compared with other popular algorithms, the proposed method obtains higher-quality semantic alignment, demonstrating its accuracy advantage.
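The abstract describes two concrete mechanisms: a training-loss threshold that splits the source dataset into small-gap and large-gap subsets for staged transformation-network training, and an interpretable mask that drops the per-pixel training loss wherever the predicted source-to-target gap is large. The following is a minimal PyTorch-style sketch of both ideas as we read them from the abstract; all names (`split_source_by_loss`, `gap_score`, `confidence_threshold`) are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the two mechanisms described in the abstract (assumed names,
# not the paper's code).

import torch
import torch.nn.functional as F


def split_source_by_loss(per_image_losses, loss_threshold):
    """Staged-training idea: partition source images into small-gap and
    large-gap subsets by comparing each image's transformation-network
    training loss against a threshold."""
    small_gap = [i for i, loss in enumerate(per_image_losses) if loss <= loss_threshold]
    large_gap = [i for i, loss in enumerate(per_image_losses) if loss > loss_threshold]
    return small_gap, large_gap


def masked_segmentation_loss(logits, labels, gap_score, confidence_threshold=0.5):
    """Interpretable-mask idea: ignore the per-pixel loss wherever the
    predicted source-to-target domain gap is large (confidence is low).

    logits:     (N, C, H, W) segmentation predictions on transformed source images
    labels:     (N, H, W) source ground-truth labels
    gap_score:  (N, H, W) predicted per-pixel domain gap in [0, 1]
    """
    per_pixel = F.cross_entropy(logits, labels, reduction="none")  # (N, H, W)
    keep = (gap_score < confidence_threshold).float()              # 1 = high confidence
    # Average the loss only over the retained (high-confidence) pixels.
    return (per_pixel * keep).sum() / keep.sum().clamp(min=1.0)
```

Under this reading, the first stage would train the transformation network on the small-gap subset only, the second stage would bring in the large-gap subset once small-gap semantic alignment is established, and the masked loss would replace the standard cross-entropy when training the segmentation network on transformed source images.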
ZHANG Junning, SU Qunxing, WANG Cheng, XU Chao, LI Yining. A Domain Adaptive Semantic Segmentation Network Based on Improved Transformation Network[J]. Journal of Shanghai Jiaotong University, 2021, 55(9): 1158-1168. DOI: 10.16183/j.cnki.jsjtu.2019.307