Medicine-Engineering Interdisciplinary Research

Improving Colonoscopy Polyp Detection Rate Using Semi-Supervised Learning

(1. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; 2. Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China; 3. Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China)

Received date: 2021-04-16

Accepted date: 2021-08-02

Online published: 2023-07-31

Abstract

Colorectal cancer is one of the biggest health threats to humans and takes thousands of lives every year. Colonoscopy is the gold standard in clinical practice to inspect the intestinal wall, detect polyps and remove polyps in early stages, preventing polyps from becoming malignant and forming colorectal cancer. In recent years, computer-aided polyp detection systems have been widely used in colonoscopies to improve the quality of colonoscopy examination and increase the polyp detection rate. Currently, the most efficient computer-aided systems are built with machine learning methods. However, developing such a computer-aided detection system requires experienced doctors to label a large number of images from colonoscopy videos, which is extremely time-consuming, laborious and expensive. One possible solution is to adopt semi-supervised learning, which can build a detection system on a dataset in which part of the data does not need to be labeled. In this paper, on the basis of a state-of-the-art object detection method and a semi-supervised learning technique, we design and implement a semi-supervised colonoscopy polyp detection system containing four main steps: running standard supervised training with all labeled data; running inference on unlabeled data to obtain pseudo labels; applying a set of strong augmentations to both the unlabeled data and their pseudo labels; and combining the labeled data with the unlabeled data and its pseudo labels to retrain the detector. The semi-supervised learning system is evaluated on both a public dataset and our original private dataset and proves its effectiveness. In addition, the inference speed of the semi-supervised learning system meets the requirement of real-time operation.
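To make the four-step procedure summarized above more concrete, the Python sketch below outlines one possible pseudo-labeling loop. The Detector interface, the strong_augment placeholder and the conf_thresh confidence threshold are illustrative assumptions introduced here, not the paper's exact implementation, which builds on a state-of-the-art object detector.

from typing import List, Protocol, Tuple
import random


class Image:
    """Stand-in for a colonoscopy video frame."""
    pass


Box = Tuple[float, float, float, float]   # (x, y, w, h) bounding box, normalized
Sample = Tuple[Image, List[Box]]          # a frame together with its polyp boxes


class Detector(Protocol):
    """Any object detector exposing training and inference (e.g. a YOLO-style model)."""
    def fit(self, data: List[Sample]) -> None: ...
    def predict(self, image: Image) -> List[Tuple[Box, float]]: ...  # (box, confidence)


def strong_augment(image: Image, boxes: List[Box]) -> Sample:
    """Placeholder for strong augmentation (color jitter, cutout, flips, ...);
    a real implementation must transform the boxes consistently with the image."""
    return image, boxes


def semi_supervised_train(detector: Detector,
                          labeled: List[Sample],
                          unlabeled: List[Image],
                          conf_thresh: float = 0.5) -> Detector:
    # Step 1: standard supervised training with all labeled data.
    detector.fit(labeled)

    # Step 2: run inference on unlabeled frames to obtain pseudo labels,
    # keeping only sufficiently confident detections.
    pseudo: List[Sample] = []
    for image in unlabeled:
        boxes = [box for box, conf in detector.predict(image) if conf >= conf_thresh]
        if boxes:
            pseudo.append((image, boxes))

    # Step 3: apply strong augmentation to the unlabeled frames and their pseudo labels.
    pseudo = [strong_augment(img, boxes) for img, boxes in pseudo]

    # Step 4: combine labeled and pseudo-labeled data and retrain the detector.
    combined = labeled + pseudo
    random.shuffle(combined)
    detector.fit(combined)
    return detector

Filtering pseudo labels by confidence before retraining is a common design choice in this kind of pipeline; the threshold value and augmentation set would need to be tuned for colonoscopy data.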

Cite this article

YAO Leyu1 (姚乐宇), HE Fan1,3 (何凡), PENG Haixia2* (彭海霞), WANG Xiaofeng2 (王晓峰), ZHOU Lu2 (周璐), HUANG Xiaolin1,3* (黄晓霖). Improving Colonoscopy Polyp Detection Rate Using Semi-Supervised Learning [J]. Journal of Shanghai Jiaotong University (Science), 2023, 28(4): 441. DOI: 10.1007/s12204-022-2519-1
