Biomedical Engineering

Design of Mandibular Angle Osteotomy Plane Based on Point Cloud Semantic Segmentation Algorithm

1. College of Mechanical Engineering, Donghua University, Shanghai 201620, China
2. Department of Plastic and Reconstructive Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
3. Institute of Forming Technology and Equipment, Shanghai Jiao Tong University, Shanghai 200030, China

Received date: 2021-04-05

Online published: 2022-08-02

Abstract

Mandibular angle osteotomy has become a popular craniofacial plastic procedure in recent years. Preoperative planning for the osteotomy is usually performed manually by an experienced surgeon, which is cumbersome and time-consuming. To improve the efficiency of osteotomy planning, a design method for the mandibular angle osteotomy plane based on a point cloud semantic segmentation network is proposed. After three-dimensional reconstruction of skull computed tomography (CT) data, the three-dimensional model of the mandible is converted into point cloud data by uniform sampling. The resection area of the mandible is predicted by the proposed algorithm and then used to calculate the mandibular angle osteotomy plane. The proposed semantic segmentation network consists of two parts: a local feature extraction layer based on an attention mechanism, which extracts fine-grained local structure information, and a non-local feature extraction layer based on the Transformer, which extracts the global context information of the point cloud. On the constructed mandible semantic segmentation dataset, the proposed algorithm is compared with other point cloud semantic segmentation algorithms. The results show that the proposed algorithm achieves the best prediction of the mandibular angle resection area, outperforming current mainstream point cloud semantic segmentation algorithms.
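
For readers who want a concrete picture of the pipeline, the sketch below illustrates the three steps the abstract describes, under stated assumptions: area-weighted uniform sampling of a mandible mesh into a point cloud, a minimal per-point segmentation head that uses a single global self-attention (Transformer) layer as a stand-in for the paper's two-branch network (the local attention branch is omitted for brevity), and a least-squares fit of the osteotomy plane to the points predicted as resection area. All function names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the pipeline described in the abstract.
    import numpy as np
    import torch
    import torch.nn as nn

    def sample_mesh_uniformly(vertices, faces, n_points=4096):
        """Uniformly sample points on a triangle mesh (area-weighted)."""
        tris = vertices[faces]                                  # (F, 3, 3)
        areas = 0.5 * np.linalg.norm(                           # triangle areas
            np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
        idx = np.random.choice(len(faces), n_points, p=areas / areas.sum())
        # barycentric coordinates, reflected so samples stay inside each triangle
        u, v = np.random.rand(n_points, 1), np.random.rand(n_points, 1)
        flip = (u + v) > 1.0
        u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
        t = tris[idx]
        return t[:, 0] + u * (t[:, 1] - t[:, 0]) + v * (t[:, 2] - t[:, 0])

    class SegNet(nn.Module):
        """Minimal stand-in: pointwise embedding, one global self-attention
        layer for non-local context, and a per-point classifier."""
        def __init__(self, d=64, n_classes=2):
            super().__init__()
            self.embed = nn.Sequential(nn.Linear(3, d), nn.ReLU(), nn.Linear(d, d))
            self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
            self.head = nn.Linear(d, n_classes)

        def forward(self, xyz):                                 # (B, N, 3)
            f = self.embed(xyz)
            g, _ = self.attn(f, f, f)                           # global context
            return self.head(f + g)                             # (B, N, n_classes)

    def fit_plane(points):
        """Least-squares plane through predicted resection points via SVD.
        Returns (unit normal, centroid)."""
        c = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - c)
        return vt[-1], c

    # Usage sketch (V: (n, 3) float vertices, F: (m, 3) int face indices):
    # pts = sample_mesh_uniformly(V, F)
    # logits = SegNet()(torch.tensor(pts, dtype=torch.float32)[None])
    # mask = logits.argmax(-1)[0].numpy().astype(bool)
    # normal, point = fit_plane(pts[mask])       # osteotomy plane parameters

The plane-fitting step exploits the fact that the smallest right singular vector of the centered point set is the direction of least variance, i.e. the normal of the best-fitting plane in the least-squares sense.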

Cite this article

LÜ Chaofan, YAN Yingjie, LIN Li, CHAI Gang, BAO Jinsong. Design of Mandibular Angle Osteotomy Plane Based on Point Cloud Semantic Segmentation Algorithm[J]. Journal of Shanghai Jiaotong University, 2022, 56(11): 1509-1517. DOI: 10.16183/j.cnki.jsjtu.2021.103
