Temporomandibular Joint Segmentation Using Deep Learning for Automated Three-Dimensional Reconstruction
J Oral Med Pain 2024;49:109-117
Published online December 30, 2024;  https://doi.org/10.14476/jomp.2024.49.4.109
© 2024 Korean Academy of Orofacial Pain and Oral Medicine

Young-Tae Choi1│Ho-Jun Song2│Jae-Seo Lee3│Yeong-Gwan Im4,5

1Chonnam National University School of Dentistry, Gwangju, Korea
2Department of Dental Materials, Dental Science Research Institute, Chonnam National University School of Dentistry, Gwangju, Korea
3Department of Oral and Maxillofacial Radiology, Dental Science Research Institute, Chonnam National University School of Dentistry, Gwangju, Korea
4Department of Oral Medicine, Dental Science Research Institute, Chonnam National University School of Dentistry, Gwangju, Korea
5Department of Oral Medicine, Chonnam National University Dental Hospital, Gwangju, Korea
Correspondence to: Yeong-Gwan Im
Department of Oral Medicine, Dental Science Research Institute, Chonnam National University School of Dentistry, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Korea
E-mail: imygwise@jnu.ac.kr
https://orcid.org/0000-0003-2703-1475
Received November 13, 2024; Revised December 3, 2024; Accepted December 4, 2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Purpose: Cone beam computed tomography (CBCT) is widely used to evaluate the temporomandibular joint (TMJ). For the three-dimensional (3D) assessment of the TMJ, segmentation of the mandibular condyle and articular fossa is essential. This study aimed to perform deep learning-based 3D segmentation of the mandibular condyle on CBCT images and evaluate the performance of the segmentation.
Methods: CBCT scan data from 99 patients (mean age: 53.3±19.2 years) diagnosed with TMJ disorders were analyzed. From the CBCT images, sagittal, coronal, and axial planes showing the mandibular condyle were selected and combined to form two-dimensional (2D) images. The U-Net deep learning model was used to exclusively segment the mandibular condyle area from the 2D images. From these results, 3D images of the mandibular condyle were reconstructed. Accuracy, precision, recall, and the Dice coefficient were calculated to appraise segmentation performance in each plane.
Results: The average Dice coefficient was 0.92 for the coronal and axial planes and 0.83 for the sagittal plane. The CBCT image-based segmentation performance of the mandibular condyle in the coronal and axial planes exceeded that in the sagittal plane. The sharpness and uniformity of the 2D images affected segmentation performance, with segmentation errors more likely to occur in non-uniform images. Certain segmentation errors were corrected through software processing. Finally, the segmented mandibular condyle images were applied to the CBCT data to reconstruct a 3D TMJ model.
Conclusions: Mandibular condyle 3D segmentation on CBCT images using U-Net may help evaluate and diagnose TMJ disorders. The proposed segmentation method may assist clinicians in efficiently analyzing CBCT images, particularly in cases involving anatomical abnormalities.
Keywords: Cone beam computed tomography; Deep learning; Segmentation; Temporomandibular joint; U-Net
INTRODUCTION

Temporomandibular disorder (TMD) is a musculoskeletal condition that causes pain and dysfunction in the masticatory system. Among TMDs, temporomandibular joint disorder (TMJD), a disease affecting the temporomandibular joint (TMJ), includes arthralgia, disc-condyle complex disorders, luxation, degenerative joint disease (DJD), and synovial chondromatosis [1]. Reportedly, the prevalence of TMJD is approximately 31% in adults and older individuals and 11% in children and adolescents, while that of DJD in adults and older individuals is 9.8% [2]. Another study reported a wide variation in the prevalence of DJD, ranging from 18% to 85% [3].

Various imaging techniques are used to evaluate TMJ status. Conventional radiography, computed tomography, and cone beam computed tomography (CBCT) are useful for assessing bony structures and changes. To evaluate the position and morphology of the articular disc, arthrography and magnetic resonance imaging are helpful. Among these, CBCT is considered the most valuable and reliable tool for assessing bony changes in the TMJ [4]. CBCT offers multiple advantages, including lower radiation exposure, the ability to evaluate TMJ bony structures in the sagittal, coronal, and axial planes, and three-dimensional (3D) reconstruction using relevant software [5].

Three-dimensional imaging of the TMJ provides critical information for diagnosing and assessing hard tissue as well as for the quantitative analysis of tissue changes in angle and distance. Fused 3D CBCT images improve diagnostic accuracy by providing information on the progression of bony lesions and evaluating trabecular bone structure, enabling bone-change monitoring [6]. However, manual segmentation, which is currently considered the reference standard for 3D TMJ segmentation, possesses the disadvantages of being both time-consuming and labor-intensive [7].

Recently, studies have increasingly utilized deep learning to perform 3D TMJ segmentation based on CBCT data. Eşer et al. [8] used deep learning to segment the TMJ in the sagittal plane on CBCT images and achieved remarkable accuracy in classifying osteoarthritis. Ma et al. [9] applied semi-automatic segmentation to CBCT images to evaluate positional mandibular condyle changes after orthognathic surgery in patients with mandibular prognathism. They found that the condylar position stabilized 3 months postoperatively but did not return to its preoperative state within a year. Vinayahalingam et al. [7] segmented CBCT images, trained an algorithm, and investigated artificial intelligence-based automatic segmentation using 3D U-Net, confirming exceptional accuracy, speed, and reliability.

Deep learning has proven highly effective in medical image segmentation, yielding faster and more efficient results with greater accuracy than traditional methods. Most previous studies on CBCT data-based 3D segmentation have employed a method wherein independent two-dimensional (2D) images from each axis are segmented using deep learning models and subsequently reconstructed into 3D images [7,8,10,11]. In contrast, this study proposes a method that merges multiple 2D images from the three axes of CBCT data into a single unified image, which is subsequently segmented using a U-Net deep learning model to create 3D reconstructions. Additionally, this study aimed to evaluate the performance of this deep learning approach.

MATERIALS AND METHODS

1. Ethical Approval and Research Process Workflow

Ethical approval was obtained from the Institutional Review Board of Chonnam National University Dental Hospital (CNUDH-2024-010). Written informed consent was waived owing to the study's retrospective design. All experimental procedures conformed to the latest revision of the Declaration of Helsinki. Fig. 1 presents a flowchart illustrating the overall workflow, encompassing data acquisition, preprocessing, 2D image generation, U-Net model training, and 3D reconstruction.

2. Study Participant Selection and Imaging Data Collection

This study utilized 99 CBCT scans from 99 patients (21 men, 78 women; mean age: 53.3±19.2 years) who had undergone TMD evaluation at the Department of Oral Medicine, Chonnam National University Dental Hospital, between 2013 and 2017. According to radiological examinations by board-certified oral and maxillofacial radiologists, 78 of the 99 scans revealed bony changes in the mandibular condyle, while 21 were normal, exhibiting no bony changes. Diagnoses of cases with bony changes included osteoarthritis and osteochondromatosis.

The CBCT scans were acquired using a CS 9300 scanner (Carestream). The CBCT scan parameters were as follows: tube voltage, 90 kV; tube current, 10 mA; scan time, 8 s; voxel size, 0.18 mm; field of view, 8×8 cm; and number of Digital Imaging and Communications in Medicine (DICOM) images, 512.

3. Two-Dimensional Image Processing of Three-Dimensional CBCT Data and Segmentation of the Mandibular Condyle in the TMJ

CBCT uses cone-shaped X-rays to capture 2D data, which are detected by a flat panel detector after passing through the object. These 2D data are subsequently processed using mathematical algorithms to generate 3D volumetric information, and hundreds of 2D images are extracted in DICOM format. These images can be reconstructed into 3D and diverse sectional views using image processing software. The sectional images are classified based on the axis direction into sagittal, coronal, and axial planes [12].
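
As an illustration of this step, the following minimal Python sketch loads an axial DICOM series, stacks it into a volume, and extracts one 2D section per plane. It assumes the pydicom and numpy libraries; the folder name is hypothetical.

import glob
import numpy as np
import pydicom

# Load the axial slices and sort them by position along the z-axis.
slices = [pydicom.dcmread(f) for f in glob.glob("cbct_case/*.dcm")]
slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))

# Stack into a 3D volume indexed as (z, y, x).
volume = np.stack([ds.pixel_array for ds in slices]).astype(np.int16)

# One 2D section per plane, taken through the middle of the volume.
axial    = volume[volume.shape[0] // 2, :, :]  # fixed z
coronal  = volume[:, volume.shape[1] // 2, :]  # fixed y
sagittal = volume[:, :, volume.shape[2] // 2]  # fixed x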

In this study, TMJ CBCT images were displayed as 2D sagittal, coronal, and axial plane images from the multi-planar reconstruction. However, a single 2D image is insufficient to accurately visualize the 3D structure of the TMJ. As shown in Fig. 2, although overlapping the 2D images in the sagittal, coronal, and axial planes provides a general view from one direction, observing the exact 3D shape remains challenging.

In the 3D image of the TMJ created using 3D mapping software (Fig. 3A), the mandibular condyle is not clearly visible owing to overlapping with the articular eminence. However, by selecting a specific region of the DICOM data in the sagittal plane, the 3D image (Fig. 3B) allows partial observation of the mandibular condyle that had previously been obscured by the articular eminence. The overlapping articular eminence data were subsequently removed, and the 3D image was reverted to a 2D image (Fig. 3C).

Using this method, the 3D image of the selected region was reverted to a 2D image, and the mandibular condyle region was segmented from the 2D image (Fig. 3D). After completing mandibular condyle segmentation in the sagittal plane, data from regions other than the extracted mandibular condyle were set to zero to remove both soft and hard tissue information. As a result, a complete 3D image of the mandibular condyle in the sagittal direction was obtained (Fig. 3E). However, when this 3D image was observed from the coronal plane, unnecessary information outside the mandibular condyle remained visible (Fig. 3F). To address this problem, the coronal view was reconverted into a 2D image (Fig. 3G) and the mandibular condyle was re-segmented (Fig. 3H). After removing the superfluous regions from the DICOM data, a more accurate 3D image of the mandibular condyle was constructed (Fig. 3I). The same method was applied to the axial plane (Fig. 3J), where redundant regions outside the mandibular condyle were segmented and removed (Fig. 3K), resulting in a final 3D image in which only the mandibular condyle was precisely segmented, with unnecessary regions removed from all three planes (Fig. 3L).
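
The successive plane-wise masking described above can be sketched as follows. This is an illustrative numpy approximation with hypothetical stand-ins for the volume and the three binary masks, not the software actually used in this study.

import numpy as np

# Hypothetical stand-ins: a small CBCT volume (z, y, x) and one binary
# mask per plane, as would be produced by the segmentation step.
volume = np.random.randint(0, 4096, size=(64, 64, 64), dtype=np.int16)
sagittal_mask = np.ones((64, 64), dtype=bool)  # spans (z, y)
coronal_mask = np.ones((64, 64), dtype=bool)   # spans (z, x)
axial_mask = np.ones((64, 64), dtype=bool)     # spans (y, x)

def apply_plane_mask(vol, mask, axis):
    # Zero every voxel whose projection along `axis` falls outside `mask`.
    return np.where(np.expand_dims(mask, axis=axis), vol, 0)

# Mask in the sagittal, then coronal, then axial planes: only voxels lying
# inside the condyle silhouette in all three planes survive (cf. Fig. 3L).
volume = apply_plane_mask(volume, sagittal_mask, axis=2)
volume = apply_plane_mask(volume, coronal_mask, axis=1)
volume = apply_plane_mask(volume, axial_mask, axis=0)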

4. Labeling and Training for U-Net Deep Learning Model Application

The CBCT data comprised 512 2D axial images, which were reconstructed into the sagittal, coronal, and axial planes. The mandibular condyle was segmented in each direction, and information from the remaining regions was removed from the DICOM files to create 3D images. However, manually segmenting the mandibular condyle from 2D images in all three planes is time-consuming; therefore, we endeavored to automate this process using a deep learning model.

U-Net, a deep learning model for segmenting regions from 2D images, features a U-shaped structure with an encoder and decoder. It uses skip connections to link corresponding encoder and decoder stages, preserving high-resolution information. Pooling layers capture multi-scale features, while upsampling restores the original image size, enabling the model to learn features at different scales. U-Net is known for delivering accurate image segmentation performance even with small training datasets [13].
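
The following minimal PyTorch sketch illustrates this encoder-decoder structure with skip connections; the depth and channel counts are assumptions chosen for brevity, not the exact configuration trained in this study.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)                          # downsampling
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # upsampling
        self.dec2 = double_conv(128, 64)   # 128 = 64 upsampled + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)    # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, 1, 1)    # per-pixel condyle logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip link
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip link
        return self.head(d1)

# One grayscale 256x256 slice in, one segmentation logit map out.
logits = MiniUNet()(torch.randn(1, 1, 256, 256))  # shape (1, 1, 256, 256)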

As shown in Fig. 4, a labeling process is required to apply U-Net. In this study, three 2D images from the sagittal, coronal, and axial planes were extracted from each of the 99 DICOM files. Labeling was performed by a fourth-year dental student with professional training using dedicated software developed for this task by one of the authors. Data from 99 patients were used, with 79 images for training, 10 for validation, and 10 for testing. Horizontal and vertical image flipping was employed as part of the data augmentation process to address the limited training data and improve model generalizability.
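
The flip augmentation and the 79/10/10 split can be sketched as follows. The image and label arrays are hypothetical stand-ins; note that each flip must be applied identically to images and labels so every mask stays aligned with its slice.

import numpy as np

# Hypothetical stand-ins for the 99 overlapped 2D images and their labels.
images = np.random.rand(99, 256, 256).astype(np.float32)
labels = np.random.rand(99, 256, 256) > 0.5

def augment_flips(imgs, lbls):
    # Return the originals plus horizontally and vertically flipped copies.
    return (np.concatenate([imgs, imgs[:, :, ::-1], imgs[:, ::-1, :]]),
            np.concatenate([lbls, lbls[:, :, ::-1], lbls[:, ::-1, :]]))

# 79 cases for training (augmented), 10 for validation, 10 for testing.
train_x, train_y = augment_flips(images[:79], labels[:79])
val_x, val_y = images[79:89], labels[79:89]
test_x, test_y = images[89:], labels[89:]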

5. Evaluation

To evaluate the segmentation results of the U-Net model, true positive (TP), false positive (FP), true negative (TN), and false negative (FN) counts were defined at the pixel level based on the ground-truth and predicted images. Here, TP refers to the overlapping area between the actual and predicted regions, FP refers to the area predicted but not present in the actual region, and FN refers to the actual region that was not predicted. The Dice coefficient is a metric used to measure the similarity between two sets; in image segmentation, it quantifies the overlap between the predicted region and the ground truth. The Dice coefficient ranges from 0 to 1, where a value closer to 1 indicates a higher degree of overlap and a value of 1 indicates that the two regions match perfectly. Based on these definitions, accuracy, precision, recall, and the Dice coefficient were calculated using the following formulas:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Dice coefficient = 2TP / (2TP + FP + FN)
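
A minimal numpy sketch of these pixel-level metrics, assuming binary ground-truth and predicted masks of equal shape:

import numpy as np

def segmentation_metrics(gt, pred):
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum(gt & pred)    # overlap of actual and predicted regions
    fp = np.sum(~gt & pred)   # predicted but not in the actual region
    fn = np.sum(gt & ~pred)   # actual region that was not predicted
    tn = np.sum(~gt & ~pred)
    return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
            "precision": tp / (tp + fp),
            "recall": tp / (tp + fn),
            "dice": 2 * tp / (2 * tp + fp + fn)}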

RESULTS

Table 1 shows the performance evaluation of the U-Net segmentation results compared with the ground-truth values. The average accuracies of the sagittal, coronal, and axial planes were 0.58, 0.65, and 0.73, respectively, with the sagittal plane exhibiting the lowest accuracy. Similarly, the precision, recall, and Dice coefficient values of the sagittal images were lower than those of the coronal and axial images.

The Dice coefficient, which best reflects the similarity between the ground-truth and predicted images, was 0.83, 0.92, and 0.92 for the sagittal, coronal, and axial planes, respectively, indicating superior segmentation performance for the coronal and axial images.

Fig. 5 presents the TMJ segmentation results from the U-Net model on the labeled data from the sagittal, coronal, and axial planes depicted in Fig. 4. Fig. 5A, C, E depict cases where segmentation was correctly performed, whereas Fig. 5B, D, F highlight instances where some errors occurred. These errors typify cases where, as indicated by arrow a in Fig. 5B, extra regions were independently segmented; as indicated by arrow b in Fig. 5D, part of the region of interest was omitted; or as indicated by arrow c in Fig. 5F, certain regions were incorrectly included in the region of interest.

As illustrated in Fig. 5A, segmentation was error-free in images with clear and uniform quality; however, certain errors occurred in non-uniform images. For example, although Fig. 5A, B display similar TMJ structures, differences in overall image clarity and uniformity caused segmentation errors in the latter. In Fig. 5D, the uneven bone density resulting from osteoporotic changes within the mandibular condyle caused error b, in which part of the region was omitted.

Fig. 6 displays examples of 3D mandibular condyle images segmented using the U-Net deep learning model. The shape of each mandibular condyle, segmented in 3D, accurately reflects the pre-existing diagnosis. In particular, in Fig. 6C, some areas are not completely separated because the distinction between the condyle and articular fossa was unclear, whereas in Fig. 6D, the segmentation faithfully reproduced the surface defects caused by erosive bone changes.

DISCUSSION

In this study, 2D images were constructed by overlapping specific regions from the sagittal, coronal, and axial planes of TMJ CBCT data, and the U-Net model was used to segment the mandibular condyle. Unlike previous methods that rely on the independent segmentation of 2D slices and subsequent reconstruction into 3D models [7,8,10,11], this method directly integrates 2D data, resulting in a more seamless and efficient 3D reconstruction process. Furthermore, this approach was particularly effective for cases involving complex anatomical variations or abnormalities in the mandibular condyle, as evidenced by the results. U-Net was selected for its proven effectiveness in biomedical image segmentation, specifically when working with small datasets [13]. Its architecture, featuring skip connections and multi-scale learning, ensures high-resolution feature preservation and enhances segmentation accuracy, even with limited training data [14]. The deep learning model for segmenting the sagittal, coronal, and axial planes of the TMJ yielded accuracy, precision, recall, and Dice coefficient values of 0.58-0.73, 0.82-0.91, 0.86-0.98, and 0.83-0.92, respectively.

Vinayahalingam et al. [7] employed a 3D U-Net-based, three-step deep learning method for TMJ segmentation, reporting accuracy, precision, recall, and Dice coefficient values of 0.995, 0.978, 0.975, and 0.976, respectively, for the mandibular condyle. Eşer et al. [8] used the You Only Look Once (YOLO) v5 model for TMJ segmentation, achieving accuracy, precision, recall, and F1 score values of 0.9953, 0.9953, 1, and 0.9976, respectively. Le et al. [10] utilized a novel algorithm, MandSeg, to segment the mandibular condyle and ramus, reporting accuracy, recall, specificity, and F1 score values of 0.9996, 0.93, 0.9998, and 0.91, respectively. The F1 score is a metric representing the balance between precision and recall.

Several factors may explain the lower performance in this study compared with that in other studies. First, data quantity and quality: the performance of deep learning models largely relies on the amount and quality of the training data. The CBCT data used in this study might have been limited in quantity or could have had lower resolution and quality. Since errors occurred in non-uniform images, differences in data preprocessing or image quality might have affected the results. Second, image complexity: in this study, the sagittal images exhibited more complex shapes than those in the other planes, potentially contributing to inferior segmentation performance. Third, model differences: while this study used the U-Net model, other studies have used more specialized algorithms, such as 3D U-Net, YOLOv5, and MandSeg, which have displayed superior performance. Each model is optimized for specific purposes, and 3D models, in particular, can better capture 3D characteristics. Although U-Net performs well with 2D images, it may have limitations relative to inherently 3D models. Finally, data preprocessing and labeling quality also significantly affect performance. Incomplete or inconsistent labeling might have caused the model to learn incorrect patterns, leading to degraded performance.

In this study, the sagittal plane yielded lower average accuracy, precision, recall, and Dice coefficient values than the coronal and axial planes. The Dice coefficient, which effectively reflects the similarity between ground-truth and predicted images, demonstrated superior segmentation performance in the coronal and axial planes. This result can be explained by the fact that sagittal images exhibit more complex shapes than coronal and axial images. More data are required to improve segmentation performance in the sagittal plane. One way to achieve this is to increase the training data by overlapping images from the same patient in diverse ways.

This study was based on 99 CBCT scans from 99 patients. In comparison, Vinayahalingam et al. [7] used 162 CBCT scans from 81 patients, Le et al. [10] used 109 head scans from 109 patients, and Eşer et al. [8] used 2,000 CBCT images from 290 patients, employing considerably larger datasets to train their models. This difference in data quantity may be one of the key factors explaining the performance differences between this study and others. Future studies should gather larger quantities of CBCT data and use them for training to improve segmentation performance. An increase in data volume is expected to reduce errors, particularly in complex structures such as the sagittal plane, thereby improving overall performance.

The deep learning-based segmentation results revealed that images with clear and uniform quality were segmented without errors, whereas non-uniform images exhibited some errors. The greater the difference in image clarity and uniformity, the greater the likelihood of errors. In non-uniform images, such as those caused by variations in bone density within the mandibular condyle, segmentation errors, such as missing regions, occurred. To mitigate such errors, selecting the most uniform images when choosing overlapping regions from DICOM files is imperative. Additionally, when unnecessary regions outside the mandibular condyle are segmented in the predicted images, the predictions can be corrected using a program that exclusively retains the relevant parts of the mandibular condyle and removes extraneous areas. However, areas where erosion has occurred in the mandibular condyle will require additional editing to restore the region, and areas of the articular fossa that have erroneously been included owing to contact with the mandibular condyle will need to be manually removed during the editing process.
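
One plausible form of such a correction step, sketched below, keeps only the largest connected component of a predicted mask and discards detached extraneous regions. It uses scipy.ndimage; treating the largest component as the mandibular condyle is an assumption for illustration, not a description of the software actually used in this study.

import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    # Label connected regions in the binary mask, then keep the biggest.
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask  # nothing was segmented
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return labeled == (1 + int(np.argmax(sizes)))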

Previous 3D extraction studies have exclusively segmented and labeled the mandibular condyle from individual DICOM files obtained in the axial plane and subsequently trained the model based on these labels [6,7,10,15]. During this process, hundreds of labeled images are required for each patient, and processing data from 100 patients potentially requires more than 10,000 labeled images. Based on these trained datasets, 2D mandibular condyle images from each patient’s DICOM data are stacked to construct the final 3D image. In normal patients, the cross-section of the mandibular condyle typically assumes an oval shape, facilitating smooth 3D extraction. However, in cases where severe bony changes, such as erosion, have occurred on the articular surface of the mandibular condyle, accurate labeling can be challenging. This indicates that accurately extracting the mandibular condyle is difficult in cases of abnormal bone structure. In contrast, this study stacked multiple 2D images to construct a more accurate representation of the articular surface before performing the labeling process, enabling accurate surface identification, even in cases of abnormally deformed mandibular condyles.

In conclusion, this study demonstrates that 3D segmentation can be performed with relative accuracy in both normal and abnormally deformed mandibular condyles. The acquisition of more high-quality CBCT data for further training is expected to inspire the development of faster and more accurate software for mandibular condyle segmentation in the future. This segmentation method potentially enhances diagnostic accuracy and efficiency in analyzing CBCT images of TMJDs, especially in cases involving complex anatomical abnormalities.

CONFLICTS OF INTEREST

No potential conflict of interest relevant to this article was reported.

DATA AVAILABILITY STATEMENT

Owing to the sensitive nature of the research, the supporting data are not available for public access.

FUNDING

None.

AUTHOR CONTRIBUTIONS

Conceptualization: HJS, YGI. Data curation: HJS, JSL. Formal analysis: YTC, HJS, YGI. Methodology: YTC, HJS. Project administration: HJS, YGI. Visualization: HJS. Writing - original draft: YTC, HJS, JSL, YGI. Writing - review & editing: HJS, JSL, YGI.

Figures
Fig. 1. Research process workflow. CBCT, cone beam computed tomography; DICOM, Digital Imaging and Communications in Medicine; 2D, two-dimensional; 3D, three-dimensional.
Fig. 2. Two-dimensional (2D) images of each temporomandibular disorder cone beam computed tomography section and images reconstructed by overlapping the 2D images.
Fig. 3. Process of exclusively segmenting the mandibular condyle area from cone beam computed tomography (CBCT) data. (A) Three-dimensional (3D) CBCT image of the hard tissue, (B) 3D image created by selecting a specific region in the sagittal direction, (C) two-dimensional (2D) image of (B), (D) image of (C) with only the mandibular condyle region segmented, (E) 3D image of (D), (F) coronal 3D image, (G) 2D image of (F), (H) exclusive segmentation of the mandibular condyle region in (G), (I) 3D image observed in the coronal plane, (J) 2D image of (I), (K) exclusive image of the mandibular condyle segmented in (I), and (L) final 3D image of the mandibular condyle segmented in all three planes.
Fig. 4. Two-dimensional (2D) images constructed from cone beam computed tomography and labeled images exclusively segmented from the mandibular condyle.
Fig. 5. (A-F) Example of U-Net-based temporomandibular joint segmentation from overlapped two-dimensional sagittal, coronal, and axial images. Arrows a, b, and c indicate incorrectly segmented areas.
Fig. 6. Example of three-dimensional temporomandibular joint (TMJ) images segmented from Digital Imaging and Communications in Medicine files. (A) Normal condyle, (B) osteoarthritis and hypoplasia of the mandibular condyle: the condyle’s articular surface is rough and flat, (C) unclear separation of the mandibular condyle from the articular fossa owing to a narrowed TMJ space, (D) osteoarthritis of the mandibular condyle: erosive bony changes on the articular surface.
Tables

Table 1. Evaluation of U-Net model-based test image segmentation results

Metric              Sagittal     Coronal      Axial
Accuracy            0.58±0.10    0.65±0.07    0.73±0.04
Precision           0.82±0.07    0.91±0.05    0.87±0.07
Recall              0.86±0.13    0.93±0.08    0.98±0.02
Dice coefficient    0.83±0.08    0.92±0.05    0.92±0.04

Values are presented as mean±standard deviation.

References
  1. Klasser GD, Reyes MR. Orofacial pain: guidelines for assessment, diagnosis, and management. 7th ed. Quintessence Publishing Co., Inc.; 2023.
  2. Valesan LF, Da-Cas CD, Réus JC, et al. Prevalence of temporomandibular joint disorders: a systematic review and meta-analysis. Clin Oral Investig 2021;25:441-453.
  3. Pantoja LLQ, de Toledo IP, Pupo YM, et al. Prevalence of degenerative joint disease of the temporomandibular joint: a systematic review. Clin Oral Investig 2019;23:2475-2488.
  4. Ahmad M, Hollender L, Anderson Q, et al. Research diagnostic criteria for temporomandibular disorders (RDC/TMD): development of image analysis criteria and examiner reliability for image analysis. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009;107:844-860.
  5. Tsai CM, Wu FY, Chai JW, Chen MH, Kao CT. The advantage of cone-beam computerized tomography over panoramic radiography and temporomandibular joint quadruple radiography in assessing temporomandibular joint osseous degenerative changes. J Dent Sci 2020;15:153-162.
  6. Zhou Y, Li JP, Lv WC, Ma RH, Li G. Three-dimensional CBCT images registration method for TMJ based on reconstructed condyle and skull base. Dentomaxillofac Radiol 2018;47:20170421.
  7. Vinayahalingam S, Berends B, Baan F, et al. Deep learning for automated segmentation of the temporomandibular joint. J Dent 2023;132:104475.
  8. Eşer G, Duman ŞB, Bayrakdar İŞ, Çelik Ö. Classification of temporomandibular joint osteoarthritis on cone beam computed tomography images using artificial intelligence system. J Oral Rehabil 2023;50:758-766.
  9. Ma RH, Li G, Yin S, Sun Y, Li ZL, Ma XC. Quantitative assessment of condyle positional changes before and after orthognathic surgery based on fused 3D images from cone beam computed tomography. Clin Oral Investig 2020;24:2663-2672.
  10. Le C, Deleat-Besson R, Prieto J, et al. Automatic segmentation of mandibular ramus and condyles. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:2952-2955.
  11. Brosset S, Dumont M, Bianchi J, et al. 3D Auto-segmentation of mandibular condyles. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:1270-1273.
  12. Scarfe WC, Farman AG, Sukovic P. Clinical applications of cone-beam computed tomography in dental practice. J Can Dent Assoc 2006;72:75-80.
  13. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi A, eds. Medical image computing and computer-assisted intervention: MICCAI 2015. Springer, Cham; 2015. pp. 234-241.
  14. Azad R, Aghdam EK, Rauland A, et al. Medical image segmentation review: the success of U-Net. IEEE Trans Pattern Anal Mach Intell 2024;46:10076-10095.
  15. Zhang K, Li J, Ma R, Li G. An end-to-end segmentation network for the temporomandibular joints CBCT image based on 3D U-Net. Paper presented at: 2020 13th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI); 2020 Oct 17-19; Chengdu, China. pp. 664-668.

