
Original Article

Progress in Medical Physics 2024; 35(4): 205-213

Published online December 31, 2024

https://doi.org/10.14316/pmp.2024.35.4.205

Copyright © Korean Society of Medical Physics.

Institution-Specific Autosegmentation for Personalized Radiotherapy Protocols

Wonyoung Cho1, Gyu Sang Yoo2, Won Dong Kim2, Yerim Kim1, Jin Sung Kim1,3, Byung Jun Min2

1Oncosoft Inc., Seoul, 2Department of Radiation Oncology, Chungbuk National University Hospital, Chungbuk National University College of Medicine, Cheongju, 3Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, Korea

Correspondence to: Byung Jun Min
(bjmin@cbnuh.or.kr)
Tel: 82-43-269-7498
Fax: 82-43-269-6210

Jin Sung Kim
(jinsung@yuhs.ac)
Tel: 82-2-2228-8118
Fax: 82-2-2227-7823

Received: November 25, 2024; Revised: December 15, 2024; Accepted: December 17, 2024

This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Purpose: This study explores the potential of artificial intelligence (AI) in optimizing radiotherapy protocols for personalized cancer treatment. Specifically, it investigates the role of AI-based segmentation tools in improving accuracy and efficiency across various anatomical regions.
Methods: A dataset of 500 anonymized patient computed tomography scans from Chungbuk National University Hospital was used to develop and validate AI models for segmenting organs-at-risk. The models were tailored for five anatomical regions: head and neck, chest, abdomen, breast, and pelvis. Performance was evaluated using Dice Similarity Coefficient (DSC), Mean Surface Distance, and the 95th Percentile Hausdorff Distance (HD95).
Results: The AI models achieved high segmentation accuracy for large, well-defined structures such as the brain, lungs, and liver, with DSC values exceeding 0.95 in many cases. However, challenges were observed for smaller or complex structures, including the optic chiasm and rectum, with instances of segmentation failure and infinity values for HD95. These findings highlight the variability in performance depending on anatomical complexity and structure size.
Conclusions: AI-based segmentation tools demonstrate significant potential to streamline radiotherapy workflows, reduce inter-observer variability, and enhance treatment accuracy. Despite challenges with smaller structures, the integration of AI enables dynamic, patient-specific adaptations to anatomical changes, contributing to more precise and effective cancer treatments. Future work should focus on refining models for anatomically complex structures and validating these methods in diverse clinical settings.

Keywords: AI-driven segmentation, Personalized cancer treatment, Adaptive radiotherapy, Transfer learning

Introduction

Advances in radiotherapy have significantly improved cancer treatment outcomes by enabling precise delivery of radiation to malignant tissues while minimizing damage to surrounding healthy structures [1]. Despite these advancements, traditional radiotherapy protocols often rely on manual processes that are time-consuming, prone to variability, and limited in their adaptability to dynamic patient-specific changes [2,3]. Artificial intelligence (AI) has emerged as a transformative tool, with the potential to revolutionize radiotherapy by optimizing protocols at every stage, from imaging and segmentation to treatment planning and adaptive delivery [4-6].

The optimization of radiotherapy protocols using AI centers on improving accuracy, efficiency, and personalization. At the heart of this transformation lies AI-driven segmentation, where automation of tumor and organ-at-risk (OAR) delineation enhances workflow efficiency and reduces inter-observer variability [7-9]. AI-powered segmentation reduces inter-observer variability and accelerates the planning process, providing clinicians with highly accurate and consistent delineations [10-14]. Beyond segmentation, AI contributes to dose prediction, adaptive planning, and real-time tracking of anatomical changes, creating opportunities for fully patient-specific and adaptive radiotherapy protocols [5,15].

This study explores the role of AI in radiotherapy protocol optimization, focusing on the development and validation of automated segmentation models tailored to various anatomical regions. Using data from Chungbuk National University Hospital, the study evaluates the performance of AI-based segmentation models and highlights their potential to streamline clinical workflows while enhancing treatment accuracy. While segmentation serves as a foundational element of this research, it is positioned within the broader context of personalized radiotherapy, where AI integration extends to treatment plan adjustments and quality assurance.

The ultimate goal of AI-based radiotherapy protocol optimization is to achieve a paradigm shift in cancer treatment. By leveraging AI’s capabilities to automate and enhance critical processes, clinicians can deliver highly personalized, adaptive treatments that respond dynamically to patient-specific anatomical and biological factors. This study underscores the significance of segmentation as a cornerstone of this transformation and provides a basis for advancing AI-driven solutions in radiotherapy.

Materials and Methods

1. Study design

This study aimed to optimize and validate AI-based radiotherapy protocols, focusing on improved OAR segmentation for personalized cancer treatment. OncoStudio (OncoSoft Inc.), an AI-powered auto-segmentation solution, was used. Transfer learning was applied to the head and neck, chest, abdomen, and pelvic regions, while a custom segmentation model was developed to verify applicability to adaptive radiotherapy for breast cases, in which the breast is delineated as a clinical target volume (CTV).

2. Data collection and composition

This study utilized 500 anonymized computed tomography (CT) scans collected from Chungbuk National University Hospital’s (CBNUH) clinical database. The dataset comprised cases evenly distributed across five anatomical regions: head and neck (100 cases), chest (100 cases), breast (100 cases), abdomen (100 cases), and pelvis (100 cases), reflecting diverse anatomical and pathological variations (Table 1). Notably, the breast cases included specific CTV contour shapes, necessitating the development of a custom segmentation model. This diverse dataset provided a robust foundation for evaluating AI-based segmentation performance across varied anatomical structures and clinical contexts.

Table 1. Dataset composition: regions, number of cases, and key anatomical structures segmented

Region | Number of cases | Key structures segmented
Head and neck | 100 | Bone mandible, brain, esophagus, eyes, submandibular glands, lenses, optic chiasm, optic nerves, parotids, spinal cord
Chest | 100 | Aorta, heart, lungs
Abdomen | 100 | Kidneys, liver, spleen
Breast | 100 | Breasts
Pelvis | 100 | Bladder, femurs, rectum

3. Data preprocessing

To ensure compatibility with the AI system and improve segmentation performance, all CT images underwent a structured preprocessing pipeline. The intensity values were first normalized to account for variations in acquisition settings. This involved clipping the intensity range to −1,024 to 5,000 HU and performing percentile normalization, where the 1st and 99th percentiles were calculated and used to scale the values between 0 and 1. Additionally, noise reduction was applied using Gaussian filtering to enhance image clarity while preserving critical anatomical structures. OAR boundaries were manually delineated by certified radiation oncologists to establish ground truth labels for supervised learning and validation.
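The preprocessing pipeline described above can be sketched as follows. This is a minimal illustration using NumPy and SciPy; the Gaussian sigma is an assumed value, as the paper does not specify the filter parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_ct(volume_hu):
    """Illustrative CT preprocessing: HU clipping, percentile
    normalization, and Gaussian denoising (sigma=1.0 is an assumption)."""
    # Clip to the HU window described in the text.
    clipped = np.clip(volume_hu, -1024.0, 5000.0)
    # Percentile normalization: scale the 1st-99th percentile range to [0, 1].
    p1, p99 = np.percentile(clipped, [1, 99])
    normalized = np.clip((clipped - p1) / (p99 - p1), 0.0, 1.0)
    # Light Gaussian smoothing for noise reduction while preserving structure.
    return gaussian_filter(normalized, sigma=1.0)
```

Because the Gaussian filter is a convex combination of neighboring voxels, the output stays within the normalized [0, 1] range.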

4. AI model development

For the head and neck, chest, abdomen, and pelvis regions, a pre-trained modified U-Net model was fine-tuned using transfer learning. The base model, provided by the vendor, was pre-trained on large-scale, commercially and publicly available datasets and was further refined using the CBNUH dataset to align with our institutional protocols.

A separate segmentation model was developed for breast cases due to the shape of the breast being intended for use as a CTV rather than normal tissue. This model was trained from scratch using the breast-specific dataset and incorporated specialized adjustments to handle distinct contouring patterns. Specifically, we defined the breast region to include the lymph node areas where tumor spread is possible and applied various data augmentations during the training process to enhance model robustness and adaptability.
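The text notes that various data augmentations were applied when training the breast model but does not enumerate them. The sketch below shows two common choices, a random left-right flip and mild intensity jitter, purely as hypothetical examples; the `augment` function and its parameter ranges are assumptions, not the study's actual pipeline.

```python
import numpy as np

def augment(image, mask, rng):
    """Hypothetical training-time augmentations: a random left-right flip
    (applied jointly to image and label) and mild global intensity jitter."""
    if rng.random() < 0.5:
        image = image[:, ::-1].copy()  # flip along the left-right axis
        mask = mask[:, ::-1].copy()    # keep the label mask aligned
    scale = rng.uniform(0.9, 1.1)      # assumed jitter range
    return np.clip(image * scale, 0.0, 1.0), mask
```

Applying identical geometric transforms to image and mask keeps the ground truth consistent with the augmented input.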

The dataset was divided into three subsets for model training and evaluation: 80% was used to train the models, 10% was used for hyperparameter tuning and performance monitoring, and the remaining 10% was used to evaluate final model performance.
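A minimal sketch of such an 80/10/10 split follows; the shuffle seed is an arbitrary choice for reproducibility, not a value taken from the study.

```python
import random

def split_cases(case_ids, seed=42):
    """Shuffle case identifiers and split them 80/10/10 into
    train / validation / test subsets."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])
```

For the 500-case dataset this yields 400 training, 50 validation, and 50 test cases, consistent with the 50-case test set described under validation.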

5. Performance evaluation

The segmentation performance of the models was assessed using three metrics. The Dice Similarity Coefficient (DSC) was used to measure the overlap accuracy between predicted and ground truth segmentations. Mean Surface Distance (MSD) was used to evaluate the average deviation between predicted and ground truth contours. The 95th Percentile Hausdorff Distance (HD95) was used to assess worst-case segmentation boundary errors.
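These three metrics can be sketched for boolean voxel masks as follows. Distances here are in voxel units unless scaled by the CT spacing, and returning infinity when a mask is empty mirrors the infinite HD95 failure cases reported in the results. This is an illustrative implementation, not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice Similarity Coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def surface_distances(pred, gt):
    """Symmetric surface-to-surface distances in voxel units.
    Returns an empty array if either mask is empty (segmentation failure)."""
    if not pred.any() or not gt.any():
        return np.array([])
    surf_p = pred ^ binary_erosion(pred)  # boundary voxels of the prediction
    surf_g = gt ^ binary_erosion(gt)      # boundary voxels of the ground truth
    d_to_g = distance_transform_edt(~surf_g)[surf_p]  # pred surface -> gt surface
    d_to_p = distance_transform_edt(~surf_p)[surf_g]  # gt surface -> pred surface
    return np.concatenate([d_to_g, d_to_p])

def msd_hd95(pred, gt):
    """Mean Surface Distance and 95th Percentile Hausdorff Distance."""
    d = surface_distances(pred, gt)
    if d.size == 0:
        return np.inf, np.inf  # no surface to compare: report infinity
    return d.mean(), np.percentile(d, 95)
```

Multiplying the distance transforms by the voxel spacing (via the `sampling` argument of `distance_transform_edt`) would convert the results to millimeters.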

6. Validation and quality assurance

AI-generated segmentation results for the test set, comprising 50 cases evenly distributed across the five anatomical regions, were reviewed by radiation oncologists to ensure clinical acceptability. Special attention was given to the breast cases to validate the custom model.

Results

The automated segmentation performance was evaluated for five anatomical regions with 500 cases (Table 2, Fig. 1). The chest region achieved the highest median DSC (0.973 [IQR 0.087]), while the pelvis and head and neck regions exhibited the lowest (DSC 0.872 [IQR 0.144] and 0.878 [IQR 0.120], respectively). The abdomen and breast regions demonstrated median DSC values of 0.934 (IQR 0.027) and 0.945 (IQR 0.023), respectively. For MSD, the head and neck region had the lowest median deviation (0.278 mm [IQR 0.228 mm]), whereas the pelvis recorded the highest (0.676 mm [IQR 1.422 mm]). The chest, abdomen, and breast regions showed intermediate median MSD values of 0.536 mm (IQR 1.525 mm), 0.437 mm (IQR 0.196 mm), and 0.463 mm (IQR 0.292 mm), respectively. In terms of HD95, the chest and pelvis regions exhibited the largest values and variability (4.123 mm [IQR 4.551 mm] and 3.803 mm [IQR 11.761 mm]), while the head and neck region achieved the smallest median value (1.000 mm [IQR 1.000 mm]). The abdomen and breast regions recorded similar HD95 medians of 2.000 mm, with IQRs of 0.788 mm and 1.300 mm, respectively.

Table 2. Segmentation performance metrics: median and IQR values for DSC, MSD, and HD95 across different anatomical structures and regions

Region/Organ | DSC median | DSC IQR | MSD median (mm) | MSD IQR (mm) | HD95 median (mm) | HD95 IQR (mm)
Head and neck | 0.878 | 0.120 | 0.278 | 0.228 | 1.000 | 1.000
Bone mandible | 0.889 | 0.053 | 0.423 | 0.202 | 1.876 | 1.236
Brain | 0.983 | 0.002 | 0.226 | 0.049 | 1.000 | 0.000
Esophagus | 0.894 | 0.051 | 0.180 | 0.092 | 1.000 | 0.121
Left eye | 0.904 | 0.018 | 0.290 | 0.111 | 1.000 | 0.000
Right eye | 0.899 | 0.014 | 0.323 | 0.059 | 1.000 | 0.071
Left submandibular gland | 0.887 | 0.069 | 0.343 | 0.314 | 1.321 | 0.414
Right submandibular gland | 0.886 | 0.066 | 0.319 | 0.253 | 1.000 | 0.763
Left lens | 0.787 | 0.105 | 0.153 | 0.082 | 1.000 | 0.000
Right lens | 0.786 | 0.115 | 0.193 | 0.186 | 1.000 | 0.041
Optic chiasm | 0.595 | 0.106 | 0.390 | 0.118 | 2.050 | 0.636
Left optic nerve | 0.741 | 0.110 | 0.220 | 0.142 | 1.414 | 1.000
Right optic nerve | 0.667 | 0.304 | 0.399 | 0.470 | 2.236 | 2.303
Left parotid gland | 0.936 | 0.045 | 0.275 | 0.258 | 1.000 | 0.856
Right parotid gland | 0.902 | 0.067 | 0.536 | 0.380 | 2.236 | 3.596
Spinal cord | 0.838 | 0.069 | 0.320 | 0.293 | 1.000 | 1.263
Chest | 0.973 | 0.087 | 0.536 | 1.525 | 4.123 | 4.551
Aorta | 0.741 | 0.076 | 3.008 | 1.152 | 25.020 | 4.848
Heart | 0.899 | 0.021 | 1.653 | 0.567 | 6.000 | 2.500
Lungs | 0.976 | 0.006 | 0.417 | 0.144 | 3.188 | 2.104
Left lung | 0.976 | 0.008 | 0.352 | 0.114 | 2.449 | 1.643
Right lung | 0.976 | 0.005 | 0.481 | 0.173 | 3.927 | 2.566
Abdomen | 0.934 | 0.027 | 0.437 | 0.196 | 2.000 | 0.788
Left kidney | 0.933 | 0.013 | 0.385 | 0.149 | 1.414 | 0.866
Right kidney | 0.935 | 0.020 | 0.415 | 0.234 | 1.877 | 1.118
Liver | 0.960 | 0.016 | 0.511 | 0.126 | 2.000 | 0.229
Spleen | 0.921 | 0.025 | 0.557 | 0.303 | 2.449 | 1.826
Breast | 0.945 | 0.023 | 0.463 | 0.292 | 2.000 | 1.300
Left breast | 0.951 | 0.012 | 0.363 | 0.099 | 1.383 | 0.707
Right breast | 0.932 | 0.038 | 0.642 | 0.198 | 2.449 | 0.678
Pelvis | 0.872 | 0.144 | 0.676 | 1.422 | 3.803 | 11.761
Bladder | 0.866 | 0.103 | 0.487 | 0.688 | 3.000 | 5.082
Left femur | 0.966 | 0.142 | 0.145 | 1.587 | 1.000 | 16.879
Right femur | 0.966 | 0.131 | 0.137 | 1.542 | 1.000 | 15.325
Rectum | 0.823 | 0.079 | 0.837 | 0.882 | 5.000 | 5.855

Figure 1. Boxplots of segmentation performance metrics across regions (head and neck, chest, abdomen, breast, pelvis), highlighting accuracy (DSC), surface agreement (MSD), and boundary deviations (HD95). DSC, Dice Similarity Coefficient; MSD, Mean Surface Distance; HD95, 95th Percentile Hausdorff Distance.

1. Head and neck region

In the head and neck region, the model achieved high segmentation accuracy for critical structures such as the brain (DSC 0.983 [IQR 0.002], MSD 0.226 mm [IQR 0.049 mm]). However, smaller structures like the optic chiasm presented challenges (DSC 0.595 [IQR 0.106], MSD 0.390 mm [IQR 0.118 mm]). HD95 values were substantial for certain organs, specifically the optic nerves and parotid glands, with instances of infinite values indicating no overlap between prediction and ground truth in some cases.

2. Chest region

The segmentation of larger organs in the chest region, such as the lungs, yielded excellent results (DSC 0.976 [IQR 0.006], MSD 0.417 mm [0.144 mm]). However, the aorta segmentation displayed lower accuracy (DSC 0.741 [0.076]) and higher HD95 values (25.020 mm [4.848 mm]), suggesting challenges in accurately delineating vascular structures.

3. Abdomen region

In the abdomen, the segmentation performance was consistent across organs such as the kidneys and liver. The liver exhibited the highest accuracy (DSC 0.960 [IQR 0.016], MSD 0.511 mm [IQR 0.126 mm]). Conversely, the spleen demonstrated slightly lower accuracy (DSC 0.921 [IQR 0.025], MSD 0.557 mm [IQR 0.303 mm]). HD95 values remained moderate across most organs, with the largest being the spleen's median of 2.449 mm (IQR 1.826 mm).

4. Breast region

The segmentation performance for the left breast achieved a DSC of 0.951 (IQR 0.012) and an MSD of 0.363 mm (IQR 0.099 mm). Results for the right breast were slightly lower (DSC 0.932 [IQR 0.038], MSD 0.642 mm [IQR 0.198 mm]). Median HD95 values were 1.383 mm for the left breast and 2.449 mm for the right.

5. Pelvic region

In the pelvic region, segmentation results were mixed. The femurs showed high accuracy (DSC 0.966 [IQR 0.137], MSD 0.141 mm [IQR 1.564 mm]), while structures such as the rectum were more challenging (DSC 0.823 [IQR 0.079], MSD 0.837 mm [IQR 0.882 mm]). HD95 values were particularly high for some structures, such as the rectum (HD95 5.000 mm [IQR 5.855 mm]) and the bladder (3.000 mm [IQR 5.082 mm]).

Discussion

This study demonstrates the potential of AI-based automated segmentation tools to enhance radiotherapy planning by improving accuracy and efficiency across diverse anatomical regions. High segmentation accuracy for large, well-defined structures, such as the brain, lungs, and liver, validates the reliability of AI-driven models in delineating critical organs for precise radiotherapy. These results provide a strong foundation for optimizing treatment protocols, facilitating precise dose distribution, and reducing inter-observer variability.

The integration of AI-driven segmentation into radiotherapy workflows directly supports the development of accurate and adaptive treatment protocols. By automating segmentation, AI reduces manual workload and ensures consistency, enabling dynamic adjustments to treatment plans in response to patient-specific anatomical changes during therapy. This capability is central to achieving fully personalized radiotherapy.

Despite these strengths, challenges persist with smaller or anatomically complex structures, such as the optic chiasm and rectum. Lower segmentation accuracy and instances of HD95 values of infinity highlight limitations caused by imaging constraints, limited training data, and the inherent difficulty of delineating low-contrast or irregularly shaped regions. Improving segmentation for these structures is critical, as accurate delineation impacts dose delivery and organ preservation. Incorporating multimodal imaging, such as magnetic resonance imaging (MRI) or positron emission tomography (PET), could address these issues by providing better contrast and functional data [16-18]. Expanding the validation process to include multicenter datasets and diverse patient populations would also help ensure the generalizability and robustness of AI models in varied clinical settings [19,20].

Variability observed in breast and pelvic segmentation reflects the importance of adapting AI models to institutional practices and anatomical complexities. While the performance of the vendor-provided and fine-tuned models was comparable in regions like the chest, abdomen, and pelvis, a significant improvement was observed in the head and neck region following fine-tuning (Fig. 2). This highlights the benefit of transfer learning for regions with complex anatomical structures and higher inter-observer variability. For instance, anatomical distortions caused by abdominal compression devices significantly impacted bladder and rectum segmentation. Such findings highlight the need for AI models capable of accounting for dynamic anatomical changes, further advancing adaptive treatment protocols.

Figure 2. Comparison of Dice Similarity Coefficient (DSC) values across various anatomical regions (head and neck, chest, abdomen, breast, and pelvis) between the default vendor-provided segmentation model (green) and the fine-tuned model (orange). The boxplots represent the distribution of segmentation accuracy for each region, highlighting improvements achieved with fine-tuning.

A fully automated workflow system would not only streamline processes but also enhance treatment precision by minimizing human variability across the entire radiotherapy workflow. For example, automated segmentation could feed directly into dose planning algorithms, while real-time adaptive adjustments could leverage AI-driven tracking of anatomical changes. These innovations would enable continuous, dynamic personalization of treatment protocols, further advancing radiotherapy precision and efficiency. Achieving this vision will require advances in algorithm interoperability, infrastructure optimization, and integration of multimodal imaging for comprehensive and automated decision-making.

This study has several limitations that should be acknowledged. First, transfer learning allowed us to achieve segmentation results that aligned with our institution's protocols; however, contouring in complex areas still required manual corrections. This highlights the need to investigate whether additional data or alternative algorithms could further improve accuracy in such challenging regions. Second, the AI models used in this study rely solely on CT imaging and must infer OARs and target areas from anatomical structure alone. While CT imaging provides valuable information, its limitations could be addressed by incorporating multimodal data, such as MRI or PET imaging, or by leveraging advanced technologies such as large language models applied to electronic medical records or electronic health records for anatomical and prescription understanding. Such approaches could enhance the accuracy and robustness of AI predictions. Third, this study does not include tumor segmentation models. Tumor segmentation remains a challenging task due to the heterogeneity of tumor shapes and textures, particularly when relying solely on CT imaging. This limitation highlights the need for multimodal imaging approaches to improve segmentation accuracy and enable more comprehensive modeling. Finally, there are technical limitations associated with the integration of AI-based treatment planning systems into clinical workflows. These systems, while promising, are not yet fully compatible with existing radiotherapy workflows, creating challenges for their adoption. However, ongoing advancements in AI algorithms and workflow customization are expected to overcome these integration barriers in the near future.

Conclusions

This study demonstrates the potential of AI-driven segmentation tools to transform radiotherapy by enhancing accuracy, efficiency, and personalization across diverse anatomical regions. While reliable for large, well-defined structures, challenges with smaller or complex regions emphasize the need for multimodal imaging, additional data, and algorithmic advancements. Transfer learning successfully aligned outputs with institutional protocols, though manual corrections remain necessary for complex contours. Integrating AI into radiotherapy workflows can streamline processes and enable personalized care, but addressing challenges such as generalizability and workflow compatibility is essential. Continued advancements in AI and multimodal imaging will be critical to achieving fully automated and adaptive cancer treatment.

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00208829) and by the Ministry of Health & Welfare, Republic of Korea (No. RS-2023-KH136094).

The data used in this study were obtained from Chungbuk National University Hospital and consist of anonymized patient imaging records. Due to the sensitive nature of hospital data and ethical considerations, access to these datasets is restricted. Researchers seeking access to the data must provide a justified request and obtain approval from Chungbuk National University Hospital's Institutional Review Board. For inquiries, please contact Byung Jun Min (Email: bjmin@cbnuh.or.kr).

Conceptualization: Wonyoung Cho, Byung Jun Min, Jin Sung Kim. Data curation: Wonyoung Cho, Byung Jun Min. Formal analysis: Wonyoung Cho, Byung Jun Min. Supervision: Jin Sung Kim, Byung Jun Min. Writing – original draft: Wonyoung Cho, Byung Jun Min. Writing – review & editing: Wonyoung Cho, Gyu Sang Yoo, Won Dong Kim, Yerim Kim, Jin Sung Kim, Byung Jun Min.

This study was conducted in accordance with the ethical guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Chungbuk National University Hospital (IRB Approval Number: 2024-08-006-001). The requirement to obtain informed consent was waived.

References

1. Chen HHW, Kuo MT. Improving radiotherapy in cancer treatment: promises and challenges. Oncotarget. 2017;8:62742-62758.
2. Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, et al. Automated contouring and planning in radiation therapy: what is 'clinically acceptable'? Diagnostics (Basel). 2023;13:667.
3. Dona Lemus OM, Cao M, Cai B, Cummings M, Zheng D. Adaptive radiotherapy: next-generation radiotherapy. Cancers (Basel). 2024;16:1206.
4. Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, et al. Artificial intelligence in radiation therapy. IEEE Trans Radiat Plasma Med Sci. 2022;6:158-181.
5. Jeong C, Goh Y, Kwak J. Challenges and opportunities to integrate artificial intelligence in radiation oncology: a narrative review. Ewha Med J. 2024;47:e49.
6. Krishnamurthy R, Mummudi N, Goda JS, Chopra S, Heijmen B, Swamidas J. Using artificial intelligence for optimization of the processes and resource utilization in radiotherapy. JCO Glob Oncol. 2022;8:e2100393.
7. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, et al. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res. 2024;65:1-9.
8. Wong J, Huang V, Wells D, Giambattista J, Giambattista J, Kolbeck C, et al. Implementation of deep learning-based auto-segmentation for radiotherapy planning structures: a workflow study at two cancer centers. Radiat Oncol. 2021;16:101.
9. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, et al. Deep learning for autosegmentation for radiotherapy treatment planning: state-of-the-art and novel perspectives. Strahlenther Onkol. 2024. doi: 10.1007/s00066-024-02262-2.
10. Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, et al. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol. 2023;13:1213068.
11. Hoque SMH, Pirrone G, Matrone F, Donofrio A, Fanetti G, Caroli A, et al. Clinical use of a commercial artificial intelligence-based software for autocontouring in radiation therapy: geometric performance and dosimetric impact. Cancers (Basel). 2023;15:5735.
12. Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, et al. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun. 2022;13:6566.
13. Smine Z, Poeta S, De Caluwé A, Desmet A, Garibaldi C, Brou Boni K, et al. Automated segmentation in planning-CT for breast cancer radiotherapy: a review of recent advances. Radiother Oncol. 2025;202:110615.
14. Warren S, Richmond N, Wowk A, Wilkinson M, Wright K. AI segmentation as a quality improvement tool in radiotherapy planning for breast cancer. IPEM Transl. 2023;6-8:100020.
15. Kouhen F, Gouach HE, Saidi K, Dahbi Z, Errafiy N, Elmarrachi H, et al. Synergizing expertise and technology: the artificial intelligence revolution in radiotherapy for personalized and precise cancer treatment. Gulf J Oncolog. 2024;1:94-102.
16. Ren J, Eriksen JG, Nijkamp J, Korreman SS. Comparing different CT, PET and MRI multi-modality image combinations for deep learning-based head and neck tumor segmentation. Acta Oncol. 2021;60:1399-1406.
17. Song J, Zheng J, Li P, Lu X, Zhu G, Shen P. An effective multimodal image fusion method using MRI and PET for Alzheimer's disease diagnosis. Front Digit Health. 2021;3:637386.
18. Basu S, Singhal S, Singh D. A systematic literature review on multimodal medical image fusion. Multimed Tools Appl. 2024;83:15845-15913.
19. Yang J, Soltan AAS, Clifton DA. Machine learning generalizability across healthcare settings: insights from multi-site COVID-19 screening. NPJ Digit Med. 2022;5:69.
20. Tripathi S, Gabriel K, Dheer S, Parajuli A, Augustin AI, Elahi A, et al. Understanding biases and disparities in radiology AI datasets: a review. J Am Coll Radiol. 2023;20:836-841.

Article

Original Article

Progress in Medical Physics 2024; 35(4): 205-213

Published online December 31, 2024 https://doi.org/10.14316/pmp.2024.35.4.205

Copyright © Korean Society of Medical Physics.

Institution-Specific Autosegmentation for Personalized Radiotherapy Protocols

Wonyoung Cho1 , Gyu Sang Yoo2 , Won Dong Kim2 , Yerim Kim1 , Jin Sung Kim1,3 , Byung Jun Min2

1Oncosoft Inc., Seoul, 2Department of Radiation Oncology, Chungbuk National University Hospital, Chungbuk National University College of Medicine, Cheongju, 3Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, Korea

Correspondence to:Byung Jun Min
(bjmin@cbnuh.or.kr)
Tel: 82-43-269-7498
Fax: 82-43-269-6210

Jin Sung Kim
(jinsung@yuhs.ac)
Tel: 82-2-2228-8118
Fax: 82-2-2227-7823

Received: November 25, 2024; Revised: December 15, 2024; Accepted: December 17, 2024

This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Purpose: This study explores the potential of artificial intelligence (AI) in optimizing radiotherapy protocols for personalized cancer treatment. Specifically, it investigates the role of AI-based segmentation tools in improving accuracy and efficiency across various anatomical regions.
Methods: A dataset of 500 anonymized patient computed tomography scans from Chungbuk National University Hospital was used to develop and validate AI models for segmenting organs-atrisk. The models were tailored for five anatomical regions: head and neck, chest, abdomen, breast, and pelvis. Performance was evaluated using Dice Similarity Coefficient (DSC), Mean Surface Distance, and the 95th Percentile Hausdorff Distance (HD95).
Results: The AI models achieved high segmentation accuracy for large, well-defined structures such as the brain, lungs, and liver, with DSC values exceeding 0.95 in many cases. However, challenges were observed for smaller or complex structures, including the optic chiasm and rectum, with instances of segmentation failure and infinity values for HD95. These findings highlight the variability in performance depending on anatomical complexity and structure size.
Conclusions: AI-based segmentation tools demonstrate significant potential to streamline radiotherapy workflows, reduce inter-observer variability, and enhance treatment accuracy. Despite challenges with smaller structures, the integration of AI enables dynamic, patient-specific adaptations to anatomical changes, contributing to more precise and effective cancer treatments. Future work should focus on refining models for anatomically complex structures and validating these methods in diverse clinical settings.

Keywords: AI-driven segmentation, Personalized cancer treatment, Adaptive radiotherapy, Transfer learning

Introduction

Advances in radiotherapy have significantly improved cancer treatment outcomes by enabling precise delivery of radiation to malignant tissues while minimizing damage to surrounding healthy structures [1]. Despite these advancements, traditional radiotherapy protocols often rely on manual processes that are time-consuming, prone to variability, and limited in their adaptability to dynamic patient-specific changes [2,3]. Artificial intelligence (AI) has emerged as a transformative tool, with the potential to revolutionize radiotherapy by optimizing protocols at every stage, from imaging and segmentation to treatment planning and adaptive delivery [4-6].

The optimization of radiotherapy protocols using AI centers on improving accuracy, efficiency, and personalization. At the heart of this transformation lies AI-driven segmentation, where automation of tumor and organ-at-risk (OAR) delineation enhances workflow efficiency and reduces inter-observer variability [7-9]. AI-powered segmentation reduces inter-observer variability and accelerates the planning process, providing clinicians with highly accurate and consistent delineations [10-14]. Beyond segmentation, AI contributes to dose prediction, adaptive planning, and real-time tracking of anatomical changes, creating opportunities for fully patient-specific and adaptive radiotherapy protocols [5,15].

This study explores the role of AI in radiotherapy protocol optimization, focusing on the development and validation of automated segmentation models tailored to various anatomical regions. Using data from Chungbuk National University Hospital, the study evaluates the performance of AI-based segmentation models and highlights their potential to streamline clinical workflows while enhancing treatment accuracy. While segmentation serves as a foundational element of this research, it is positioned within the broader context of personalized radiotherapy, where AI integration extends to treatment plan adjustments and quality assurance.

The ultimate goal of AI-based radiotherapy protocol optimization is to achieve a paradigm shift in cancer treatment. By leveraging AI’s capabilities to automate and enhance critical processes, clinicians can deliver highly personalized, adaptive treatments that respond dynamically to patient-specific anatomical and biological factors. This study underscores the significance of segmentation as a cornerstone of this transformation and provides a basis for advancing AI-driven solutions in radiotherapy.

Materials and Methods

1. Study design

This study aimed to optimize and validate AI-based radiotherapy protocols, focusing on improved OAR segmentation for personalized cancer treatment. OncoStudio (Oncosoft Inc.), an AI-powered auto-segmentation solution, was used. Transfer learning was applied for the head and neck, chest, abdomen, and pelvic regions, while a custom segmentation model was developed for the breast cases, in which the breast is contoured as a clinical target volume (CTV), to verify applicability to adaptive radiotherapy.

2. Data collection and composition

This study utilized 500 anonymized computed tomography (CT) scans collected from Chungbuk National University Hospital’s (CBNUH) clinical database. The dataset comprised cases evenly distributed across five anatomical regions: head and neck (100 cases), chest (100 cases), breast (100 cases), abdomen (100 cases), and pelvis (100 cases), reflecting diverse anatomical and pathological variations (Table 1). Notably, the breast cases included specific CTV contour shapes, necessitating the development of a custom segmentation model. This diverse dataset provided a robust foundation for evaluating AI-based segmentation performance across varied anatomical structures and clinical contexts.

Table 1. Dataset composition: regions, number of cases, and key anatomical structures segmented.

| Region | Number of cases | Key structures segmented |
|---|---|---|
| Head and neck | 100 | Bone mandible, brain, esophagus, eyes, submandibular glands, lenses, optic chiasm, optic nerves, parotids, spinal cord |
| Chest | 100 | Aorta, heart, lungs |
| Abdomen | 100 | Kidneys, liver, spleen |
| Breast | 100 | Breasts |
| Pelvis | 100 | Bladder, femurs, rectum |


3. Data preprocessing

To ensure compatibility with the AI system and improve segmentation performance, all CT images underwent a structured preprocessing pipeline. The intensity values were first normalized to account for variations in acquisition settings. This involved clipping the intensity range to −1,024 to 5,000 HU and performing percentile normalization, where the 1st and 99th percentiles were calculated and used to scale the values between 0 and 1. Additionally, noise reduction was applied using Gaussian filtering to enhance image clarity while preserving critical anatomical structures. OAR boundaries were manually delineated by certified radiation oncologists to establish ground truth labels for supervised learning and validation.
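The intensity-normalization step described above can be sketched as follows. This is a minimal NumPy illustration, not the vendor's implementation; in particular, saturating values outside the percentile band at 0 and 1 is an assumption, and the Gaussian noise-reduction step is only indicated in a comment.

```python
import numpy as np

def preprocess_ct(volume_hu):
    """Normalize a CT volume: clip to [-1024, 5000] HU, then scale the
    1st..99th intensity percentiles to the [0, 1] range."""
    v = np.clip(volume_hu, -1024, 5000).astype(np.float64)
    p1, p99 = np.percentile(v, [1, 99])
    v = (v - p1) / (p99 - p1)
    # Saturate the tails so every voxel lies in [0, 1] (assumed behavior).
    # Gaussian noise reduction (e.g., scipy.ndimage.gaussian_filter) would
    # be applied as a separate step, as described in the text.
    return np.clip(v, 0.0, 1.0)
```

A clinical pipeline would additionally resample to a common voxel spacing; that step is not described in the text and is omitted here.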

4. AI model development

For the head and neck, chest, abdomen, and pelvis regions, a pre-trained modified U-Net model was fine-tuned using transfer learning. The base model, provided by the vendor, was pre-trained on large-scale, commercially and publicly available datasets and was further refined using the CBNUH dataset to align with our institutional protocols.

A separate segmentation model was developed for the breast cases because the breast contour is used as a CTV rather than as a normal-tissue structure. This model was trained from scratch on the breast-specific dataset and incorporated specialized adjustments to handle the distinct contouring patterns. Specifically, we defined the breast region to include the lymph node areas to which tumor spread is possible and applied various data augmentations during training to enhance model robustness and adaptability.
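The specific augmentations are not stated in the text; as one hedged illustration, spatial flips and intensity perturbations of the kind commonly used for CT segmentation could look like this (all parameter ranges are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, mask):
    """Randomly augment a normalized 2D CT slice and its label mask.
    Geometric transforms are applied to both; intensity jitter to the image only."""
    if rng.random() < 0.5:                  # random left-right flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    scale = rng.uniform(0.9, 1.1)           # simulate gain variation (assumed range)
    shift = rng.uniform(-0.05, 0.05)        # simulate intensity offset (assumed range)
    image = np.clip(image * scale + shift, 0.0, 1.0)
    return image, mask
```

Flipping both image and mask together preserves the correspondence between pixels and labels, while the intensity jitter leaves the ground-truth contour untouched.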

The dataset was divided into three subsets for model training and evaluation: 80% was used to train the models, 10% for hyperparameter tuning and performance monitoring, and the remaining 10% to evaluate final model performance.
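With 500 cases, this 80/10/10 split yields 400 training, 50 validation, and 50 test cases. A minimal sketch of such a split (the random assignment and seed are assumptions for illustration):

```python
import numpy as np

def split_indices(n_cases, seed=0):
    """Randomly partition case indices into 80% train, 10% validation, 10% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_cases)
    n_train = int(0.8 * n_cases)
    n_val = int(0.1 * n_cases)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(500)   # 400 / 50 / 50 cases
```

In practice the split would typically be stratified by anatomical region so each subset keeps the even 100-cases-per-region composition; that refinement is omitted here.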

5. Performance evaluation

Segmentation performance was assessed using three metrics. The Dice Similarity Coefficient (DSC) measured the overlap between predicted and ground-truth segmentations; the Mean Surface Distance (MSD) evaluated the average deviation between predicted and ground-truth contours; and the 95th Percentile Hausdorff Distance (HD95) quantified worst-case segmentation boundary errors.
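On binary masks, the three metrics can be computed as in this minimal NumPy sketch. It works in pixel units on 2D masks for brevity; a clinical implementation would operate on 3D volumes and account for voxel spacing.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask):
    """Boundary pixels: foreground pixels with at least one background 4-neighbour."""
    mask = mask.astype(bool)
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return np.argwhere(mask & ~interior)

def nearest_distances(pa, pb):
    """For each surface point in pa, Euclidean distance to the nearest point in pb."""
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return d.min(axis=1)

def msd_hd95(a, b):
    """Mean Surface Distance and 95th-percentile Hausdorff Distance.
    Returns (inf, inf) when either mask is empty -- mirroring the infinite
    HD95 values reported for segmentation failures."""
    pa, pb = surface_points(a), surface_points(b)
    if len(pa) == 0 or len(pb) == 0:
        return np.inf, np.inf
    d = np.concatenate([nearest_distances(pa, pb), nearest_distances(pb, pa)])
    return d.mean(), np.percentile(d, 95)
```

Identical masks yield DSC 1.0 and zero surface distances, while a completely missed structure produces the infinite HD95 values discussed in the Results.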

6. Validation and quality assurance

AI-generated segmentation results for the test set, comprising 50 cases evenly distributed across the five anatomical regions, were reviewed by radiation oncologists to ensure clinical acceptability. Special attention was given to the breast cases to validate the custom model.

Results

The automated segmentation performance was evaluated for five anatomical regions with 500 cases (Table 2, Fig. 1). The chest region achieved the highest median DSC (0.973 [IQR 0.087]), while the pelvis exhibited the lowest (0.872 [IQR 0.144]), closely followed by the head and neck region (0.878 [IQR 0.120]). The abdomen and breast regions demonstrated median DSC values of 0.934 (IQR 0.027) and 0.945 (IQR 0.023), respectively. For MSD, the head and neck region had the lowest deviation (0.278 mm [IQR 0.228 mm]) and the pelvis the highest (0.676 mm [IQR 1.422 mm]); the chest, abdomen, and breast regions showed intermediate medians of 0.536 mm (IQR 1.525 mm), 0.437 mm (IQR 0.196 mm), and 0.463 mm (IQR 0.292 mm), respectively. In terms of HD95, the chest region exhibited the largest median (4.123 mm [IQR 4.551 mm]) and the pelvis the widest spread (3.803 mm [IQR 11.761 mm]), while the head and neck region achieved the smallest median value (1.000 mm [IQR 1.000 mm]). The abdomen and breast regions recorded the same HD95 median of 2.000 mm, with IQRs of 0.788 mm and 1.300 mm, respectively.

Table 2. Segmentation performance metrics: median and IQR values for DSC, MSD, and HD95 across different anatomical structures and regions.

| Region / Organ | DSC median | DSC IQR | MSD median (mm) | MSD IQR (mm) | HD95 median (mm) | HD95 IQR (mm) |
|---|---|---|---|---|---|---|
| Head and neck | 0.878 | 0.120 | 0.278 | 0.228 | 1.000 | 1.000 |
| Bone mandible | 0.889 | 0.053 | 0.423 | 0.202 | 1.876 | 1.236 |
| Brain | 0.983 | 0.002 | 0.226 | 0.049 | 1.000 | 0.000 |
| Esophagus | 0.894 | 0.051 | 0.180 | 0.092 | 1.000 | 0.121 |
| Left eye | 0.904 | 0.018 | 0.290 | 0.111 | 1.000 | 0.000 |
| Right eye | 0.899 | 0.014 | 0.323 | 0.059 | 1.000 | 0.071 |
| Left submandibular gland | 0.887 | 0.069 | 0.343 | 0.314 | 1.321 | 0.414 |
| Right submandibular gland | 0.886 | 0.066 | 0.319 | 0.253 | 1.000 | 0.763 |
| Left lens | 0.787 | 0.105 | 0.153 | 0.082 | 1.000 | 0.000 |
| Right lens | 0.786 | 0.115 | 0.193 | 0.186 | 1.000 | 0.041 |
| Optic chiasm | 0.595 | 0.106 | 0.390 | 0.118 | 2.050 | 0.636 |
| Left optic nerve | 0.741 | 0.110 | 0.220 | 0.142 | 1.414 | 1.000 |
| Right optic nerve | 0.667 | 0.304 | 0.399 | 0.470 | 2.236 | 2.303 |
| Left parotid gland | 0.936 | 0.045 | 0.275 | 0.258 | 1.000 | 0.856 |
| Right parotid gland | 0.902 | 0.067 | 0.536 | 0.380 | 2.236 | 3.596 |
| Spinal cord | 0.838 | 0.069 | 0.320 | 0.293 | 1.000 | 1.263 |
| Chest | 0.973 | 0.087 | 0.536 | 1.525 | 4.123 | 4.551 |
| Aorta | 0.741 | 0.076 | 3.008 | 1.152 | 25.020 | 4.848 |
| Heart | 0.899 | 0.021 | 1.653 | 0.567 | 6.000 | 2.500 |
| Lungs | 0.976 | 0.006 | 0.417 | 0.144 | 3.188 | 2.104 |
| Left lung | 0.976 | 0.008 | 0.352 | 0.114 | 2.449 | 1.643 |
| Right lung | 0.976 | 0.005 | 0.481 | 0.173 | 3.927 | 2.566 |
| Abdomen | 0.934 | 0.027 | 0.437 | 0.196 | 2.000 | 0.788 |
| Left kidney | 0.933 | 0.013 | 0.385 | 0.149 | 1.414 | 0.866 |
| Right kidney | 0.935 | 0.020 | 0.415 | 0.234 | 1.877 | 1.118 |
| Liver | 0.960 | 0.016 | 0.511 | 0.126 | 2.000 | 0.229 |
| Spleen | 0.921 | 0.025 | 0.557 | 0.303 | 2.449 | 1.826 |
| Breast | 0.945 | 0.023 | 0.463 | 0.292 | 2.000 | 1.300 |
| Left breast | 0.951 | 0.012 | 0.363 | 0.099 | 1.383 | 0.707 |
| Right breast | 0.932 | 0.038 | 0.642 | 0.198 | 2.449 | 0.678 |
| Pelvis | 0.872 | 0.144 | 0.676 | 1.422 | 3.803 | 11.761 |
| Bladder | 0.866 | 0.103 | 0.487 | 0.688 | 3.000 | 5.082 |
| Left femur | 0.966 | 0.142 | 0.145 | 1.587 | 1.000 | 16.879 |
| Right femur | 0.966 | 0.131 | 0.137 | 1.542 | 1.000 | 15.325 |
| Rectum | 0.823 | 0.079 | 0.837 | 0.882 | 5.000 | 5.855 |


Figure 1. Boxplots of segmentation performance metrics across regions (head and neck, chest, abdomen, breast, pelvis), highlighting accuracy (DSC), surface agreement (MSD), and boundary deviations (HD95). DSC, Dice Similarity Coefficient; MSD, Mean Surface Distance; HD95, 95th Percentile Hausdorff Distance.

1. Head and neck region

In the head and neck region, the model achieved high segmentation accuracy for critical structures such as the brain (DSC 0.983 [IQR 0.002], MSD 0.226 mm [IQR 0.049 mm]). However, smaller structures like the optic chiasm presented challenges (DSC 0.595 [0.106], MSD 0.390 mm [0.118 mm]). HD95 values were substantial for certain organs, specifically the optic nerves and parotid glands, with instances of infinite values indicating predictions that had no overlap with the ground truth.

2. Chest region

The segmentation of larger organs in the chest region, such as the lungs, yielded excellent results (DSC 0.976 [IQR 0.006], MSD 0.417 mm [0.144 mm]). However, the aorta segmentation displayed lower accuracy (DSC 0.741 [0.076]) and higher HD95 values (25.020 mm [4.848 mm]), suggesting challenges in accurately delineating vascular structures.

3. Abdomen region

In the abdomen, the segmentation performance was consistent across organs such as the kidneys and liver. The liver exhibited the highest accuracy (DSC 0.960 [IQR 0.016], MSD 0.511 mm [0.126 mm]). Conversely, the spleen demonstrated slightly lower accuracy (DSC 0.921 [0.025], MSD 0.557 mm [0.303 mm]). HD95 values remained moderate across most organs, with the largest observed for the spleen (median 2.449 mm [IQR 1.826 mm]).

4. Breast region

The segmentation performance for the left breast achieved a DSC of 0.951 (IQR 0.012) and an MSD of 0.363 mm (IQR 0.099 mm). For the right breast, results were slightly lower (DSC 0.932 [IQR 0.038], MSD 0.642 mm [0.198 mm]). Median HD95 values were 1.383 mm for the left breast and 2.449 mm for the right.

5. Pelvic region

In the pelvic region, segmentation results were mixed. The femurs showed high accuracy (DSC 0.966 [IQR 0.137], MSD 0.141 mm [1.564 mm]), while structures such as the rectum were more challenging (DSC 0.823 [0.079], MSD 0.837 mm [0.882 mm]). HD95 values were particularly high for some structures, such as the rectum (5.000 mm [IQR 5.855 mm]).

Discussion

This study demonstrates the potential of AI-based automated segmentation tools to enhance radiotherapy planning by improving accuracy and efficiency across diverse anatomical regions. High segmentation accuracy for large, well-defined structures, such as the brain, lungs, and liver, validates the reliability of AI-driven models in delineating critical organs for precise radiotherapy. These results provide a strong foundation for optimizing treatment protocols, facilitating precise dose distribution, and reducing inter-observer variability.

The integration of AI-driven segmentation into radiotherapy workflows directly supports the development of accurate and adaptive treatment protocols. By automating segmentation, AI reduces manual workload and ensures consistency, enabling dynamic adjustments to treatment plans in response to patient-specific anatomical changes during therapy. This capability is central to achieving fully personalized radiotherapy.

Despite these strengths, challenges persist with smaller or anatomically complex structures, such as the optic chiasm and rectum. Lower segmentation accuracy and instances of HD95 values of infinity highlight limitations caused by imaging constraints, limited training data, and the inherent difficulty of delineating low-contrast or irregularly shaped regions. Improving segmentation for these structures is critical, as accurate delineation impacts dose delivery and organ preservation. Incorporating multimodal imaging, such as magnetic resonance imaging (MRI) or positron emission tomography (PET), could address these issues by providing better contrast and functional data [16-18]. Expanding the validation process to include multicenter datasets and diverse patient populations would also help ensure the generalizability and robustness of AI models in varied clinical settings [19,20].

Variability observed in breast and pelvic segmentation reflects the importance of adapting AI models to institutional practices and anatomical complexities. While the performance of the vendor-provided and fine-tuned models was comparable in regions like the chest, abdomen, and pelvis, a significant improvement was observed in the head and neck region following fine-tuning (Fig. 2). This highlights the benefit of transfer learning for regions with complex anatomical structures and higher inter-observer variability. For instance, anatomical distortions caused by abdominal compression devices significantly impacted bladder and rectum segmentation. Such findings highlight the need for AI models capable of accounting for dynamic anatomical changes, further advancing adaptive treatment protocols.

Figure 2. Comparison of Dice Similarity Coefficient (DSC) values across various anatomical regions (head and neck, chest, abdomen, breast, and pelvis) between the default vendor-provided segmentation model (green) and the fine-tuned model (orange). The boxplots represent the distribution of segmentation accuracy for each region, highlighting improvements achieved with fine-tuning.

A fully automated workflow system would not only streamline processes but also enhance treatment precision by minimizing human variability across the entire radiotherapy workflow. For example, automated segmentation could feed directly into dose planning algorithms, while real-time adaptive adjustments could leverage AI-driven tracking of anatomical changes. These innovations would enable continuous, dynamic personalization of treatment protocols, further advancing radiotherapy precision and efficiency. Achieving this vision will require advances in algorithm interoperability, infrastructure optimization, and integration of multimodal imaging for comprehensive and automated decision-making.

This study has several limitations. First, transfer learning allowed us to achieve segmentation results that aligned with our institution's protocols; however, contouring in complex areas still required manual corrections. This highlights the need to investigate whether additional data or alternative algorithms could further improve accuracy in such challenging regions. Second, the AI models used in this study rely solely on CT imaging and must infer OARs or target areas from anatomy alone. While CT imaging provides valuable information, its limitations could be addressed by incorporating multimodal data, such as MRI or PET imaging, or by leveraging advanced technologies such as large language models applied to electronic medical records or electronic health records for anatomical and prescription understanding. Such approaches could enhance the accuracy and robustness of AI predictions. Third, this study does not include tumor segmentation models. Tumor segmentation remains challenging owing to the heterogeneity of tumor shapes and textures, particularly when relying solely on CT imaging, which again underscores the need for multimodal imaging approaches to improve segmentation accuracy and enable more comprehensive modeling. Finally, there are technical limitations associated with integrating AI-based treatment planning systems into clinical workflows. These systems, while promising, are not yet fully compatible with existing radiotherapy workflows, creating challenges for adoption; however, ongoing advances in AI algorithms and workflow customization are expected to overcome these integration barriers in the near future.

Conclusions

This study demonstrates the potential of AI-driven segmentation tools to transform radiotherapy by enhancing accuracy, efficiency, and personalization across diverse anatomical regions. While reliable for large, well-defined structures, challenges with smaller or complex regions emphasize the need for multimodal imaging, additional data, and algorithmic advancements. Transfer learning successfully aligned outputs with institutional protocols, though manual corrections remain necessary for complex contours. Integrating AI into radiotherapy workflows can streamline processes and enable personalized care, but addressing challenges such as generalizability and workflow compatibility is essential. Continued advancements in AI and multimodal imaging will be critical to achieving fully automated and adaptive cancer treatment.

Funding

This research was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00208829) and by the Ministry of Health & Welfare, Republic of Korea (No. RS-2023-KH136094).

Conflicts of Interest

The authors have nothing to disclose.

Availability of Data and Materials

The data used in this study were obtained from Chungbuk National University Hospital and consist of anonymized patient imaging records. Due to the sensitive nature of hospital data and ethical considerations, access to these datasets is restricted. Researchers seeking access to the data must provide a justified request and obtain approval from Chungbuk National University Hospital's Institutional Review Board. For inquiries, please contact Byung Jun Min (Email: bjmin@cbnuh.or.kr).

Author Contributions

Conceptualization: Wonyoung Cho, Byung Jun Min, Jin Sung Kim. Data curation: Wonyoung Cho, Byung Jun Min. Formal analysis: Wonyoung Cho, Byung Jun Min. Supervision: Jin Sung Kim, Byung Jun Min. Writing – original draft: Wonyoung Cho, Byung Jun Min. Writing – review & editing: Wonyoung Cho, Gyu Sang Yoo, Won Dong Kim, Yerim Kim, Jin Sung Kim, Byung Jun Min.

Ethics Approval and Consent to Participate

This study was conducted in accordance with the ethical guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Chungbuk National University Hospital (IRB Approval Number: 2024-08-006-001). The requirement to obtain informed consent was waived.

References

1. Chen HHW, Kuo MT. Improving radiotherapy in cancer treatment: promises and challenges. Oncotarget. 2017;8:62742-62758.
2. Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, et al. Automated contouring and planning in radiation therapy: what is 'clinically acceptable'? Diagnostics (Basel). 2023;13:667.
3. Dona Lemus OM, Cao M, Cai B, Cummings M, Zheng D. Adaptive radiotherapy: next-generation radiotherapy. Cancers (Basel). 2024;16:1206.
4. Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, et al. Artificial intelligence in radiation therapy. IEEE Trans Radiat Plasma Med Sci. 2022;6:158-181.
5. Jeong C, Goh Y, Kwak J. Challenges and opportunities to integrate artificial intelligence in radiation oncology: a narrative review. Ewha Med J. 2024;47:e49.
6. Krishnamurthy R, Mummudi N, Goda JS, Chopra S, Heijmen B, Swamidas J. Using artificial intelligence for optimization of the processes and resource utilization in radiotherapy. JCO Glob Oncol. 2022;8:e2100393.
7. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, et al. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res. 2024;65:1-9.
8. Wong J, Huang V, Wells D, Giambattista J, Giambattista J, Kolbeck C, et al. Implementation of deep learning-based auto-segmentation for radiotherapy planning structures: a workflow study at two cancer centers. Radiat Oncol. 2021;16:101.
9. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, et al. Deep learning for autosegmentation for radiotherapy treatment planning: state-of-the-art and novel perspectives. Strahlenther Onkol. 2024. doi: 10.1007/s00066-024-02262-2.
10. Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, et al. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol. 2023;13:1213068.
11. Hoque SMH, Pirrone G, Matrone F, Donofrio A, Fanetti G, Caroli A, et al. Clinical use of a commercial artificial intelligence-based software for autocontouring in radiation therapy: geometric performance and dosimetric impact. Cancers (Basel). 2023;15:5735.
12. Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, et al. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun. 2022;13:6566.
13. Smine Z, Poeta S, De Caluwé A, Desmet A, Garibaldi C, Brou Boni K, et al. Automated segmentation in planning-CT for breast cancer radiotherapy: a review of recent advances. Radiother Oncol. 2025;202:110615.
14. Warren S, Richmond N, Wowk A, Wilkinson M, Wright K. AI segmentation as a quality improvement tool in radiotherapy planning for breast cancer. IPEM Transl. 2023;6-8:100020.
15. Kouhen F, Gouach HE, Saidi K, Dahbi Z, Errafiy N, Elmarrachi H, et al. Synergizing expertise and technology: the artificial intelligence revolution in radiotherapy for personalized and precise cancer treatment. Gulf J Oncolog. 2024;1:94-102.
16. Ren J, Eriksen JG, Nijkamp J, Korreman SS. Comparing different CT, PET and MRI multi-modality image combinations for deep learning-based head and neck tumor segmentation. Acta Oncol. 2021;60:1399-1406.
17. Song J, Zheng J, Li P, Lu X, Zhu G, Shen P. An effective multimodal image fusion method using MRI and PET for Alzheimer's disease diagnosis. Front Digit Health. 2021;3:637386.
18. Basu S, Singhal S, Singh D. A systematic literature review on multimodal medical image fusion. Multimed Tools Appl. 2024;83:15845-15913.
19. Yang J, Soltan AAS, Clifton DA. Machine learning generalizability across healthcare settings: insights from multi-site COVID-19 screening. NPJ Digit Med. 2022;5:69.
20. Tripathi S, Gabriel K, Dheer S, Parajuli A, Augustin AI, Elahi A, et al. Understanding biases and disparities in radiology AI datasets: a review. J Am Coll Radiol. 2023;20:836-841.