Automatic tumor segmentation for GTV and CTV, with direct radiation dose calculation from MRI and CT images.

Project Leaders

Junqiang Ma

In radiotherapy, tumor target regions are typically delineated manually by medical professionals, and the radiation dose is then calculated from these annotations. To improve anatomical clarity and minimize patient radiation exposure, MRI images are often registered and fused with CT images for treatment planning. Our group is developing a novel multi-modal fusion technique that automates target-area segmentation and supports radiation dose calculation, and we apply artificial intelligence (AI) methods to evaluate patients' radiation therapy outcomes and prognosis. This research can improve the accuracy and efficiency of radiotherapy planning, reduce the burden on medical professionals, and tailor radiation doses precisely to each patient's needs. Ultimately, AI-driven segmentation combined with multi-modal imaging has the potential to transform radiotherapy treatment planning and contribute to better patient outcomes in oncology.


Segmentation in radiotherapy

Project Leaders

Junqiang Ma

OAR segmentation: Our organ-at-risk segmentation method significantly improves accuracy across pediatric and adult datasets, delineating both small, complex structures and larger organs well, with notable Dice score gains in challenging regions such as the uterocervix, prostate, and gallbladder. However, performance disparities between pediatric and adult groups reveal fairness challenges in cross-dataset prediction, highlighting the need to account for population-specific characteristics during model development. These findings underscore the importance of fairness, robustness, and generalizability in AI-driven anatomical segmentation for diverse demographics, and further research is needed to optimize clinical applicability.
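As an illustration, the per-group fairness gap described above can be quantified by aggregating Dice scores separately for each demographic group. The minimal sketch below shows one way to do this; the function and variable names are illustrative, not taken from our project's codebase.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def group_dice_gap(cases):
    """cases: iterable of (pred_mask, truth_mask, group_label),
    e.g. group_label in {"pediatric", "adult"} (hypothetical labels)."""
    scores = {}
    for pred, truth, group in cases:
        scores.setdefault(group, []).append(dice(pred, truth))
    means = {g: float(np.mean(s)) for g, s in scores.items()}
    # The gap between the best- and worst-served group is one simple
    # fairness indicator for cross-dataset predictions.
    gap = max(means.values()) - min(means.values())
    return means, gap
```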

NPC segmentation: Our study introduces a novel multi-modal, multi-task deep learning framework designed for simultaneous segmentation of organs at risk (OARs), gross tumor volume (GTV), and clinical target volume (CTV). By jointly learning anatomical structures and tumor targets, the model leverages anatomical prior knowledge embedded in CT data to enhance tumor segmentation accuracy. We further incorporate gradient surgery techniques to mitigate conflicts between segmentation tasks, optimizing gradient interactions to boost model robustness. Experimental results show that this approach delivers substantial performance improvements in multi-modal, multi-task medical image segmentation, highlighting its potential for clinical applications.
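For readers unfamiliar with gradient surgery, the sketch below illustrates the general PCGrad-style projection for two tasks (here labeled as an OAR loss and a tumor-target loss). This is a generic example of the technique, not our framework's actual implementation.

```python
import torch

def project_conflicting(g_a: torch.Tensor, g_b: torch.Tensor) -> torch.Tensor:
    """If g_a conflicts with g_b (negative dot product), remove the
    component of g_a along g_b; otherwise leave g_a unchanged."""
    dot = torch.dot(g_a, g_b)
    if dot < 0:
        g_a = g_a - dot / g_b.norm().pow(2) * g_b
    return g_a

def surgery_step(model, loss_oar, loss_tumor, optimizer):
    params = [p for p in model.parameters() if p.requires_grad]
    g_oar = torch.autograd.grad(loss_oar, params, retain_graph=True)
    g_tum = torch.autograd.grad(loss_tumor, params)
    flatten = lambda gs: torch.cat([g.reshape(-1) for g in gs])
    fa, fb = flatten(g_oar), flatten(g_tum)
    # Project each task's gradient away from the other's conflicting
    # direction, then sum the surgically adjusted gradients.
    merged = project_conflicting(fa, fb) + project_conflicting(fb, fa)
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = merged[offset:offset + n].view_as(p)
        offset += n
    optimizer.step()
    optimizer.zero_grad()
```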

Project Example


Image synthesis in radiotherapy

Project Leaders

Junqiang Ma

MR-CT Generation: Our unified framework performs MR-to-CT synthesis across all anatomical regions in a single comprehensive model. This eliminates the need for multiple region-specific approaches while maintaining high image quality and anatomical accuracy throughout the entire body. The technology delivers consistent performance across diverse patient populations and scanning protocols, enhancing clinical workflows in radiation therapy planning, PET-MR attenuation correction, and multimodal image registration while reducing acquisition costs and radiation exposure.
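As background, MR-to-CT synthesis models of this kind are commonly trained on paired data with a combined L1 + adversarial objective. The following is a minimal, generic PyTorch sketch of one such training step; the network definitions and names are assumed for illustration and are not taken from our model.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, mr, ct, lam=100.0):
    """One pix2pix-style step: `generator`, `discriminator`, and the
    optimizers are assumed to be defined elsewhere; `mr` and `ct` are
    paired (B, 1, H, W) image batches."""
    # --- discriminator update: real CT pairs vs. synthetic CT pairs ---
    fake_ct = generator(mr)
    d_real = discriminator(torch.cat([mr, ct], dim=1))
    d_fake = discriminator(torch.cat([mr, fake_ct.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- generator update: fool the discriminator + stay close to true CT ---
    d_fake = discriminator(torch.cat([mr, fake_ct], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + lam * F.l1_loss(fake_ct, ct))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```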

CBCT-CT Generation: Our approach performs CBCT-to-CT conversion across all anatomical regions within a unified computational framework, delivering whole-body image transformation with strong consistency and precision and without requiring multiple specialized models. It transforms low-dose, artifact-prone CBCT acquisitions into diagnostic-quality CT images with high fidelity, from cranial structures to pelvic anatomy. The technology supports adaptive radiotherapy planning, enhances dose calculation accuracy, and facilitates longitudinal patient monitoring, maintaining consistent performance across various CBCT systems and acquisition parameters while reducing patient imaging burden.
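Synthetic-CT quality is typically reported in Hounsfield units (HU). The sketch below shows two commonly used metrics, mean absolute error within the patient body and PSNR; the names and the assumed HU dynamic range are illustrative, not our evaluation code.

```python
import numpy as np

def mae_hu(synth: np.ndarray, ref: np.ndarray, body_mask: np.ndarray) -> float:
    """Mean absolute error in HU, restricted to voxels inside the body."""
    return float(np.abs(synth - ref)[body_mask.astype(bool)].mean())

def psnr_hu(synth: np.ndarray, ref: np.ndarray, hu_range: float = 4095.0) -> float:
    """PSNR over the full volume, assuming a 12-bit HU dynamic range."""
    mse = float(((synth - ref) ** 2).mean())
    return 10.0 * np.log10(hu_range ** 2 / mse)
```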

Project Example


Dose distribution prediction in radiotherapy

Project Leaders

Junqiang Ma

We introduced the concept of a dose channel into the traditional Generative Adversarial Network (GAN) architecture, constructing the Beam Channel Generative Adversarial Network (Bc-GAN). Our network overcomes a limitation of previous dose distribution prediction models, which could only handle a single radiotherapy technique (traditional GANs, for instance, concentrated on one specific technique). It achieves accurate predictions on mixed radiotherapy datasets covering multiple techniques, such as Intensity-Modulated Radiotherapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT). This lays a solid foundation for automated radiotherapy planning and is expected to propel the field toward more intelligent and precise treatment.
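To illustrate the beam/dose-channel idea, the sketch below shows how beam-geometry information can be rasterized into an extra image channel and concatenated with the CT and structure masks as generator input, letting a single model handle mixed IMRT/VMAT data. The shapes and names are illustrative, not the actual Bc-GAN code.

```python
import torch

def build_generator_input(ct, oar_masks, ptv_mask, beam_channel):
    """ct: (B, 1, H, W); oar_masks: (B, K, H, W); ptv_mask: (B, 1, H, W);
    beam_channel: (B, 1, H, W) raster encoding beam angles / arc geometry.
    The extra channel tells the generator which delivery technique and
    beam configuration produced the training dose."""
    return torch.cat([ct, oar_masks, ptv_mask, beam_channel], dim=1)

# Example: a case with 6 OARs -> the generator sees 1 + 6 + 1 + 1 = 9 channels.
x = build_generator_input(torch.randn(2, 1, 128, 128),
                          torch.randn(2, 6, 128, 128),
                          torch.randn(2, 1, 128, 128),
                          torch.randn(2, 1, 128, 128))
assert x.shape == (2, 9, 128, 128)
```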

We have proposed a novel Swin-UMamba-Channel prediction model for predicting the radiotherapy dose distribution in patients with left-sided breast cancer after total mastectomy. By integrating the anatomical positions of organs with beam-angle data, the model significantly enhances prediction accuracy. It helps physicists rapidly generate dose-volume histogram (DVH) curves and shortens the treatment-planning cycle, while also providing valuable reference data for subsequent plan optimization and quality control, opening new avenues for the application of deep learning in radiotherapy.
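For context, a cumulative DVH can be computed directly from a dose grid and a binary structure mask. The minimal sketch below shows the computation; the bin width and names are illustrative.

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bin_gy: float = 0.1):
    """Return (dose_bins, volume_fraction), where volume_fraction[i] is the
    fraction of the structure receiving at least dose_bins[i] Gy.
    `mask` must select a non-empty structure."""
    d = dose[mask.astype(bool)]
    bins = np.arange(0.0, d.max() + bin_gy, bin_gy)
    volume = np.asarray([(d >= b).mean() for b in bins])
    return bins, volume
```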

Project Example

Fig. 1 Comparison of predicted and true dose on randomly sampled cases; first row: CT images; second row: true dose maps; third row: predicted dose maps.

Fig. 2 Comparison of predicted dose and original dose in two cases. The left panels (A, C) show the predicted dose images, and the right panels (B, D) show the original dose images.


Application of Deep Convolutional Networks to Automated Image Segmentation in Radiotherapy of Tumor Patients

Project Leaders

Hui Xie

Partner Organisations

The Affiliated Hospital of Xiangnan University

To automate the delineation of tissues and organs in oncological radiotherapy by integrating deep learning techniques, specifically fully convolutional networks (FCN) and atrous convolution (AC).

A dataset of 120 chest CT image sets from patients was selected, with normal-organ structures outlined by radiologists. Of these, 70 sets (8,512 axial slices) were allocated for training, 30 sets (5,525 axial slices) for validation, and 20 sets (3,602 axial slices) for testing. Five established FCN models were chosen and combined with AC to develop three enhanced deep convolutional networks, termed Dilation Fully Convolutional Networks (D-FCN); an example of the underlying dilated-convolution idea follows below. Each of the eight networks was fine-tuned and trained independently on the training set. The validation set was used during training to assess each network's ability to automatically identify and delineate organs, determining the optimal segmentation model for each network. Finally, the test set was used to evaluate the optimally trained models by comparing the Dice similarity coefficient between automated delineation and manual delineation by physicians.
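For illustration, an atrous (dilated) convolution enlarges the receptive field without additional downsampling, which is the core idea behind augmenting an FCN into a D-FCN. The PyTorch sketch below compares a standard 3x3 convolution with a dilated counterpart; the layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Standard 3x3 convolution: 3x3 effective receptive field.
standard = nn.Conv2d(256, 256, kernel_size=3, padding=1)
# Atrous convolution with dilation 2: same parameter count, but a 5x5
# effective receptive field, capturing wider anatomical context.
atrous = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 256, 64, 64)
# Both preserve spatial resolution, so the dilated layer can replace the
# standard one inside an FCN without changing the output map size.
assert standard(x).shape == atrous(x).shape
```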

Following comprehensive fine-tuning and training on the training set, all networks in this study performed effectively in automated image segmentation. Among them, the D-FCN 4s model demonstrated superior performance in the test experiment, achieving an overall Dice score of 87.11%, with scores of 87.11%, 97.22%, 97.16%, 89.92%, and 70.51% for the left lung, right lung, pericardium, trachea, and esophagus, respectively.

We proposed an improved D-FCN model. The experimental results indicate that this network model can significantly enhance the accuracy of automatic segmentation in thoracic radiotherapy images and is capable of performing automatic segmentation of multiple targets simultaneously.

Project Example

Comparison between the automated segmentation of several test cases and the radiologist's manual delineation. Each row shows one test case; the left column was delineated by physicians and the right column automatically by the D-FCN 4s model.