A1 Original research article in a scientific journal
Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images (2023)


Zhuang, M., Chen, Z., Wang, H., Tang, H., He, J., Qin, B., Yang, Y., Jin, X., Yu, M., Jin, B., Li, T., & Kettunen, L. (2023). Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images. International Journal of Computer Assisted Radiology and Surgery, 18(2), 379-394. https://doi.org/10.1007/s11548-022-02730-z


JYU authors or editors


Publication details

All authors or editors: Zhuang, Mingrui; Chen, Zhonghua; Wang, Hongkai; Tang, Hong; He, Jiang; Qin, Bobo; Yang, Yuxin; Jin, Xiaoxian; Yu, Mengzhu; Jin, Baitao; et al.

Journal or series: International Journal of Computer Assisted Radiology and Surgery

ISSN: 1861-6410

eISSN: 1861-6429

Publication year: 2023

Publication date: 01.09.2022

Volume: 18

Issue number: 2

Article pages: 379-394

Publisher: Springer

Country of publication: Germany

Language of publication: English

DOI: https://doi.org/10.1007/s11548-022-02730-z

Open access of the publication: Openly available

Open access of the publication channel: Partially open publication channel

Self-archived in the repository (JYX): https://jyx.jyu.fi/handle/123456789/82925


Abstract

Purpose
Training deep neural networks usually require a large number of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning recently becomes popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, the current AID methods still suffer from long training time and high annotation burden.

Methods
We develop a contour-based annotation by iterative deep learning (AID) algorithm which uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve the boundary detection accuracy. We also developed a contour-based human-intervention method to facilitate easy adjustments of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading.
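The core idea above — representing an organ by its boundary rather than by a dense voxel label map — can be illustrated with a minimal sketch. This is a generic 2D example of converting a binary mask into a contour (boundary-pixel) representation; it is not the authors' network or annotation tool, and the function name is hypothetical.

```python
# Illustrative sketch only: derive a boundary (contour) representation
# from a binary pixel/voxel mask, as a generic analogue of the
# contour-vs-voxel-label distinction described in the abstract.

def mask_to_contour(mask):
    """Return the boundary pixels of a 2D binary mask.

    A pixel belongs to the contour if it is foreground (1) and at least
    one of its 4-neighbours is background (0) or outside the image.
    """
    rows, cols = len(mask), len(mask[0])
    contour = []
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    contour.append((r, c))
                    break
    return contour

if __name__ == "__main__":
    # A filled 4x4 square: the contour keeps 12 of the 16 foreground labels.
    square = [[1] * 4 for _ in range(4)]
    print(len(mask_to_contour(square)))  # prints 12
```

The contour is a much sparser description than the full label map, which is why boundary adjustments (dragging a contour point) can be cheaper for a human operator than repainting voxels.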

Results
For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on a voxel label representation. Compared to these methods, our approach considerably reduced annotation time and inter-rater variability. Our contour detection network also outperformed the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set.

Conclusion
Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape-learning ability and flexible boundary-adjustment function make it suitable for fast annotation of organ structures with regular shapes.


YSO keywords: deep learning; algorithms; medical technology

Free keywords: medical image annotation; deep learning; organ segmentation; interactive segmentation


Related organisations


Reported to the Ministry of Education and Culture (OKM): Yes

Reporting year: 2023

Preliminary JUFO level: 1


Last updated: 26.03.2024 at 20:56