A1 Original article in a scientific journal
Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images (2023)
Zhuang, M., Chen, Z., Wang, H., Tang, H., He, J., Qin, B., Yang, Y., Jin, X., Yu, M., Jin, B., Li, T., & Kettunen, L. (2023). Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images. International Journal of Computer Assisted Radiology and Surgery, 18(2), 379-394. https://doi.org/10.1007/s11548-022-02730-z
JYU authors or editors
Publication details
All authors or editors of the publication: Zhuang, Mingrui; Chen, Zhonghua; Wang, Hongkai; Tang, Hong; He, Jiang; Qin, Bobo; Yang, Yuxin; Jin, Xiaoxian; Yu, Mengzhu; Jin, Baitao; et al.
Journal or series: International Journal of Computer Assisted Radiology and Surgery
ISSN: 1861-6410
eISSN: 1861-6429
Publication year: 2023
Publication date: 01.09.2022
Volume: 18
Issue: 2
Pages: 379-394
Publisher: Springer
Country of publication: Germany
Language of publication: English
DOI: https://doi.org/10.1007/s11548-022-02730-z
Open access of the publication: openly available
Open access of the publication channel: partially open publication channel
Self-archived copy of the publication (JYX): https://jyx.jyu.fi/handle/123456789/82925
Abstract
Purpose
Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and to accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden.
Methods
We develop a contour-based annotation by iterative deep learning (AID) algorithm that uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour-based segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading.
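To make the iterative workflow concrete, the sketch below shows a generic AID-style loop in plain Python: a model proposes contours, a human proofreading step corrects the points that miss the true boundary, and the accepted annotations are fed back to retrain the model. This is a minimal toy illustration, not the authors' implementation: the contour representation (point lists), the single-offset "model", and the `proofread` tolerance are all placeholder assumptions standing in for the multi-scale contour network and the interactive adjustment tool described above.

```python
# Illustrative AID-style loop: predict -> human-correct -> retrain.
# All names and data structures here are hypothetical placeholders.

def predict_contour(model, image):
    """Placeholder inference: shift a template contour by the model's
    learned offset (stands in for the contour segmentation network)."""
    return [(x + model["offset"], y + model["offset"])
            for x, y in image["template"]]

def proofread(pred, truth, tol=1.0):
    """Stand-in for contour-based human adjustment: predicted points
    within `tol` of the true boundary are accepted; others are
    corrected by hand. Returns the corrected contour and edit count."""
    corrected, n_edits = [], 0
    for p, t in zip(pred, truth):
        if abs(p[0] - t[0]) <= tol and abs(p[1] - t[1]) <= tol:
            corrected.append(p)
        else:
            corrected.append(t)
            n_edits += 1
    return corrected, n_edits

def retrain(labelled):
    """Toy 'training': fit the single offset that best explains the
    proofread contours (stands in for network fine-tuning)."""
    errs = [c[0] - t[0]
            for image, contour in labelled
            for c, t in zip(contour, image["template"])]
    return {"offset": sum(errs) / len(errs)}

def aid_loop(images, rounds=3):
    model = {"offset": 5.0}            # poorly initialised model
    labelled, edits_per_round = [], []
    for _ in range(rounds):
        round_edits = 0
        for image in images:
            pred = predict_contour(model, image)
            contour, n = proofread(pred, image["truth"])
            labelled.append((image, contour))
            round_edits += n
        edits_per_round.append(round_edits)
        model = retrain(labelled)      # model improves between rounds
    return edits_per_round

# Tiny synthetic "dataset": each true boundary is the template shifted
# by 1 voxel, so the ideal model offset is 1.0.
images = [{"template": [(0, 0), (10, 0), (10, 10), (0, 10)],
           "truth":    [(1, 1), (11, 1), (11, 11), (1, 11)]}
          for _ in range(4)]
edits_per_round = aid_loop(images)
```

The point of the loop is visible in `edits_per_round`: the first round needs many manual corrections, while later rounds need few or none, which is what makes AID cheaper than annotating every case from scratch.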
Results
For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two compared methods, i.e. a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel-label representation. Compared to these methods, our approach considerably reduced annotation time and inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set.
Conclusion
Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape-learning ability and flexible boundary-adjustment function make it suitable for fast annotation of organ structures with regular shapes.
YSO keywords: deep learning; algorithms; medical technology
Free keywords: medical image annotation; deep learning; organ segmentation; interactive segmentation
Related organisations
Reporting to the Ministry of Education and Culture (OKM): Yes
Reporting year: 2023
Preliminary JUFO level: 1