A1 Journal article (refereed)
Annotation-efficient training of medical image segmentation network based on scribble guidance in difficult areas (2024)


Zhuang, M., Chen, Z., Yang, Y., Kettunen, L., & Wang, H. (2024). Annotation-efficient training of medical image segmentation network based on scribble guidance in difficult areas. International Journal of Computer Assisted Radiology and Surgery, 19(1), 87-96. https://doi.org/10.1007/s11548-023-02931-0


JYU authors or editors


Publication details

All authors or editors: Zhuang, Mingrui; Chen, Zhonghua; Yang, Yuxin; Kettunen, Lauri; Wang, Hongkai

Journal or series: International Journal of Computer Assisted Radiology and Surgery

ISSN: 1861-6410

eISSN: 1861-6429

Publication year: 2024

Publication date: 26/05/2023

Volume: 19

Issue number: 1

Pages range: 87-96

Publisher: Springer

Publication country: Germany

Publication language: English

DOI: https://doi.org/10.1007/s11548-023-02931-0

Publication open access: Not open

Publication channel open access


Abstract

Purpose: Training deep medical image segmentation networks usually requires a large amount of human-annotated data. To alleviate the burden of human labor, many semi- or non-supervised methods have been developed. However, due to the complexity of clinical scenarios, insufficient training labels still cause inaccurate segmentation in some difficult local areas, such as heterogeneous tumors and fuzzy boundaries.
Methods: We propose an annotation-efficient training approach, which only requires scribble guidance in the difficult areas. A segmentation network is initially trained with a small amount of fully annotated data and then used to produce pseudo labels for more training data. Human supervisors draw scribbles in the areas of incorrect pseudo labels (i.e., difficult areas), and the scribbles are converted into pseudo label maps using a probability-modulated geodesic transform. To reduce the influence of the potential errors in the pseudo labels, a confidence map of the pseudo labels is generated by jointly considering the pixel-to-scribble geodesic distance and the network output probability. The pseudo labels and confidence maps are iteratively optimized with the update of the network, and the network training is promoted by the pseudo labels and the confidence maps in turn.
Results: Cross-validation on two data sets (brain tumor MRI and liver tumor CT) showed that our method significantly reduces annotation time while maintaining segmentation accuracy in difficult areas (e.g., tumors). Using 90 scribble-annotated training images (annotation time: ~ 9 h), our method achieved the same performance as using 45 fully annotated images (annotation time: > 100 h) at a fraction of the annotation cost.
Conclusion: Compared to conventional full-annotation approaches, the proposed method significantly reduces annotation effort by focusing human supervision on the most difficult regions. It provides an annotation-efficient way to train medical image segmentation networks in complex clinical scenarios.
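The confidence map described in the Methods section combines the pixel-to-scribble geodesic distance with the network's output probability. The abstract does not give the exact formulas, so the following is only a minimal sketch under assumed choices: a Dijkstra-style geodesic distance whose step cost mixes spatial and intensity differences, an exponential distance weighting with a hypothetical scale parameter `tau`, and a simple product with the network probability.

```python
import heapq
import numpy as np

def geodesic_distance(image, scribble_mask):
    """Approximate geodesic distance from scribble pixels via Dijkstra.

    Step cost = 1 (spatial) + absolute intensity difference, a common
    choice for image geodesics; the paper's exact cost is not specified.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    # Seed the queue with all scribble pixels at distance 0.
    for y, x in zip(*np.nonzero(scribble_mask)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + abs(float(image[ny, nx]) - float(image[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist

def confidence_map(image, scribble_mask, net_prob, tau=4.0):
    """Toy confidence: high near scribbles and where the network agrees.

    The exponential weighting and `tau` are illustrative assumptions,
    not the paper's formulation.
    """
    d = geodesic_distance(image, scribble_mask)
    return np.exp(-d / tau) * net_prob
```

For example, with a single scribble pixel at the center of a uniform image, the confidence equals the network probability at the scribble and decays smoothly with geodesic distance from it, which is the qualitative behavior the Methods section describes.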


Keywords: imaging; magnetic resonance imaging; computed tomography; segmentation; annotation; machine learning; deep learning

Free keywords: medical image annotation; deep learning; organ segmentation; interactive segmentation


Contributing organizations


Ministry reporting: Yes

Reporting Year: 2023

JUFO rating: 1


Last updated on 2024-03-07 at 00:06