
"Deep-Learning-based Detection and Segmentation of Vestibular Schwannoma: A Multi-Center and Multi-Vendor MRI Study"

Qian Tao, Stephan Romeijn, Olaf Neve, Nick de Boer, Willem Grootjans, Mark C. Kruit, Boudewijn P.F. Lelieveldt, Jeroen Jansen, Erik Hensen, Berit Verbist and Marius Staring

Abstract

PURPOSE: Accurate measurement of vestibular schwannoma (VS) is important for evaluating VS progression and for treatment planning. In clinical practice, linear measurements are performed manually on MRI. Manual measurement is time-consuming, subjective, and restricted to 2D planes. In this study, we aim to develop a deep-learning convolutional neural network (CNN) model to automatically detect and segment VS in 3D from gadolinium (Gd)-enhanced MRI.

METHOD AND MATERIALS: In total, 124 patients with unilateral hearing loss referred for an MRI examination were enrolled, including 84 VS-positive and 40 VS-negative cases. MRI data were acquired at 37 centers, using 12 different MRI scanners from 3 vendors. The typical image resolution was 0.35x0.35x1 mm, and the field of view ranged from 130x130x24 mm to 270x270x188 mm. In the 84 positive cases, the VS was manually delineated by two observers, supervised by a senior radiologist. The 124 subjects were randomly divided into three non-overlapping sets: a training set (N=72), a validation set (N=18), and a test set (N=34). We trained a 3D no-new-U-Net (nnU-Net) CNN for both VS detection and segmentation. Training was performed on an NVIDIA Tesla V100 graphics processing unit with 16 GB of memory.
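As an illustration of the data partitioning described above, the random, non-overlapping 72/18/34 split of the 124 subjects can be sketched as follows (a minimal sketch; the random seed and the use of NumPy are assumptions, not the authors' code):

```python
import numpy as np

# Hypothetical reproduction of the random split of 124 subjects into
# non-overlapping training (72), validation (18), and test (34) sets.
rng = np.random.default_rng(42)  # seed is an assumption for reproducibility
subjects = np.arange(124)        # subject indices stand in for patient IDs
rng.shuffle(subjects)

train, val, test = subjects[:72], subjects[72:90], subjects[90:]
print(len(train), len(val), len(test))  # → 72 18 34
```

Shuffling once and slicing guarantees the three sets are disjoint, so no patient contributes to both training and evaluation.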

RESULTS: Applied to the test set, the CNN correctly detected VS in 24 subjects and correctly excluded VS in 10 (sensitivity 100%, specificity 100%). We evaluated the Dice index, the Hausdorff distance, and the surface-to-surface (S2S) distance in two scenarios: CNN vs. observer 1, and observer 1 vs. observer 2. No significant differences were found: Dice 0.91±0.06 vs. 0.92±0.05 (p=0.5 by paired Wilcoxon test), Hausdorff 1.3±1.5 mm vs. 1.2±1.0 mm (p=0.6), S2S 0.4±0.3 mm vs. 0.4±0.2 mm (p=0.9). The annotation time was 6.0±3.3 min for the observer and 2.5±2.8 min for the CNN model.
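For readers who want to compute the same evaluation measures on their own data, the Dice index and the paired Wilcoxon signed-rank test can be sketched as below (the masks and per-case scores are illustrative toys, not the study's data; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 3D masks standing in for a manual delineation and a CNN segmentation
# (hypothetical data; the study used Gd-enhanced MRI volumes).
gt = np.zeros((32, 32, 8), dtype=bool)
gt[8:24, 8:24, 2:6] = True               # "observer 1" mask
pred = np.roll(gt, shift=1, axis=0)      # "CNN" mask, shifted by one voxel

print(f"Dice = {dice(pred, gt):.3f}")    # → Dice = 0.938

# Paired Wilcoxon signed-rank test over per-case Dice scores, comparing
# CNN-vs-observer1 against observer1-vs-observer2 (values are illustrative).
dice_cnn = np.array([0.91, 0.93, 0.88, 0.92, 0.90, 0.94])
dice_obs = np.array([0.92, 0.92, 0.90, 0.91, 0.93, 0.92])
stat, p = wilcoxon(dice_cnn, dice_obs)
print(f"Wilcoxon p = {p:.2f}")
```

The test is paired because both scenarios are scored on the same cases; a large p-value, as reported above, indicates no detectable difference between the CNN-observer and inter-observer agreement.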

CONCLUSION: In a multi-center, multi-vendor setting, a CNN model can accurately detect and delineate VS in 3D from Gd-enhanced MRI, facilitating the diagnosis and measurement of VS in clinical practice.

 

Download

PDF (2 pages, 618 kB)

Copyright © 2020 by the authors. Published version © 2020. Personal use of this material is permitted. However, permission to reprint or republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the copyright holder.

 

BibTeX entry

@inproceedings{Tao:2020,
  author    = {Tao, Qian and Romeijn, Stephan and Neve, Olaf and de Boer, Nick and Grootjans, Willem and Kruit, Mark C. and Lelieveldt, Boudewijn P.F. and Jansen, Jeroen and Hensen, Erik and Verbist, Berit and Staring, Marius},
  title     = {Deep-Learning-based Detection and Segmentation of Vestibular Schwannoma: A Multi-Center and Multi-Vendor MRI Study},
  booktitle = {RSNA},
  address   = {Chicago, USA},
  month     = {November},
  year      = {2020},
}

last modified: 16-09-2020 | Copyright 2004-2024 © by Marius Staring