
dc.contributor.advisor: Futsæther, Cecilia Marie
dc.contributor.author: Moe, Yngve Mardal
dc.date.accessioned: 2019-05-12T12:21:31Z
dc.date.available: 2019-05-12T12:21:31Z
dc.date.issued: 2019
dc.identifier.uri: http://hdl.handle.net/11250/2597305
dc.description.abstract: Purpose: The delineation of tumours and malignant lymph nodes in medical images is an essential part of radiotherapy. However, it is both time-consuming and prone to inter-observer variability. Automating this process is therefore beneficial, as it will reduce both the time spent on radiotherapy planning and the inter-observer variability. One way of automating delineation is with neural networks. Deep learning experiments, however, require tuning a vast number of parameters. A systematic methodology for conducting such experiments is therefore vital to ensure reproducibility. This thesis introduces the theory of deep learning and presents the SciNets library, a framework for rapid model prototyping with guaranteed reproducibility. This framework was used to develop a model for automatic delineation of the gross tumour volume and malignant lymph nodes in the head and neck region.

Methods: The SciNets library (available at https://github.com/yngvem/scinets/) is a Python library that creates and trains deep learning models parametrised by a series of JSON files containing the model hyperparameters. It also includes an extensive visualisation suite for inspecting the training process. This library was used to assess the applicability of neural networks for automatic tumour delineation. The dataset consisted of medical scans of 197 patients who received treatment at Oslo University Hospital, The Radium Hospital. 18F-FDG-PET co-registered with contrast-enhanced CT scans (i.e. contrast-enhanced PET/CT scans) was available for all patients. The image dataset was split into a training set (142 patients), a validation set (15 patients) and a test set (40 patients), stratified by tumour stage. An extensive parameter sweep was performed on this dataset. All tested models were based on the U-Net architecture. Both the cross entropy and Dice losses were tested, as well as the novel F2 and F4 losses introduced herein. Channel dropping and Hounsfield windowing were used for preprocessing, with varying window centres and widths. Both Adam and SGDR+momentum were tested to optimise the loss. Furthermore, improved ResNet layer types were tested against standard convolutional layers. Models were compared based on the average Dice score per image slice in the validation set. Only the highest performing models utilising only CT information, only PET information, and both PET and CT information were used to delineate the test set. The sensitivity (sens), specificity (spec), positive predictive value (PPV) and Dice score were computed for these models. Additional analysis was performed on the highest performing model utilising only CT information and the highest performing model utilising both PET and CT information. Ground truth and predicted delineations were visualised for a subset of the patients in the validation and test sets for these models.

Results: The parameter sweep consisted of over 150 different parameter combinations and showed that the newly introduced F2 and F4 losses provided a notable increase in performance compared to the cross entropy and Dice losses. Furthermore, Hounsfield windowing yielded a systematic increase in performance; however, the choice of window centre and width did not yield any noticeable difference. There was no difference between the Adam and SGDR+momentum optimisers in either performance or training time. However, using too low a learning rate with the Adam optimiser resulted in poor performance on out-of-sample data (i.e. the validation set).
Models utilising ResNet layers experienced exploding gradients on the skip connections and did not converge. The highest performing PET/CT model (Dice: 0.66, Sens: 0.79, Spec: 0.99, PPV: 0.62) achieved higher overall performance than the PET-only model (Dice: 0.64, Sens: 0.69, Spec: 0.99, PPV: 0.64) and the CT-only model (Dice: 0.56, Sens: 0.58, Spec: 0.99, PPV: 0.62).

Conclusion: We have demonstrated that deep learning is a promising avenue for automatic delineation of regions of interest in medical images. The SciNets library was used to conduct a systematic and reproducible parameter sweep for automatic delineation of tumours and malignant lymph nodes in patients with head and neck cancer. This parameter sweep yielded a recommended set of hyperparameters for similar experiments, as well as recommendations for further exploration. The Dice performance of both the PET/CT and the CT-only model is similar to that expected between two radiologists. We cannot, however, conclude that the automatically generated segmentation maps are of similar quality to those generated by radiologists. The Dice coefficient does not discern the severity of mistakes, only the percentage of overlap between the predicted delineation maps and the ground truth. Oncologists should, therefore, be consulted when assessing the quality of delineation masks in future experiments. [nb_NO]
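
Note on preprocessing: the Hounsfield windowing mentioned in the Methods is, in general terms, a clip-and-rescale of the CT intensities to a window around a chosen centre. The sketch below is a minimal illustration of that operation, not the SciNets implementation; the function name and the example window (centre 70 HU, width 200 HU) are assumptions, since the abstract only states that several centres and widths were swept.

    import numpy as np

    def hounsfield_window(ct_image, centre=70.0, width=200.0):
        # Keep only intensities inside [centre - width/2, centre + width/2]
        # and rescale the clipped values to the range [0, 1].
        low = centre - width / 2.0
        high = centre + width / 2.0
        windowed = np.clip(np.asarray(ct_image, dtype=float), low, high)
        return (windowed - low) / (high - low)

Narrowing the window in this way discards irrelevant intensity ranges (e.g. air and dense bone) so the network sees more contrast in the soft-tissue range where tumours appear.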
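
Note on the F2 and F4 losses: the abstract does not spell out their formulation. A common way to make an F-beta objective differentiable is to compute a "soft" F-beta score from the predicted probabilities and minimise one minus that score; the sketch below assumes that formulation and uses illustrative names not taken from the SciNets library. With beta = 1 the same expression reduces to the soft Dice (F1) loss, so the F2 and F4 losses can be read as recall-weighted generalisations of the Dice loss.

    import numpy as np

    def soft_f_beta_loss(probabilities, targets, beta=2.0, eps=1e-8):
        # Soft true positives, false positives and false negatives computed
        # from per-voxel probabilities (in [0, 1]) and a binary ground-truth mask.
        p = np.asarray(probabilities, dtype=float).ravel()
        t = np.asarray(targets, dtype=float).ravel()
        tp = np.sum(p * t)
        fp = np.sum(p * (1.0 - t))
        fn = np.sum((1.0 - p) * t)

        # Soft F-beta score; beta = 2 or 4 penalises missed tumour voxels
        # (false negatives) more heavily than false positives.
        beta2 = beta ** 2
        f_beta = ((1.0 + beta2) * tp) / ((1.0 + beta2) * tp + beta2 * fn + fp + eps)

        # Minimising the loss maximises the soft F-beta score.
        return 1.0 - f_beta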
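
For reference, the evaluation metrics reported above have the standard per-voxel definitions in terms of true/false positives and negatives counted against the ground-truth delineation:

    \[
    \mathrm{Dice} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}, \qquad
    \mathrm{Sens} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad
    \mathrm{Spec} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}}, \qquad
    \mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}
    \]

Because the background class dominates the images, specificity tends to be close to one regardless of delineation quality, which is consistent with the 0.99 values reported above; the Dice score and PPV are the more informative numbers for the tumour class.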
dc.language.iso: eng [nb_NO]
dc.publisher: Norwegian University of Life Sciences, Ås [nb_NO]
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.subject: Machine learning [nb_NO]
dc.subject: Oncology [nb_NO]
dc.subject: Tumour delineation [nb_NO]
dc.subject: Tumor delineation [nb_NO]
dc.subject: Image segmentation [nb_NO]
dc.title: Deep learning for automatic delineation of tumours from PET/CT images [nb_NO]
dc.type: Master thesis [nb_NO]
dc.subject.nsi: VDP::Mathematics and natural science: 400 [nb_NO]
dc.description.localcode: M-MR [nb_NO]

