
Exploring Breast Cancer Diagnosis: A Study of SHAP and LIME in XAI-Driven Medical Imaging

Husby, Ulrik Egge
Master thesis
no.nmbu:wiseflow:7110333:59110536.pdf (3.619Mb)
Permanent link
https://hdl.handle.net/11250/3147980
Publication date
2024
Collections
  • Master's theses (RealTek) [2009]
Abstract
The motivation for this thesis is to enhance the interpretability and explainability of artificial intelligence (AI) in healthcare, focusing on breast cancer images. Breast cancer is one of the leading causes of cancer-related deaths among women, making early detection and accurate diagnosis important for reducing mortality. To increase the interpretability, explainability, and trustworthiness of AI in healthcare, two explainable AI (XAI) techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), are explored and used to explain the underlying EfficientNetV2B2 model. Using metrics such as Intersection over Union (IoU), precision, recall, and F1-score, this thesis evaluates how accurately these techniques identify and localize tumor regions in breast cancer images.
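
To make the metric-based evaluation concrete, the following minimal Python sketch (not the thesis' code; the function name, mask shapes, and the assumption that explanations are already binarized are illustrative only) shows how an explanation mask could be scored against a ground-truth tumor mask with exactly these metrics.

import numpy as np

def mask_metrics(explanation_mask, tumor_mask):
    """Score a binary explanation mask against a binary ground-truth tumor mask."""
    pred = explanation_mask.astype(bool)
    truth = tumor_mask.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # tumor pixels correctly highlighted
    fp = np.logical_and(pred, ~truth).sum()   # highlighted but non-tumor
    fn = np.logical_and(~pred, truth).sum()   # tumor pixels missed
    union = np.logical_or(pred, truth).sum()
    iou = tp / union if union else 0.0                  # Intersection over Union
    precision = tp / (tp + fp) if tp + fp else 0.0      # penalizes false positives
    recall = tp / (tp + fn) if tp + fn else 0.0         # penalizes false negatives
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"IoU": iou, "precision": precision, "recall": recall, "F1": f1}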

Through methodological insights, this thesis highlights that both SHAP and LIME enhanced the transparency and interpretability of AI models, a crucial requirement in healthcare. By highlighting important features in images, they allowed for a detailed breakdown of the decisions made by the underlying model, contributing to a deeper understanding of and trust in AI decisions. However, both techniques faced challenges such as computational complexity and inconsistent performance, which limited their practical application.
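
Since the abstract describes SHAP and LIME as highlighting important image features, here is a hedged sketch of how such explanations are commonly produced with the shap and lime Python libraries. The model weights, input image, and all parameter values are stand-ins, not the thesis' pipeline.

import numpy as np
import tensorflow as tf
import shap
from lime import lime_image

# Stand-in classifier and input; the thesis' fine-tuned EfficientNetV2B2 and
# its breast cancer images are not reproduced here.
model = tf.keras.applications.EfficientNetV2B2(weights=None, classes=2)
image = np.random.rand(260, 260, 3).astype(np.float32)

def predict(batch):
    # Returns class probabilities for a batch of images.
    return model.predict(batch, verbose=0)

# LIME: perturbs superpixels and fits a local surrogate model around the image.
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(image, predict, top_labels=1, num_samples=1000)
_, lime_mask = lime_exp.get_image_and_mask(lime_exp.top_labels[0], positive_only=True, num_features=5)

# SHAP: approximates Shapley values by masking (here: inpainting) image regions.
masker = shap.maskers.Image("inpaint_telea", image.shape)
shap_explainer = shap.Explainer(predict, masker)
shap_values = shap_explainer(image[np.newaxis, ...], max_evals=500)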

The results indicated that SHAP generally provided higher precision than LIME, suggesting its usefulness in applications where reducing false positives is critical. LIME, on the other hand, provided higher recall than SHAP, which could be essential in scenarios where reducing false negatives is vital, such as early diagnosis, where capturing all positives is important. Reducing false negatives is essential in medical diagnosis, since classifying a region as non-cancerous when it is in fact cancerous can have fatal consequences for patients.
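
One way to see this precision/recall trade-off, sketched below on invented data (none of these values come from the thesis), is that the threshold used to binarize an attribution heatmap directly shifts the balance between false positives and false negatives.

import numpy as np

heatmap = np.random.rand(224, 224)            # stand-in attribution heatmap
truth = np.zeros((224, 224), dtype=bool)      # stand-in ground-truth tumor mask
truth[60:120, 80:150] = True

for t in (0.5, 0.7, 0.9):
    pred = heatmap >= t                       # binarize attributions at threshold t
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Raising the threshold highlights fewer pixels: precision tends up, recall down.
    print(f"t={t}: precision={precision:.2f} recall={recall:.2f}")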

The thesis underscores the potential of XAI to improve the interpretability of and trust in AI models, especially in healthcare, and to aid early diagnosis, which can lead to higher survival rates in breast cancer assessment. Despite the variability in the techniques' performance, the ability of SHAP and LIME to provide visual and intuitive insights into model decisions marks a significant step towards integrating XAI techniques into critical healthcare applications. This study contributes to the ongoing work on trustworthy and interpretable AI models and suggests areas for further research and development in XAI techniques.
Publisher
Norwegian University of Life Sciences
