
dc.contributor.advisor  Fadi Al Machot
dc.contributor.advisor  Habib Ullah
dc.contributor.author  Husby, Ulrik Egge
dc.date.accessioned  2024-08-23T16:28:44Z
dc.date.available  2024-08-23T16:28:44Z
dc.date.issued  2024
dc.identifier  no.nmbu:wiseflow:7110333:59110536
dc.identifier.uri  https://hdl.handle.net/11250/3147980
dc.description.abstract  The motivation for this thesis is to enhance the interpretability and explainability of artificial intelligence (AI) in healthcare, focusing on breast cancer images. Breast cancer is one of the leading causes of cancer-related deaths among women, making early detection and accurate diagnosis important for reducing mortality. To increase the interpretability, explainability, and trustworthiness of AI in healthcare, two explainable AI (XAI) techniques, SHAP and LIME, are explored and used to explain the underlying model, EfficientNetV2B2. Using metrics such as Intersection over Union (IoU), precision, recall, and F1-score, this thesis evaluates how accurately these techniques identify and localize tumor regions in breast cancer images. Through methodological insights, this thesis highlights that both SHAP and LIME enhanced the transparency and interpretability of AI models, which is a crucial requirement in healthcare. They allowed for a detailed breakdown of the decisions made by the underlying model by highlighting important features in images, contributing to a deeper understanding of and trust in AI decisions. However, both techniques faced challenges, such as computational complexity and inconsistent performance, which limited their practical application. The results indicated that SHAP generally provided higher precision than LIME, suggesting its usefulness in applications where reducing false positives is critical. LIME, on the other hand, provided higher recall than SHAP, which could be essential in scenarios where reducing false negatives is vital, such as early diagnosis, where capturing all positive cases is important. Reducing false negatives is essential in medical diagnosis, since classifying a region as non-cancerous when it is in fact cancerous can have fatal consequences for patients. The thesis underscores the potential of XAI to improve the interpretability of and trust in AI models, especially in healthcare, and to aid early diagnosis, which can result in higher survival rates when assessing breast cancer. Despite the variability in the techniques' performance, the ability of SHAP and LIME to provide visual and intuitive insights into model decisions marks a significant step towards integrating XAI techniques into critical healthcare applications. This study contributes to the ongoing focus on the need for trustworthy and interpretable AI models, suggesting areas for further research and development in XAI techniques.
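
The evaluation the abstract describes, comparing an XAI technique's highlighted region against an annotated tumor region, reduces to pixel-level overlap between two binary masks. The sketch below is a minimal NumPy illustration of the IoU, precision, recall, and F1 definitions under that assumption; the function name mask_metrics and the toy masks are illustrative, not code from the thesis.

    import numpy as np

    def mask_metrics(pred_mask, true_mask):
        """Pixel-level agreement between a binarized XAI mask and an annotation."""
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        tp = np.count_nonzero(pred & true)    # tumor pixels correctly highlighted
        fp = np.count_nonzero(pred & ~true)   # highlighted pixels outside the tumor
        fn = np.count_nonzero(~pred & true)   # tumor pixels the explanation missed
        iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return {"IoU": iou, "Precision": precision, "Recall": recall, "F1": f1}

    # Toy example: the explanation covers 4 of 6 annotated tumor pixels and
    # nothing outside the annotation, so precision is 1.0 and recall is 4/6.
    pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
    true = np.zeros((4, 4), dtype=bool); true[1:3, 1:4] = True
    print(mask_metrics(pred, true))  # IoU≈0.67, Precision=1.0, Recall≈0.67, F1=0.8

This framing also makes the abstract's trade-off concrete: a technique that highlights conservatively tends toward fewer false positives (higher precision, as reported for SHAP), while one that highlights more broadly tends toward fewer false negatives (higher recall, as reported for LIME).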
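For the explanation side, a hedged sketch of how a binary relevance mask can be obtained from LIME's public image API (lime.lime_image.LimeImageExplainer and get_image_and_mask, real entry points of the lime package) follows. The dummy classifier_fn and random image are placeholders for the trained EfficientNetV2B2 model and the breast cancer data, which are not reproduced here; this is not the thesis's actual pipeline.

    import numpy as np
    from lime import lime_image

    def classifier_fn(batch):
        # Stand-in for the trained model's predict function: returns an (N, 2)
        # array of class probabilities, here a dummy score from mean pixel
        # intensity so the sketch runs without the actual model or data.
        m = batch.mean(axis=(1, 2, 3)) / 255.0
        return np.stack([1.0 - m, m], axis=1)

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(96, 96, 3)).astype(np.float64)  # placeholder image

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=1, hide_color=0,
        num_samples=200,  # number of perturbed samples; a main driver of LIME's runtime cost
    )
    _, lime_mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False,
    )
    # lime_mask is a 2-D array, nonzero where LIME highlights superpixels; it can
    # be binarized with `lime_mask > 0` and scored against an annotated tumor
    # mask using a function like mask_metrics above.

The num_samples parameter illustrates the computational-cost challenge the abstract mentions: each explanation requires hundreds of forward passes through the underlying model.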
dc.language  eng
dc.publisher  Norwegian University of Life Sciences
dc.title  Exploring Breast Cancer Diagnosis: A Study of SHAP and LIME in XAI-Driven Medical Imaging
dc.type  Master thesis

