Explainable AI by Training Introspection

Title: Explainable AI by Training Introspection
Summary: Research and development of novel XAI methods based on training process information
Keywords: XAI, Neural Networks
TimeFrame:
References:
Prerequisites:
Author:
Supervisor: Jens Lundström, Peyman Mashhadi, Amira Soliman, Atiye Sadat Hashemi
Level: Master
Status: Open


As machine learning has become increasingly successful in commercial applications over the last decades, the demand for model explainability and interpretability has grown accordingly. In many cases, for a decision support system to be credible and useful, its predictions need to be accompanied by explanations. This need has sparked enormous activity in the field of Explainable AI (XAI), both in industry and in AI/ML research, over the past several years. Current XAI methods focus on the end result of the training process, i.e. the final trained model. In this master thesis we explore the hypothesis that additional explanatory insight can be revealed by examining the full trajectory of the model training process. The thesis will explore different data modalities, model types, and explainability aspects.
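As one illustrative (and deliberately simple) instantiation of the idea, the sketch below trains a logistic-regression model with gradient descent while logging the per-example loss at every epoch. Such loss trajectories are one kind of training-process information an introspective XAI method could exploit, e.g. to flag examples the model found hard or ambiguous. The synthetic dataset, the model choice, and the "hard example" heuristic are all assumptions made for illustration, not the method prescribed by the thesis.

 import numpy as np
 
 # Synthetic binary classification data (hypothetical stand-in for a real dataset).
 rng = np.random.default_rng(0)
 n, d = 200, 5
 X = rng.normal(size=(n, d))
 w_true = rng.normal(size=d)
 y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
 
 def sigmoid(z):
     return 1.0 / (1.0 + np.exp(-z))
 
 # Train logistic regression with plain gradient descent, recording the
 # per-example loss at every epoch -- the "training trajectory" that a
 # trajectory-based XAI method could later inspect.
 w = np.zeros(d)
 lr, epochs = 0.1, 50
 loss_trajectory = np.zeros((epochs, n))
 for t in range(epochs):
     p = sigmoid(X @ w)
     eps = 1e-12  # guard against log(0)
     loss_trajectory[t] = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
     w -= lr * X.T @ (p - y) / n  # gradient step on the mean log-loss
 
 # A crude introspection signal: examples whose loss stays high or fluctuates
 # across epochs are "hard"/ambiguous, while examples learned early are "easy".
 mean_loss = loss_trajectory.mean(axis=0)
 loss_var = loss_trajectory.var(axis=0)
 hardest = np.argsort(-mean_loss)[:5]
 print("Hardest examples (by mean loss over training):", hardest)
 print("Their loss variance across epochs:", np.round(loss_var[hardest], 3))

The same logging idea carries over to neural networks, where per-example losses, gradients, or prediction flips across epochs could serve as the raw material for trajectory-based explanations.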