Explainable AI for predictive maintenance in collaboration with Volvo

From ISLAB/CAISR
Revision as of 15:48, 27 September 2022 by Islab (Talk | contribs)

Title Explainable AI for predictive maintenance in collaboration with Volvo
Summary Developing explainable models for predicting component failures in Volvo trucks
Keywords Explainable AI, Predictive Maintenance, Post-hoc Explanation
TimeFrame Fall 2021
References 1- Adadi, Amina, and Mohammed Berrada. "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)." IEEE Access 6 (2018): 52138-52160.

2- Arrieta, Alejandro Barredo, et al. "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (2020): 82-115.

3- https://dl.acm.org/doi/abs/10.1145/3292500.3332281

4- Cortez, Paulo, and Mark J. Embrechts. "Using sensitivity analysis and visualization techniques to open black box data mining models." Information Sciences 225 (2013): 1-17.

5- Gilpin, Leilani H., et al. "Explaining explanations: An overview of interpretability of machine learning." 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA). IEEE, 2018.

Prerequisites Artificial Intelligence, Data Mining, and Learning Systems courses; good knowledge of machine learning and neural networks; programming skills for implementing machine learning algorithms
Author
Supervisor Mahmoud Rahat, Peyman Mashhadi
Level Master
Status Draft


In the vehicle industry, predicting when a component is going to fail has crucial consequences: not only does a single component's failure matter in itself, it can also trigger the failure of other components. Such failures impose huge costs through vehicle downtime and the damage they cause. There have been several successful AI-based approaches to predictive maintenance; however, a crucial missing aspect is that their decisions are not intuitive or explainable to humans.

Business owners rarely trust decisions made by black-box models. To build trust, model decisions need to be explainable, and that is where explainable AI (XAI) comes into play. Explainable AI has been a growing field within AI and is especially important when the decisions made by models are critical or costly.

To make models transparent to humans, two categories of XAI approaches have emerged. The first covers models that are inherently explainable, such as linear regression and decision trees. The second applies post-hoc explanation techniques to complex, black-box models such as deep neural networks. There is a tradeoff between explainability and model complexity: the explainability of the first category comes at the cost of lower predictive performance, which is why making complex models more transparent is necessary. In this thesis, the main idea is to deploy state-of-the-art methods to make the decisions made by complex predictive-maintenance models explainable.
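To illustrate the post-hoc idea, the sketch below applies permutation importance, one common model-agnostic explanation technique, to a stand-in "black-box" failure predictor. Everything here is illustrative: the data is synthetic, the feature names (oil_temp, vibration, mileage) are invented, and the fixed scoring function merely stands in for a trained model such as a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, standardized sensor readings for n trucks.
# Feature names are purely illustrative, not from any real Volvo dataset.
n = 1000
feature_names = ["oil_temp", "vibration", "mileage"]
X = rng.normal(size=(n, len(feature_names)))

def black_box_predict(X):
    """Stand-in for a trained black-box failure predictor.

    By construction, risk depends strongly on vibration, weakly on
    oil_temp, and not at all on mileage.
    """
    return 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] + 2.0 * X[:, 1])))

# Labels the stand-in model fits perfectly, so baseline accuracy is 1.0.
y = (black_box_predict(X) > 0.5).astype(int)

def accuracy(X, y):
    return np.mean((black_box_predict(X) > 0.5).astype(int) == y)

def permutation_importance(X, y, n_repeats=10):
    """Post-hoc explanation: shuffle one feature at a time and measure
    the resulting drop in accuracy. Larger drop = more important feature."""
    base = accuracy(X, y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - accuracy(Xp, y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(X, y)
for name, score in sorted(zip(feature_names, imp), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Run on this synthetic setup, vibration receives the largest importance and mileage an importance of zero, matching how the stand-in model was constructed; the same procedure applies unchanged to any trained classifier.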