Project description
In the vehicle industry, predicting when a component is going to fail is of crucial importance: not only can the failure of a single component be costly in itself, it can also trigger the failure of other components. These failures impose large costs through vehicle downtime and the damage they cause. Several AI-based approaches to predictive maintenance have been successful, but a crucial aspect is missing: their decisions are not intuitive or explainable to humans. Business owners rarely trust the decisions of black-box models, and building that trust requires model decisions to be explainable. This is where explainable AI (XAI) comes into play.

Explainable AI is a growing field, and it is of particular importance when the decisions made by models are critical or costly. Two categories of XAI have emerged to make models transparent to humans. The first comprises models that are inherently explainable, such as linear regression and decision trees. The second comprises post-hoc explanation methods, which are applied to complex black-box models such as deep neural networks. There is a tradeoff between explainability and model complexity: the explainability of inherently interpretable models typically comes at the cost of lower predictive performance. Hence the need to make complex models more transparent. The main idea of this thesis is to deploy state-of-the-art XAI methods to make the decisions of complex predictive-maintenance models explainable.
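To illustrate the post-hoc category, the sketch below applies one simple model-agnostic explanation technique, permutation feature importance, to a black-box classifier trained on synthetic failure data; state-of-the-art methods such as SHAP and LIME follow the same post-hoc pattern. The dataset and the sensor feature names are hypothetical placeholders, not part of the project.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical sensor readings predicting component failure (binary label).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["temperature", "vibration", "pressure",
                 "rpm", "voltage", "mileage"]  # assumed names, for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble model that is accurate but not
# inherently interpretable.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc explanation: measure how much the model's score drops when
# each feature is shuffled, without inspecting the model's internals.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:12s} importance: {mean:.3f} +/- {std:.3f}")

The output ranks the (hypothetical) sensor features by their contribution to the failure predictions, which is the kind of human-readable justification the thesis aims to provide for more complex models.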