Revision as of 21:16, 24 August 2024

Title: Explainable Decision Forest
Summary: Designing a deep model that uses decision trees instead of artificial neurons
Keywords: deep decision forest, explainable AI
TimeFrame: Autumn 2024
References:
Prerequisites:
Author:
Supervisor: Sławomir Nowaczyk
Level:
Status: Open


The success of Deep Learning is largely attributed to its ability to create and extract "hierarchical features" from data. This is achieved using artificial neurons, or perceptrons, trained with the backpropagation algorithm.

The price, however, is the very large size of such models, which translates into high computational cost and a "black-box" nature, i.e., a lack of explainability.

This project aims to explore ways to train a deep model as a chain of decision trees, analogous to layers in a neural network. Such an approach promises to significantly reduce model complexity and increase interpretability.
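One way the "chain of decision trees" idea could look in practice is a cascade in which each layer is a shallow tree, and the next layer receives the original features augmented with the previous layer's class-probability outputs (in the spirit of cascade-forest methods). The sketch below is a minimal illustration under these assumptions, using scikit-learn; the function names, layer design, and dataset are hypothetical and not the project's actual algorithm.

```python
# Hypothetical sketch: a "deep" model built from layers of shallow
# decision trees instead of artificial neurons. Layer k is trained on
# the original features plus layer k-1's predicted class probabilities,
# so each layer can refine the previous one's "learned features".
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def fit_cascade(X, y, n_layers=3, max_depth=2, seed=0):
    """Train a stack of shallow trees, layer by layer."""
    layers, feats = [], X
    for k in range(n_layers):
        tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed + k)
        tree.fit(feats, y)
        layers.append(tree)
        # Augment the inputs with this layer's class probabilities,
        # which act as hand-rolled "hierarchical features".
        feats = np.hstack([X, tree.predict_proba(feats)])
    return layers


def predict_cascade(layers, X):
    """Run inputs through every layer; the last layer's vote is the prediction."""
    feats = X
    for tree in layers:
        proba = tree.predict_proba(feats)
        feats = np.hstack([X, proba])
    return proba.argmax(axis=1)


X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
layers = fit_cascade(X_tr, y_tr)
acc = (predict_cascade(layers, X_te) == y_te).mean()
print(f"cascade accuracy: {acc:.2f}")
```

Because each layer is a depth-limited tree, every intermediate "feature" remains a human-readable decision rule, which is exactly the interpretability advantage the project hopes to exploit over neuron-based layers.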