Concept Re-identification to Explain Online Continual Learning

From ISLAB/CAISR
Title Concept Re-identification to Explain Online Continual Learning
Summary This project aims to apply techniques for handling recurring concept drift to explain the predictions of Online Continual Learning methods.
Keywords
TimeFrame Fall 2024
References
Prerequisites Good knowledge of data stream learning
Author
Supervisor Sepideh Pashami, Nuwan Gunasekara
Level Master
Status Open


Online Continual Learning enables neural networks to learn continuously as data becomes available [1]. In particular, it helps to alleviate catastrophic forgetting when the underlying data stream shifts. Tracking concepts during online continual learning is important because old concepts may reappear. Ideas from data stream learning [1] can therefore be applied to enhance Online Continual Learning by explicitly modelling the concepts present in the stream. Specifically, the learning system could use concept re-identification through meta-features and similarity scores [2] to recognize recurring concepts at prediction time [1]. Once a concept has been identified, example-based explanation techniques [3] could be used to explain the similarities and differences between the current data and the identified concept, as sketched below.
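
The sketch below illustrates one possible shape of this pipeline in Python. It is not the method of [1-3]: the chosen meta-features (per-feature mean, spread, and quartiles), the cosine similarity score, the ConceptMemory class, the window size, and the similarity threshold are all illustrative assumptions.

import numpy as np


def meta_features(window):
    """Summarise a window of feature vectors with simple distributional statistics."""
    return np.concatenate([
        window.mean(axis=0),                 # per-feature mean
        window.std(axis=0),                  # per-feature spread
        np.percentile(window, 25, axis=0),   # lower quartile
        np.percentile(window, 75, axis=0),   # upper quartile
    ])


class ConceptMemory:
    """Keeps one meta-feature signature and a few exemplars per concept seen so far."""

    def __init__(self, similarity_threshold=0.9, exemplars_per_concept=5):
        self.signatures = []                 # one meta-feature vector per concept
        self.exemplars = []                  # a small set of stored examples per concept
        self.threshold = similarity_threshold
        self.k = exemplars_per_concept

    @staticmethod
    def _cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def identify(self, window):
        """Return the index of a re-identified concept, or register a new one."""
        signature = meta_features(window)
        if self.signatures:
            scores = [self._cosine(signature, s) for s in self.signatures]
            best = int(np.argmax(scores))
            if scores[best] >= self.threshold:
                return best                  # recurring concept re-identified
        self.signatures.append(signature)
        self.exemplars.append(window[:self.k].copy())
        return len(self.signatures) - 1

    def explain(self, x, concept_id):
        """Example-based explanation: the stored exemplar of the concept closest to x."""
        stored = self.exemplars[concept_id]
        distances = np.linalg.norm(stored - x, axis=1)
        return stored[int(np.argmin(distances))]


# Toy usage: two synthetic concepts, with the first one reappearing later in the stream.
rng = np.random.default_rng(0)
concept_a = rng.normal(0.0, 1.0, size=(200, 4))
concept_b = rng.normal(5.0, 1.0, size=(200, 4))
memory = ConceptMemory()
for window in (concept_a[:100], concept_b[:100], concept_a[100:]):
    cid = memory.identify(window)
    print("window assigned to concept", cid)          # prints 0, 1, 0
nearest_example = memory.explain(concept_a[150], 0)   # exemplar used to explain a prediction

In the project itself, these placeholders would be replaced by the meta-features and similarity measures studied in [2] and by the example-based explanation protocol evaluated in [3].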


References

1. Gunasekara, N., Pfahringer, B., Gomes, H.M., Bifet, A.: Survey on online streaming continual learning. In: Elkind, E. (ed.) Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), Survey Track, pp. 6628–6637. International Joint Conferences on Artificial Intelligence Organization (2023). https://doi.org/10.24963/ijcai.2023/743

2. Halstead, B., Koh, Y.S., Riddle, P., Pechenizkiy, M., Bifet, A.: Combining diverse meta-features to accurately identify recurring concept drift in data streams. ACM Transactions on Knowledge Discovery from Data 17(8) (2023). https://doi.org/10.1145/3587098

3. Van der Waa, J., Nieuwburg, E., Cremers, A., Neerincx, M.: Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence 291, 103404 (2021). https://doi.org/10.1016/j.artint.2020.103404