Project description
Time series (TS) data are pervasive across fields such as finance, healthcare, and energy, and are characterized by their sequential nature and temporal dependencies. Traditional machine learning models, however, struggle to capture the complex patterns inherent in such data. Foundation models, pre-trained on large-scale datasets and equipped with deep representational capabilities, offer a promising way to address these challenges in time series tasks. This thesis explores the potential of foundation models for time series analysis, investigating how their strong generalization abilities can be adapted to capture temporal relationships and to improve performance on tasks such as forecasting, anomaly detection, and classification. A special focus of this research is the design of methods, e.g. curriculum learning (CL), that fine-tune foundation models for specific time series tasks while preserving their broad generalization capabilities. Key areas of investigation include adapting transformer-based foundation models to the sequential structure of time series data, incorporating temporal attention mechanisms, and developing transfer learning strategies for applying pre-trained models across different time series domains. Through experimental evaluations on diverse time series datasets, the thesis aims to demonstrate the advantages of foundation models for time series tasks and to propose techniques for optimizing their performance.

Work packages
- Conduct a literature review on time series foundation models (TSFMs), covering transformer-based, non-transformer-based (MLP-, RNN-, and CNN-based), and diffusion-based models.
- Evaluate and develop fine-tuning strategies, e.g. CL, for promising methods to improve performance on several specific time series applications (see the sketch after this list).
- Investigate and explore the interpretability of the resulting models.
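To make the CL work package concrete, the sketch below shows one common instantiation of curriculum learning for fine-tuning a forecaster: rank training examples from easy to hard under a difficulty proxy, then train on a growing fraction of the sorted pool. It is a minimal illustration assuming PyTorch; the ToyForecaster stand-in, the naive-forecast difficulty heuristic, and the stage fractions are all illustrative assumptions, not part of any specific TSFM or of the project itself.

```python
# Minimal curriculum-learning fine-tuning sketch (PyTorch). The model, the
# difficulty proxy, and the stage schedule are illustrative assumptions.
import torch
import torch.nn as nn

class ToyForecaster(nn.Module):
    """Stand-in for a pre-trained TSFM backbone; forecasts one step ahead."""
    def __init__(self, context_len: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_len, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):          # x: (batch, context_len)
        return self.net(x)         # (batch, 1)

def difficulty(window: torch.Tensor) -> float:
    # Difficulty proxy: mean error of a naive "repeat last value" forecast.
    # Smoother series count as "easier" under this heuristic.
    return (window[1:] - window[:-1]).abs().mean().item()

def curriculum_finetune(model, windows, targets, stages=(0.3, 0.6, 1.0),
                        epochs_per_stage=2, lr=1e-3):
    # Sort training windows from easy to hard, then train on a growing
    # fraction of the sorted pool at each curriculum stage.
    order = sorted(range(len(windows)), key=lambda i: difficulty(windows[i]))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for frac in stages:
        idx = order[: max(1, int(frac * len(order)))]
        x = torch.stack([windows[i] for i in idx])
        y = torch.stack([targets[i] for i in idx])
        for _ in range(epochs_per_stage):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        print(f"stage frac={frac:.1f} loss={loss.item():.4f}")

if __name__ == "__main__":
    torch.manual_seed(0)
    ctx = 16
    # Synthetic data: sine waves with increasing noise levels.
    series = [torch.sin(torch.linspace(0, 6.28, ctx + 1)) +
              0.05 * k * torch.randn(ctx + 1) for k in range(50)]
    windows = [s[:ctx] for s in series]
    targets = [s[ctx:] for s in series]
    curriculum_finetune(ToyForecaster(ctx), windows, targets)
```

The difficulty proxy and the stage schedule are the main design choices here; in the thesis these would be replaced by task-specific difficulty measures and an actual pre-trained TSFM backbone.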