Trustworthy AI: Robust and Explainable Time Series Analysis
The School of EECS is hosting the following PhD progress review 3 seminar:
Trustworthy AI: Robust and Explainable Time Series Analysis
Speaker: Kun Han
Host: Dr Miao Xu
Abstract:
The growing dependence on time series data in critical domains such as cybersecurity, healthcare, and finance underscores the necessity of developing trustworthy artificial intelligence (AI) systems characterized by robustness, interpretability, and adaptability to real-world complexities. This thesis investigates three fundamental research questions essential for advancing such systems in time series analysis: (1) How can data format irregularities in time series be effectively handled? (2) How can AI systems robustly learn from weak supervision, particularly given low-quality human involvement? (3) How can models be designed to deliver interpretable and actionable insights?
To address these questions, this work integrates advanced methodologies from graph neural networks (GNNs), multiple instance learning (MIL), attention mechanisms, and risk estimation, establishing a comprehensive framework for robust and explainable time series analysis. Firstly, we propose a dynamic graph neural network augmented with instance-level attention mechanisms, specifically tailored for real-time analysis of irregular multivariate time series, effectively addressing common issues such as misalignment and missing values.
Secondly, we introduce a novel risk estimation method for handling noisy labels, together with a Positive-Unlabeled (PU) learning framework that enables reliable early detection under weak supervision. Lastly, we develop two interpretable frameworks, Hierarchical Interpretable Time Series (HITS) and PatchMIL, which leverage hierarchical multiple instance learning and attention-based mechanisms to elucidate variable contributions and temporal patterns, thereby generating actionable insights. Collectively, these contributions enhance the robustness, transparency, and practicality of AI systems for time series analysis, fostering trustworthy decision-making in dynamic and complex real-world applications.
Bio: Kun Han earned his Master’s degree in Computer Science from the University of Queensland in 2021. Currently, he is a PhD candidate under the joint supervision of Professor Ryan Ko, Dr Miao Xu, Dr Abigail Koay, and Dr Weitong Chen. His research interests include machine learning, time series analysis, and weakly supervised learning, emphasizing the development of trustworthy artificial intelligence solutions capable of addressing complexities and irregularities in real-world data.
About Data Science Seminar
This seminar series is hosted by EECS Data Science.