Inproceedings

Time-Series Representation Learning via Heterogeneous Spatial-Temporal Contrasting for Remaining Useful Life Prediction

International Conference on Pattern Recognition (ICPR), 2024 (accepted).

Abstract

Classical contrastive learning paradigms rely on manual augmentations, such as random cropping, masking, dropping, or added noise, to create divergent sample views from the original data. However, the choice of augmentation is often subjective and may destroy latent patterns in the sample. In response, this paper introduces a novel contrastive learning paradigm that requires no sample-view augmentation, termed Heterogeneous Spatial-Temporal Representation Contrasting (HSTRC). Instead of augmenting sample views, we employ dual branches with a heterogeneous, flipped spatial-temporal structure to extract two distinct hidden feature views from the same source data, which avoids disturbing the original time series. By combining cross-branch spatial-temporal contrastive and projected-feature contrastive loss functions, HSTRC effectively extracts robust representations from unlabeled time series data. Remarkably, by fine-tuning only the fully connected layers on top of the representations extracted by HSTRC, we achieve the best performance across several Remaining Useful Life prediction datasets, with up to 19.2% improvement over state-of-the-art supervised learning methods and classical contrastive learning paradigms. Moreover, extensive experiments demonstrate HSTRC's effectiveness in active learning, out-of-distribution testing, and transfer learning scenarios.
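The core idea in the abstract, two branches that process the same series in flipped spatial-temporal order and are then contrasted against each other, can be illustrated with a minimal NumPy sketch. Everything below (the linear mixing layers, the pooling, the InfoNCE-style loss) is a hypothetical simplification for illustration, not the authors' HSTRC implementation, which uses learned heterogeneous branches and an additional projected-feature contrastive loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_mix(x, w):
    # mix along the time axis: (batch, time, sensors) x (time, time)
    return np.einsum('bts,tu->bus', x, w)

def spatial_mix(x, w):
    # mix along the sensor axis: (batch, time, sensors) x (sensors, sensors)
    return x @ w

def embed(x):
    # pool over time into one vector per sample, then L2-normalize
    z = x.mean(axis=1)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

batch, T, S = 4, 16, 8
x = rng.standard_normal((batch, T, S))   # one multivariate time series per row
wt = rng.standard_normal((T, T)) * 0.1   # hypothetical temporal weights
ws = rng.standard_normal((S, S)) * 0.1   # hypothetical spatial weights

# Branch A applies temporal then spatial mixing; Branch B uses the flipped
# order. Both views come from the SAME raw series -- no data augmentation.
za = embed(spatial_mix(temporal_mix(x, wt), ws))
zb = embed(temporal_mix(spatial_mix(x, ws), wt))

def cross_branch_info_nce(za, zb, tau=0.1):
    # cross-branch contrastive loss: the two views of the same sample
    # (matching row indices) are positives, all other pairs negatives
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

loss = cross_branch_info_nce(za, zb)
print(float(loss))
```

In this sketch, minimizing the loss pulls the two branch embeddings of each sample together while pushing apart embeddings of different samples, which is the general mechanism the paper's cross-branch contrastive objective builds on.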
