
Chiyuan He et al.

The rapid development of wearable sensors enables convenient data collection in daily life. Human Activity Recognition (HAR), a prominent research direction for wearable applications, has made remarkable progress in recent years. However, existing efforts mostly focus on improving recognition accuracy and pay limited attention to the model's functional scalability, specifically its ability to learn continually, which greatly restricts its application in open-world scenarios. Moreover, due to storage and privacy concerns, it is often impractical to retain the activity data of different users for subsequent tasks, especially egocentric visual data. Furthermore, the imbalance between the visual and inertial measurement unit (IMU) sensing modalities leads to poor generalization when conventional continual learning techniques are applied. In this paper, we propose a motivational learning scheme that addresses the limited generalization caused by this modal imbalance, enabling foreseeable generalization in a visual-IMU multimodal network. To overcome forgetting, we introduce a robust representation estimation technique and a pseudo-representation generation strategy for continual learning. Experimental results on the egocentric multimodal activity dataset UESTC-MMEA-CL demonstrate the effectiveness of the proposed method. Our method effectively exploits the generalization capability of IMU-based representations and outperforms both general and state-of-the-art continual learning methods under various task settings.
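
The abstract does not detail the pseudo-representation generation strategy; the sketch below is only a generic illustration of the idea of replaying class-level feature statistics instead of raw user data. All names (FEATURE_DIM, estimate_class_statistics, generate_pseudo_representations) are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumption, not the authors' implementation): store only
# per-class Gaussian statistics of learned representations, then sample
# pseudo-representations for replay so raw activity data need not be kept.
import numpy as np

FEATURE_DIM = 128  # assumed dimensionality of the fused visual-IMU embedding


def estimate_class_statistics(features: np.ndarray):
    """Estimate a Gaussian (mean, covariance) over one class's embeddings."""
    mean = features.mean(axis=0)
    # Small diagonal term keeps the covariance well-conditioned for sampling.
    cov = np.cov(features, rowvar=False) + 1e-4 * np.eye(features.shape[1])
    return mean, cov


def generate_pseudo_representations(mean, cov, n_samples, rng=None):
    """Sample pseudo-representations of an old class for replay."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.multivariate_normal(mean, cov, size=n_samples)


# Usage: after finishing a task, keep (mean, cov) per class; while training on
# the next task, mix sampled pseudo-representations into the classifier batch.
old_class_feats = np.random.default_rng(1).normal(size=(200, FEATURE_DIM))
mu, sigma = estimate_class_statistics(old_class_feats)
replay_batch = generate_pseudo_representations(mu, sigma, n_samples=32)
print(replay_batch.shape)  # (32, 128)
```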