Abstract
This paper proposes FedCSG, a framework designed to enhance graph learning on resource-limited devices. By combining federated learning and split learning, FedCSG enables devices to collaboratively train models without sharing raw data, preserving privacy and security. A continual learning component allows the framework to adapt to new data over time, maintaining high performance even as the underlying data distribution shifts. These properties make FedCSG particularly suitable for dynamic environments in which devices have limited computational resources and must process graph-structured data efficiently. Experiments on graph datasets validate the effectiveness of FedCSG and show that it outperforms state-of-the-art (SOTA) approaches.