Abstract
Recent advances in machine learning and deep learning have increased interest in synthetic aperture radar (SAR) image analysis. This paper proposes HACGNet, a hybrid network for SAR image classification that combines a convolutional neural network (CNN) and a graph convolutional network (GCN) with pixel- and superpixel-level feature fusion and a spatial attention mechanism. The CNN operates on individual pixels to extract local features, while the GCN operates on superpixel-based nodes, representing the image as a graph to capture its structure and reduce computational load. The spatial attention mechanism helps the model focus on the most relevant regions of the SAR image. Graph encoder and decoder modules exchange features between image pixels and graph nodes, capturing both local and global information. Evaluation on two datasets using overall accuracy (OA), average accuracy (AA), and the Kappa coefficient shows that HACGNet outperforms other state-of-the-art methods.