%0 Journal Article %T Temporal-spatial cross attention network for recognizing imagined characters. %A Xu M %A Zhou W %A Shen X %A Qiu J %A Li D %J Sci Rep %V 14 %N 1 %D 2024 07 4 %M 38965248 %F 4.996 %R 10.1038/s41598-024-59263-5 %X Previous research has primarily employed deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to decode imagined character signals, treating the temporal and spatial features of the signals in a sequential, parallel, or single-feature manner. However, the cross-relationships between temporal and spatial features have received little attention, despite the inherent association between channels and sampling points in Brain-Computer Interface (BCI) signal acquisition, which carries significant information about brain activity. To address this gap, we propose a Temporal-Spatial Cross-Attention Network model, named TSCA-Net. The TSCA-Net comprises four modules: the Temporal Feature (TF) module, the Spatial Feature (SF) module, the Temporal-Spatial Cross (TSCross) module, and the Classifier. The TF module combines an LSTM and a Transformer to extract temporal features from BCI signals, while the SF module captures spatial features. The TSCross module learns the correlations between the temporal and spatial features, and the Classifier predicts the label of the BCI data from the learned features. We validated the TSCA-Net model on publicly available handwritten-character datasets that recorded spiking activity from two microelectrode arrays (MEAs). The results showed that the proposed TSCA-Net outperformed the comparison models (EEG-Net, EEG-TCNet, S3T, GRU, LSTM, R-Transformer, and ViT) in accuracy, precision, recall, and F1 score, achieving 92.66%, 92.77%, 92.70%, and 92.58%, respectively. The TSCA-Net model demonstrated a 3.65% to 7.49% improvement in accuracy over the comparison models.