Feature extraction enhances model performance
Abstract
Deep learning has risen to prominence among machine learning paradigms largely because of its superior capacity for deep, hierarchical feature extraction, which in turn shows that the efficiency, depth, and richness of feature extraction have a profound impact on model performance. Features serve as the key characteristics that distinguish objects and can be viewed as dimensionality-reduced representations of the data. This paper proposes two effective models, applied to EEG emotion recognition and NL2SQL tasks respectively,
which enhance model performance through optimized feature extraction.
In previous models for processing EEG signals, researchers have typically focused on only a subset of EEG features and have rarely integrated these features comprehensively. To address this limitation, we designed a multi-feature extraction method that improves performance by extracting and combining frequency, spatial, temporal, and global features from EEG signals. We conducted extensive experiments on the SEED and DEAP datasets, generating confusion matrices, t-SNE distributions, and brain-region activation heatmaps
to demonstrate the effectiveness of our model. Additionally, our method incorporates an adaptive graph convolutional network (GCN) that eliminates the requirement for a pre-defined adjacency matrix.
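The adaptive-adjacency idea mentioned above can be illustrated with a minimal sketch: rather than fixing an electrode adjacency matrix in advance, the adjacency itself is a trainable parameter, normalized on each forward pass. This is only an illustrative reconstruction of the general technique, not the paper's implementation; all names (`n_channels`, `n_features`, the layer sizes) are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's code): an "adaptive" graph convolution
# where the adjacency A is a learnable parameter instead of a pre-defined
# electrode graph. Dimensions are placeholders, e.g. 62 SEED electrodes with
# 5 frequency-band features per channel.
rng = np.random.default_rng(0)

n_channels, n_features, n_hidden = 62, 5, 8

# Learnable parameters: adjacency logits and feature projection weights
A_logits = rng.normal(size=(n_channels, n_channels))
W = rng.normal(size=(n_features, n_hidden))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_gcn_layer(X):
    """One graph-convolution layer, ReLU(A X W), with a learned adjacency.

    Row-softmax normalization keeps each channel's incoming edge weights on a
    simplex, so no prior adjacency matrix is required.
    """
    A = softmax(A_logits, axis=1)
    return np.maximum(A @ X @ W, 0.0)

X = rng.normal(size=(n_channels, n_features))  # per-channel band features
H = adaptive_gcn_layer(X)
print(H.shape)  # (62, 8)
```

In a full model, `A_logits` and `W` would be updated by gradient descent along with the rest of the network, letting the graph structure adapt to the emotion-recognition objective.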
For the NL2SQL task, unlike traditional models that are trained from scratch, we designed a framework based on fine-tuning a pre-trained BERT model and conducted experiments on the WikiSQL, Academic, and Spider datasets. The results demonstrate that our model achieves superior performance compared to traditional models in clause prediction and exhibits stronger generalization capabilities, indicating that the prior knowledge embedded in pre-trained models also benefits the model's feature-extraction capacity.
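The clause-prediction framing referenced above can be sketched as follows: in WikiSQL-style NL2SQL, the model predicts each clause separately (SELECT column, aggregator, WHERE conditions) and the SQL string is assembled from those predictions. The hard-coded predictions and helper names below are hypothetical stand-ins, not the paper's actual model outputs or API.

```python
# Hypothetical sketch of WikiSQL-style clause decomposition: separate
# sub-predictions for the SELECT column, the aggregation operator, and the
# WHERE conditions are assembled into one SQL query. The "predictions" here
# are hard-coded placeholders for what a fine-tuned BERT model would emit.
AGG_OPS = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]
COND_OPS = ["=", ">", "<"]

def assemble_sql(table, columns, sel_idx, agg_idx, conds):
    """Build a query from per-clause predictions.

    conds is a list of (column_index, operator_index, value) triples.
    """
    sel = columns[sel_idx]
    if AGG_OPS[agg_idx]:
        sel = f"{AGG_OPS[agg_idx]}({sel})"
    sql = f"SELECT {sel} FROM {table}"
    if conds:
        where = " AND ".join(
            f"{columns[c]} {COND_OPS[op]} '{val}'" for c, op, val in conds
        )
        sql += f" WHERE {where}"
    return sql

# e.g. "How many players are older than 30?"
query = assemble_sql("players", ["name", "age"], 0, 3, [(1, 1, "30")])
print(query)  # SELECT COUNT(name) FROM players WHERE age > '30'
```

Decomposing generation into per-clause classification and span prediction is what lets an encoder such as BERT be fine-tuned for the task rather than requiring a full sequence-to-sequence decoder trained from scratch.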