    Feature extraction enhances model performance

    View/Open
    WangX2025m-1a.pdf (39.58Mb)
    Date
    2025
    Author
    Wang, Xiaofan
    Abstract
Deep learning has emerged as a prominent approach among machine learning paradigms owing to its superior capability for deep feature extraction. This, in turn, shows that the efficiency, depth, and richness of feature extraction have a profound impact on model performance. Features serve as key characteristics for distinguishing objects and act as dimensionality-reduced representations of data. This paper proposes two effective models, applied to EEG emotion recognition and NL2SQL tasks respectively, that enhance model performance through optimized feature extraction. Previous models for processing EEG signals have typically focused on only a subset of EEG features and have rarely integrated them comprehensively. To address this limitation, we designed a multi-feature extraction method that improves performance by extracting and combining frequency, spatial, temporal, and global features from EEG signals. We conducted extensive experiments on the SEED and DEAP datasets, generating confusion matrices, t-SNE distributions, and brain-region activation heatmaps to demonstrate the effectiveness of our model. Additionally, our method incorporates an adaptive GCN that eliminates the need for pre-defined adjacency matrices. For the NL2SQL task, unlike traditional models trained from scratch, we designed a framework based on fine-tuning a pre-trained BERT model and conducted experiments on the WikiSQL, Academic, and Spider datasets. The results demonstrate that our model achieves superior performance in clause prediction compared to traditional models and exhibits stronger generalization capabilities, indicating that the prior knowledge embedded in pre-trained models also benefits the model's feature extraction capacity.
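As context for the adaptive GCN mentioned in the abstract, the following is a minimal sketch of a graph convolution layer whose adjacency matrix is learned end-to-end rather than pre-defined from electrode connectivity. The layer name, feature dimensions, and softmax row-normalization are illustrative assumptions for the sketch, not details taken from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    """Graph convolution with a learnable adjacency matrix, so no
    pre-defined EEG electrode connectivity is required (illustrative)."""

    def __init__(self, num_nodes: int, in_features: int, out_features: int):
        super().__init__()
        # Learnable adjacency logits, one weight per electrode pair.
        self.adj_logits = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_features), e.g. per-channel EEG features.
        # Row-normalize the learned adjacency so each node aggregates a
        # convex combination of the other nodes' features.
        adj = F.softmax(self.adj_logits, dim=-1)
        aggregated = torch.einsum("ij,bjf->bif", adj, x)
        return F.relu(self.linear(aggregated))

# Hypothetical usage: 62 EEG channels (as in SEED), 5 band-power features each.
layer = AdaptiveGraphConv(num_nodes=62, in_features=5, out_features=32)
out = layer(torch.randn(8, 62, 5))  # -> (8, 62, 32)
```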
    URI
    https://knowledgecommons.lakeheadu.ca/handle/2453/5468
    Collections
    • Electronic Theses and Dissertations from 2009 [1635]

Lakehead University Library
Contact Us | Send Feedback