Please use this identifier to cite or link to this item: https://knowledgecommons.lakeheadu.ca/handle/2453/5459
Title: Table Extraction with Table Data Using VGG-19 Deep Learning Model
Authors: Iqbal, Muhammad Zahid
Garg, Nitish
Ahmed, Saad Bin
Keywords: table extraction model; information extraction; convolutional neural network; deep neural network
Issue Date: 1-Jan-2025
Publisher: MDPI
Citation: Iqbal, M. Z., Garg, N., & Ahmed, S. B. (2025). Table Extraction with Table Data Using VGG-19 Deep Learning Model. Sensors, 25(1), 203. https://doi.org/10.3390/s25010203
Abstract: In recent years, significant progress has been made in understanding and processing tabular data. However, existing approaches often rely on task-specific features and model architectures, which makes it difficult to accurately extract table structures across diverse layouts, styles, and noise contamination. This study introduces a comprehensive deep learning methodology tailored to the precise identification and extraction of rows and columns from document images containing tables. The proposed model employs table detection and structure recognition to delineate table and column areas, followed by semantic rule-based approaches for row extraction within the tabular sub-regions. Evaluation on the publicly available Marmot table dataset demonstrates state-of-the-art performance. Additionally, transfer learning with VGG-19 is employed to fine-tune the model, further enhancing its capability. Finally, this work fills a gap in the Marmot dataset by contributing additional table-structure annotations, extending its scope from table identification to column detection as well.
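The transfer-learning step mentioned in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical illustration, assuming a Keras-style setup in which a pre-trained VGG-19 encoder feeds a small mask-prediction head for table and column regions; the input size, the decoder head, and the function name build_table_detector are illustrative assumptions, not the authors' exact architecture (consult the paper for details).

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_table_detector(input_shape=(1024, 1024, 3)):
    # Load the VGG-19 convolutional base pre-trained on ImageNet,
    # dropping its fully connected classifier head.
    base = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)

    # Freeze the pre-trained weights so fine-tuning trains only the new head.
    base.trainable = False

    # Illustrative decoder head producing two per-pixel masks:
    # one for table regions and one for column regions.
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(base.output)
    # VGG-19's five pooling stages downsample by 32x; upsample back
    # to the input resolution before predicting the masks.
    x = layers.UpSampling2D(size=(32, 32), interpolation="bilinear")(x)
    table_mask = layers.Conv2D(1, 1, activation="sigmoid", name="table_mask")(x)
    column_mask = layers.Conv2D(1, 1, activation="sigmoid", name="column_mask")(x)

    model = models.Model(inputs=base.input, outputs=[table_mask, column_mask])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_table_detector()
model.summary()

Row extraction would then proceed as a separate, rule-based pass over the predicted table and column regions, as the abstract describes.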
URI: https://knowledgecommons.lakeheadu.ca/handle/2453/5459
Appears in Collections: Department of Computer Science

Files in This Item:
File: Iqbal et al.2025-.Table_Extraction_with_Table_Data_Using_VGG-19_Deep_Learning_Model.pdf
Size: 1.34 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.