
Boosting Convolutional Neural Networks Using a Bidirectional Fast Gated Recurrent Unit for Text Categorization

Author(s): Assia Belherazem (SIMPA Laboratory, Université des Sciences et de la Technologie d'Oran Mohamed Boudiaf, Algeria) and Redouane Tlemsani (Université des Sciences et de la Technologie d'Oran Mohamed Boudiaf, Algeria)
Copyright: 2022
Volume: 12
Issue: 1
Pages: 20
Source title: International Journal of Artificial Intelligence and Machine Learning (IJAIML)
Editor(s)-in-Chief: Maki K. Habib (The American University in Cairo, Egypt)
DOI: 10.4018/IJAIML.308815

Abstract

This paper proposes a hybrid text classification model, termed CNN-BiFaGRU, that combines a 1D CNN with a single Bidirectional Fast GRU (BiFaGRU). A single convolution layer with 128 filters slides its kernels over the word embeddings to extract local features; Spatial Dropout then drops entire 1D feature maps, and a Max-Pooling layer combines the resulting vectors. A Bidirectional CuDNNGRU block extracts temporal features, whose output is normalized by a Batch Normalization layer and passed to a Fully Connected Layer. The output layer produces the final classification results. Precision and loss scores were used as the main criteria on five datasets (WebKb, R8, R52, AG-News, and 20 NG) to assess the performance of the proposed model. The results indicate that the classifier's precision on the WebKb, R8, and R52 datasets improved significantly, from 90% up to 97%, compared with the best results achieved by other methods such as LSTM and Bi-LSTM. Thus, the proposed model achieves higher precision and lower loss scores than these methods.
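Read literally, the abstract traces the layer sequence embedding, 1D convolution, spatial dropout, max pooling, bidirectional GRU, batch normalization, fully connected, output. The following is a minimal sketch of that sequence in tf.keras, not the authors' implementation: only the 128 convolution filters come from the abstract, while vocabulary size, sequence length, embedding dimension, kernel size, dropout rate, GRU units, and class count are placeholder assumptions, and the standard GRU layer stands in for CuDNNGRU (TensorFlow 2 dispatches to the cuDNN kernel automatically on GPU when possible).

# Minimal sketch of the described CNN-BiFaGRU pipeline (illustrative, not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bifagru(vocab_size=20000, max_len=400, embed_dim=100,
                      gru_units=64, num_classes=8):
    inputs = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)           # word embeddings
    x = layers.Conv1D(128, kernel_size=5, activation='relu')(x)   # 128 filters slide over the embeddings
    x = layers.SpatialDropout1D(0.2)(x)                           # drop entire 1D feature maps
    x = layers.MaxPooling1D(pool_size=2)(x)                       # combine/downsample feature vectors
    x = layers.Bidirectional(layers.GRU(gru_units))(x)            # bidirectional (fast) GRU for temporal features
    x = layers.BatchNormalization()(x)                            # normalize the recurrent output
    x = layers.Dense(64, activation='relu')(x)                    # fully connected layer
    outputs = layers.Dense(num_classes, activation='softmax')(x)  # final classification
    return models.Model(inputs, outputs)

model = build_cnn_bifagru()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])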
