
A Heterogeneous AdaBoost Ensemble Based Extreme Learning Machines for Imbalanced Data

Author(s): Adnan Omer Abuassba (University of Science and Technology Beijing (USTB), Beijing, China & Arab Open University - Palestine, Ramallah, Palestine), Dezheng Zhang (University of Science and Technology Beijing (USTB), Beijing, China) and Xiong Luo (University of Science and Technology Beijing (USTB), Beijing, China)
Copyright: 2022
Pages: 18
Source title: Research Anthology on Machine Learning Techniques, Methods, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-6684-6291-1.ch030


Abstract

Extreme learning machine (ELM) is an effective learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs). It comes in several variants, such as kernel ELM and regularized ELM, which diversify the kernel or feature mapping function while retaining fast training and good generalization. Handling imbalanced data has long been a central concern for learning algorithms seeking satisfactory analytical results: a skewed class distribution poses serious obstacles to learning tasks in real-world applications such as online visual tracking and image quality assessment. This article addresses the issue with a diverse AdaBoost-based ELM ensemble (AELME) for imbalanced binary and multiclass classification, with the aim of improving classification accuracy on imbalanced data. In the proposed method, the ensemble is built by splitting the training data into corresponding subsets, and different enhanced ELM algorithms, including regularized ELM and kernel ELM, serve as base learners, so that a strong learner is constructed from a group of relatively weak ones. AELME is implemented by training a randomly selected ELM classifier on a subset chosen by random re-sampling; the labels of unseen data are then predicted by a weighted voting scheme. AELME is validated through classification experiments on real-world benchmark datasets.
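The scheme the abstract describes can be sketched roughly as follows: an AdaBoost-style loop whose base learners are minimal regularized ELMs, each trained on a subset drawn by weighted random re-sampling, with unseen labels predicted by weighted voting. This is an illustrative sketch, not the authors' implementation; all names, the hidden-layer size, the regularization constant, and the use of a tanh activation are assumptions.

```python
import numpy as np

class ELM:
    """Minimal regularized ELM: random hidden layer, output
    weights solved in closed form by ridge least squares."""
    def __init__(self, n_hidden=30, reg=1e-3, rng=None):
        self.n_hidden = n_hidden
        self.reg = reg
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Random (untrained) input weights and biases
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Regularized least squares for the output weights beta
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.sign(self._hidden(X) @ self.beta)

def adaboost_elm(X, y, n_rounds=10, rng=None):
    """AdaBoost-style ensemble of ELMs; y must be in {-1, +1}.
    Each round trains on a subset drawn by random re-sampling
    from the current weight distribution."""
    rng = rng or np.random.default_rng(1)
    n = len(y)
    w = np.full(n, 1.0 / n)          # sample weight distribution
    learners, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)       # weighted re-sampling
        clf = ELM(rng=rng).fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # learner's vote weight
        w = w * np.exp(-alpha * y * pred)      # re-weight samples
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)

    def predict(Xq):
        # Weighted vote over all base learners
        scores = sum(a * c.predict(Xq) for a, c in zip(alphas, learners))
        return np.sign(scores)
    return predict
```

Re-sampling (rather than passing sample weights into the least-squares solve) keeps each base learner's training routine unchanged, which is what lets heterogeneous ELM variants be dropped in as base learners.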
