A Theoretical Framework for Parallel Implementation of Deep Higher Order Neural Networks
Abstract
This chapter proposes a theoretical framework for the parallel implementation of Deep Higher Order Neural Networks (HONNs). First, we develop a new partitioning approach for mapping a HONN onto individual computers within a master-slave distributed system (a local area network). This allows a network of computers, rather than a single machine, to train the HONN, drastically increasing its learning speed: all of the computers run parts of the HONN simultaneously (parallel implementation). Next, we develop a new learning algorithm suited to HONN training in this distributed environment. Finally, we propose improvements to the generalisation ability of the new learning algorithm in the distributed setting. A theoretical analysis of the proposal is conducted to verify the soundness of the new approach; experiments to test the new algorithm are left for future work.
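The master-slave scheme described above can be illustrated with a minimal data-parallel sketch: each "slave" computes the error gradient for its own data partition, and the "master" averages the gradients and updates the shared weights. The second-order unit, shard sizes, learning rate, and sequential simulation of the slaves are illustrative assumptions for this sketch, not the chapter's actual partitioning or learning algorithm.

```python
import math
import random

def honn_forward(w, x):
    # A single second-order HONN unit: weighted sum over pairwise input products,
    # passed through a sigmoid.
    s = sum(w[(i, j)] * x[i] * x[j] for (i, j) in w)
    return 1.0 / (1.0 + math.exp(-s))

def shard_gradient(w, shard):
    # What one slave computes: the squared-error gradient over its data shard.
    g = {k: 0.0 for k in w}
    for x, t in shard:
        y = honn_forward(w, x)
        d = (y - t) * y * (1.0 - y)  # dE/ds for squared error + sigmoid
        for (i, j) in w:
            g[(i, j)] += d * x[i] * x[j]
    return g

def master_step(w, shards, lr=0.5):
    # The master's role: gather shard gradients (on a real LAN each shard runs
    # on a separate slave machine in parallel), average them, update weights.
    grads = [shard_gradient(w, s) for s in shards]
    for k in w:
        w[k] -= lr * sum(g[k] for g in grads) / len(shards)
    return w

random.seed(0)
n = 3  # inputs; x[0] = 1.0 acts as a bias term
w = {(i, j): random.uniform(-0.1, 0.1) for i in range(n) for j in range(i, n)}
# Toy two-class data: class 1 has positive x[1], class 0 has negative x[1].
data = [([1.0, random.random(), random.random()], 1.0) for _ in range(8)] + \
       [([1.0, -random.random(), random.random()], 0.0) for _ in range(8)]
shards = [data[:8], data[8:]]  # one shard per slave machine
for _ in range(200):
    w = master_step(w, shards)
```

Because gradient averaging over shards is mathematically equivalent to full-batch gradient descent, this partitioning changes where the work happens, not what is learned, which is why the speedup from adding machines comes without altering the update rule itself.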