Deep Learning Theory and Software
Abstract
Over the past decade, deep learning has achieved significant breakthroughs. Beyond the emergence of convolution, its most important ingredient is the self-learning capability of deep neural networks. Through self-learning, the adaptive weights of kernels and other built-in parameters or interconnections are modified automatically so that the error rate decreases over the course of training and the recognition rate improves. By emulating mechanisms of the brain, a network can attain accurate recognition ability after learning. One of the most important self-learning methods is back-propagation (BP). The modern BP method is, in essence, a systematic way of computing the gradient of the loss with respect to the adaptive interconnections. The core of the gradient descent method is to modify the weights in proportion to the negative of the computed gradient of the loss function, thereby reducing the error of the network response relative to the target answer. The basic assumption underlying this type of gradient-based self-learning is that the loss function is first-order differentiable.
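The update rule described above can be made concrete with a small sketch. The following Python/NumPy example (not the chapter's software; the one-hidden-layer architecture, sigmoid activations, XOR data, and learning rate are all illustrative assumptions) trains a tiny network by back-propagation: the forward pass computes the network response, the backward pass computes the gradient of the squared-error loss with respect to each weight matrix, and each weight is then moved negatively proportional to its gradient.

```python
# A minimal sketch of gradient-descent learning via back-propagation
# for a one-hidden-layer network. All specifics (hidden size, learning
# rate, toy XOR data) are illustrative assumptions, not from the chapter.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR task: inputs X, target outputs Y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Adaptive interconnections (weights), randomly initialised.
W1 = rng.normal(size=(2, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output
lr = 0.5                       # learning rate

def sigmoid(z):
    # First-order differentiable activation: sigma'(z) = sigma(z)(1 - sigma(z)).
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: compute the network response.
    H = sigmoid(X @ W1)                  # hidden activations
    P = sigmoid(H @ W2)                  # output
    loss = 0.5 * np.mean((P - Y) ** 2)   # squared-error loss

    # Backward pass: gradient of the loss w.r.t. each weight matrix.
    dP = (P - Y) * P * (1 - P) / len(X)  # delta at the output layer
    dH = (dP @ W2.T) * H * (1 - H)       # delta back-propagated to the hidden layer
    gW2 = H.T @ dP
    gW1 = X.T @ dH

    # Modify weights negatively proportional to the gradient, reducing
    # the error of the network response relative to the target answer.
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"final loss: {loss:.4f}")  # should approach 0 on this toy task
```

The two lines `W2 -= lr * gW2` and `W1 -= lr * gW1` are the gradient descent step itself; everything above them is BP's systematic bookkeeping for obtaining those gradients by the chain rule, which is only valid because every operation in the network is first-order differentiable.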