Reinforcement Learning with Reward Shaping and Mixed Resolution Function Approximation
Abstract
A crucial trade-off arises in the design process when function approximation is used in reinforcement learning. Ideally, the chosen representation should allow as close an approximation of the value function as possible. However, the more expressive the representation, the more training data is needed, because the space of candidate hypotheses is larger. A less expressive representation has a smaller hypothesis space, so a good candidate can be found faster. The core idea of this paper is mixed resolution function approximation: a less expressive function approximator provides useful guidance during learning, while a more expressive function approximator produces a final result of high quality. A major question is how to combine the two representations. Two approaches are proposed and evaluated empirically.
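One natural way to combine the two resolutions — sketched below as an illustration, not as the paper's specific method — is to learn a coarse value function quickly and then use it as a potential function for reward shaping while training a fine-resolution learner. The 1-D chain task, bin counts, and learning rates here are assumptions chosen for a minimal runnable example.

```python
import random

random.seed(0)  # reproducibility of this sketch

N = 12          # fine-resolution states in a 1-D chain, goal at the right end
COARSE = 4      # coarse bins: the less expressive representation
GAMMA = 0.95
ALPHA = 0.5

def coarse_bin(s):
    return s * COARSE // N

def step(s, a):
    """Take action a (-1 left, +1 right); reward 1 on reaching the goal."""
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

# Phase 1: quickly learn a coarse state-value table with TD(0)
# under a random policy.
V_coarse = [0.0] * COARSE
for _ in range(500):
    s, done = 0, False
    while not done:
        s2, r, done = step(s, random.choice([-1, 1]))
        target = r + (0.0 if done else GAMMA * V_coarse[coarse_bin(s2)])
        V_coarse[coarse_bin(s)] += ALPHA * (target - V_coarse[coarse_bin(s)])
        s = s2

def phi(s):
    """Potential function: the coarse value estimate of s's bin."""
    return V_coarse[coarse_bin(s)]

# Phase 2: fine-resolution Q-learning with potential-based shaping
# F(s, s') = GAMMA * phi(s') - phi(s), which preserves the optimal policy.
Q = [[0.0, 0.0] for _ in range(N)]  # Q[s][0] = left, Q[s][1] = right
for _ in range(1000):
    s, done = 0, False
    while not done:
        if random.random() < 0.1:                    # epsilon-greedy
            ai = random.randrange(2)
        else:
            ai = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, -1 if ai == 0 else 1)
        shaped = r + GAMMA * (0.0 if done else phi(s2)) - phi(s)
        target = shaped + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][ai] += ALPHA * (target - Q[s][ai])
        s = s2
```

After training, the greedy policy over `Q` moves toward the goal: the coarse estimate's guidance is transferred through the shaping term without changing which policy is optimal, while the fine table retains the full resolution of the final result.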