Moral Emotions for Autonomous Agents
Abstract
In this chapter we raise some of the moral issues involved in the current development of robotic autonomous agents. Starting from the connection between autonomy and responsibility, we distinguish two sorts of problems: those having to do with guaranteeing that the behavior of the artificial cognitive system will fall within the area of the permissible, and those having to do with endowing such systems with whatever abilities are required for engaging in moral interaction. Only in the second case can we speak of full-blown autonomy, or moral autonomy. We illustrate the first type of case with Arkin’s proposal of a hybrid architecture for the control of military robots. As for the second kind of case, that of full-blown autonomy, we argue that a motivational component is needed to ground the self-orientation and the pattern of appraisal required, and we outline how such a motivational component might give rise to interaction in terms of moral emotions. We end by suggesting limits, from this standpoint, to a straightforward analogy between natural and artificial cognitive systems.