
An Integrated Framework for Robust Human-Robot Interaction

Author(s): Mohan Sridharan (Texas Tech University, USA)
Copyright: 2014
Pages: 21
Source title: Robotics: Concepts, Methodologies, Tools, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-4666-4607-0.ch060


Abstract

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are increasingly deployed to interact with humans in our homes and offices, robots need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants and merge this feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and evaluated on mobile robots and in simulated domains, using vision as the primary source of information in conjunction with range data and simple verbal inputs. Furthermore, a strategy is outlined for integrating these components to achieve robust human-robot interaction in real-world application domains.
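
For readers unfamiliar with POMDP-based planning, the sketch below illustrates the standard Bayesian belief update that such planners rely on to tailor sensing and processing under uncertainty. It is an illustrative example only, not code from the chapter; the function name, matrix layout, and toy room-search scenario are assumptions introduced here.

# Illustrative sketch (not from the chapter): the basic POMDP belief update
# b'(s') ∝ P(o | s') * sum_s P(s' | s, a) * b(s).
# All names (belief_update, T, O) and the toy example are hypothetical.
import numpy as np

def belief_update(b, a, o, T, O):
    """Update a belief over hidden states after taking action a and observing o.

    b : (S,)        current belief over states
    a : int         action index
    o : int         observation index
    T : (A, S, S)   transition model, T[a, s, s'] = P(s' | s, a)
    O : (Obs, S)    observation model, O[o, s'] = P(o | s')
    """
    predicted = T[a].T @ b        # prediction step: sum_s T[a, s, s'] * b[s]
    updated = O[o] * predicted    # correction step: weight by observation likelihood
    total = updated.sum()
    if total == 0.0:
        raise ValueError("Observation has zero probability under the model")
    return updated / total        # renormalize to obtain the new belief

# Tiny example: a robot unsure whether a target object is in room 0 or room 1.
T = np.array([[[0.9, 0.1], [0.1, 0.9]]])   # one "look" action; the world is mostly static
O = np.array([[0.8, 0.3], [0.2, 0.7]])     # noisy detector: P(observation | state)
b = np.array([0.5, 0.5])                   # uniform prior belief
print(belief_update(b, a=0, o=0, T=T, O=O))  # belief shifts toward room 0

In a hierarchical planner of the kind the chapter describes, updates like this would be performed at each level of the hierarchy, with the belief determining which sensing and processing actions are worth executing next.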
