ROBOT LIFE-LONG TASK LEARNING FROM HUMAN DEMONSTRATIONS: A BAYESIAN APPROACH by Nathan Koenig Ph.D. Dissertation May 2013 Guidance Committee Maja Matarić (Chairperson) Gaurav Sukhatme Rahul Jain (External Member)
Object Description
Title | Robot life-long task learning from human demonstrations: a Bayesian approach |
Author | Koenig, Nathan |
Author email | natekoenig@gmail.com
Degree | Doctor of Philosophy |
Document type | Dissertation |
Degree program | Computer Science (Robotics and Automation) |
School | Viterbi School of Engineering |
Date defended/completed | 2012-09-11 |
Date submitted | 2013-02-26 |
Date approved | 2013-02-26 |
Restricted until | 2013-02-26 |
Date published | 2013-02-26 |
Advisor (committee chair) | Mataric, Maja J. |
Advisor (committee member) | Sukhatme, Gaurav S.; Jain, Rahul
Abstract | Programming a robot to act intelligently is a challenging endeavor that is beyond the skill level of most people. Trained roboticists generally program robots for a single purpose. Enabling robots to be programmed by non-experts and to perform multiple tasks are both open challenges in robotics. The contributions of this work include a framework that allows a robot to learn tasks from demonstrations over the course of its functional lifetime, a task representation that uses Bayesian decision networks, and a method to transfer knowledge between similar tasks. The demonstration framework allows non-experts to demonstrate tasks to the robot in an intuitive manner. ❧ In this work, tasks are complex, time-extended decision processes that make use of a set of predefined basis behaviors for actuator control. Demonstrations from an instructor provide the information the robot needs to learn a control policy. An instructor guides the robot through a demonstration using a graphical interface that displays information from the robot and provides an intuitive action-object pairing mechanism for issuing commands to the robot. ❧ Each task is represented by an influence diagram, a generalization of a Bayesian network. The networks are human-readable, compact, and simple to refine. They are not subject to exponential growth in states or branches, and they can be combined hierarchically, allowing for complex task models. Data from task demonstrations are used to learn the structure and utility functions of an influence diagram. A score-based learning algorithm searches through potential networks to find an optimal structure. ❧ Both the means by which demonstrations are provided to the robot and the learned tasks are validated. Different communication modalities and environmental factors are analyzed in a set of user studies. The studies feature both engineer and non-engineer users instructing the Willow Garage PR2 on four tasks: Towers of Hanoi, box sorting, cooking risotto, and table setting. The results validate that the approach enables the robot to learn complex tasks from a variety of teachers, refine those tasks during on-line performance, complete them successfully in different environments, and transfer knowledge from one task to another.
Keyword | robotics; life-long learning; influence diagrams; bayesian networks; teaching; learning from demonstration |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m |
Contributing entity | University of Southern California |
Rights | Koenig, Nathan |
Physical access | The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given. |
Repository name | University of Southern California Digital Library |
Repository address | USC Digital Library, University of Southern California, University Park Campus MC 7002, 106 University Village, Los Angeles, California 90089-7002, USA |
Repository email | cisadmin@lib.usc.edu |
Archival file | uscthesesreloadpub_Volume7/etd-KoenigNath-1451.pdf |
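The abstract above describes learning the structure and utility functions of an influence diagram from demonstration data via a score-based search over candidate networks. The Python sketch below illustrates only the general idea of score-based structure learning on fully observed discrete data; the BIC score, the fixed-ordering (K2-style) greedy search, and all variable names are illustrative assumptions and are not drawn from the dissertation's actual algorithm.

# Minimal, self-contained sketch of score-based structure learning for a discrete
# network, in the spirit of the learning step described in the abstract above.
# NOTE: the BIC score, the fixed-ordering (K2-style) greedy search, and all variable
# names below are illustrative assumptions, not the dissertation's actual method.

import itertools
import math
import random
from collections import Counter


def bic_family_score(data, child, parents, arity):
    """BIC score of one node given a candidate parent set (fully observed discrete data)."""
    n = len(data)
    counts = Counter()         # (parent assignment, child value) -> count
    parent_counts = Counter()  # parent assignment -> count
    for row in data:
        pv = tuple(row[p] for p in parents)
        counts[(pv, row[child])] += 1
        parent_counts[pv] += 1
    loglik = sum(c * math.log(c / parent_counts[pv]) for (pv, _), c in counts.items())
    q = 1
    for p in parents:
        q *= arity[p]                       # number of parent configurations
    num_params = q * (arity[child] - 1)     # free parameters in the child's CPT
    return loglik - 0.5 * num_params * math.log(n)


def learn_structure_given_order(data, order, arity, max_parents=2):
    """For each node, greedily pick the best-scoring parent subset drawn only from
    variables earlier in the ordering; this guarantees the result is acyclic."""
    parents_of = {}
    for i, child in enumerate(order):
        candidates = order[:i]
        best_parents = ()
        best_score = bic_family_score(data, child, (), arity)
        for k in range(1, min(max_parents, len(candidates)) + 1):
            for subset in itertools.combinations(candidates, k):
                score = bic_family_score(data, child, subset, arity)
                if score > best_score:
                    best_parents, best_score = subset, score
        parents_of[child] = best_parents
    return parents_of


if __name__ == "__main__":
    random.seed(0)
    # Toy "demonstration log": the (hypothetical) world state strongly predicts the
    # commanded action, so the search should add world_state as a parent of action.
    data = []
    for _ in range(500):
        s = random.randint(0, 1)
        a = s if random.random() < 0.9 else 1 - s
        data.append({"world_state": s, "action": a})
    arity = {"world_state": 2, "action": 2}
    print(learn_structure_given_order(data, ["world_state", "action"], arity))
    # Expected: {'world_state': (), 'action': ('world_state',)}

Fixing a variable ordering sidesteps cycle detection in this toy setting; a full structure search of the kind the abstract implies would need acyclicity checks or a search over orderings, and an influence diagram would additionally distinguish decision and utility nodes.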